Appeared on Substack on December 21, 2025
https://anandanandalingam649613.substack.com/p/us-is-losing-and-china-is-gaining?r=o7w77
While the current manifestation of AI is truly transformative, it comes with a myriad of societal challenges that need to be acknowledged and dealt with. Otherwise, we will be letting the proverbial genie out of the bottle and could end up cleaning up dystopian messes. Already we see AI being used to propagate “deep fakes” of both people’s voices and visual likenesses that, at a minimum, compromise autonomy and, at worst, put those impacted in financial and personal jeopardy. Deep fakes are now part and parcel of political campaigns, especially in the unregulated social media and blogosphere. AI-generated fake videos showing candidates saying or doing things that would disgust and deter potential voters are becoming pervasive. Deep fakes are also permeating the pornography industry, and by the time those being “faked” notice, their reputations could already be in the toilet.
Worse, AI is being used as a therapy tool for children and young adults, and with no human intervention this could lead to serious outcomes, including suicide. A recent study by Schoene and Canca of Northeastern University found that large language models (LLMs) such as OpenAI’s ChatGPT and Perplexity AI could output potentially harmful content leading to self-harm and suicide despite safety features. These AI models are getting more and more sophisticated, yet users are increasingly able to “jailbreak” them, i.e., circumvent an LLM’s safeguards and manipulate it into generating content it would otherwise withhold.
Beyond the potential to do serious damage to different parts of society, companies implementing AI might well have to violate several tenets of civil society that most countries have adhered to for a century or more. Most people value privacy and reject the use of personal information for commercial purposes without permission. Most people demand explanations for decisions made by others that would or could impact their lives, even for the better. For example, even if a doctor prescribes a medicine or a procedure that would improve someone’s life, that person almost always wants to know why and how! AI models need to respect privacy and have explainability built into their implementation. Most civil societies would also demand that AI follow the ethics and values of that society.
The only body that can set standards and regulations for artificial intelligence implementations is the government, at the federal, state, and local levels. When I asked ChatGPT “what is the overall role of the government?”, it replied “The overall role of government is to provide collective goods, set rules for society, and protect the public interest where markets or individuals alone cannot”, and followed up with “Government exists to do what individuals and markets cannot do well on their own—create order, provide shared goods, manage risks, and promote long-term societal well-being.” Clearly, ChatGPT knows that the government should safeguard citizens from harm! We desperately need coordinated global AI regulations.
So what are countries doing?
A recent law co-authored by Senators Amy Klobuchar and Ted Cruz, the Take It Down Act, allows victims to have intimate images published without their consent, both real and AI-created deepfakes, removed. Of course, the “victim” needs to first be able to spot this incursion and also have the technological wherewithal to get the images removed. The law does not really deal with the privacy and security issues that are endemic to AI. Several states have proposed, and many have passed, laws restricting how AI can be deployed and used. Tennessee, for example, passed the ELVIS Act, which gives artists control over AI-generated digital replicas of their voices and likenesses. Over 25 states have laws targeting deceptive political deepfakes. Some states, like Utah, require companies to let people know when they are interacting with an AI chatbot.
President Trump wants the federal government to be the only one enacting laws regarding artificial intelligence. In January 2025, Trump revoked an executive order aimed at ensuring AI safety. He has positioned his administration as pro-industry, suggesting in a social media post that it would add a clause to a federal bill to prevent states from regulating AI. Given how close he and his administration are to the leaders of the tech companies, there is widespread skepticism that any federal law passed by the United States government would have any teeth. First, eight of the largest tech, AI, and social media companies spent a combined $36 million on federal lobbying during the first half of 2025, an average of roughly $320,000 per day that Congress has been in session. Second, the White House AI Czar, David Sacks, is deeply embedded in the crypto and AI industries. He has spent his time trying to eliminate or undermine state regulation of AI, calling such rules “woke laws passed by Blue states”. While the U.S. under Trump is trying to stymie state regulations that have been proposed or implemented, China, ironically, seems to be leading the charge for a global regulatory regime for AI governance.
In October, at a meeting of the Asia-Pacific Economic Cooperation forum, Chinese President Xi Jinping proposed the creation of the World Artificial Intelligence Cooperation Organization (WAICO), which would bring nations together as a step towards creating a global governance system for AI. China was among the first nations to introduce AI-specific regulations, beginning in 2022, and has wide-ranging rules on harmful content, privacy, data security, and other potentially detrimental outcomes of AI. Developers of public-facing AI-powered services must let Chinese regulators test their systems ahead of deployment. The result is that models such as those developed by the Hangzhou-based company DeepSeek are among the most regulated in the world. Of course, several critics of China claim, with some justification, that the Chinese government is concerned that AI in the hands of ordinary citizens could pose a threat to the primacy of the state.
India has also been concerned about the potential risks of AI but wants its IT sector to become more of a player in developing it. As recently as November 2025, India’s Ministry of Electronics & IT proposed guidelines for “democratizing” the benefits of AI. Under the heading of “Seven Sutras”, it articulated the principles of Trust; People First; Innovation over Restraint; Fairness & Equity; Accountability; Understandable by Design; and Safety, Resilience & Sustainability, along with “Six Pillars” for the successful deployment of AI: Infrastructure; Capacity Building; Policy & Regulation; Risk Mitigation; Accountability; and Institutions. The institutional architecture would include an AI Governance Group (AIGG), supported by a Technology & Policy Expert Committee (TPEC), and an AI Safety Institute (AISI) for testing, standards, and safety R&D. There was also a commitment to integrate AI with digital platforms and to modernize legal frameworks to deal with the potential negative impacts of AI.
Long before China, India, or the U.S. became concerned about the potential negative impacts of AI, Europe was at the forefront of thinking about and discussing them. Europe has built AI regulation incrementally, starting from ethical principles and data protection law (for example, the GDPR, the General Data Protection Regulation, which has now become a standard worldwide), evolving through coordinated strategic plans, and culminating in the AI Act, the first comprehensive, risk-based legal framework for AI in the world. This framework continues to be refined and implemented across sectors and member states. The European Union’s approach has been to classify AI systems by risk level, with different rules on transparency and oversight at each tier, and obligations, which came into force in August 2025, aimed at the most powerful AI systems. There is, of course, no supranational enforcement body through which the European Union can enforce these rules and regulations. The UK, for example, after agreeing to the EU AI Act, has recently shelved plans to introduce comprehensive AI legislation until next year, at the earliest.
Many hurdles lie in the way of creating a binding intergovernmental agreement on AI, but some advocates say it is possible, comparing the technology to other risky but useful endeavors for which agreements exist, such as nuclear power and aviation. China’s proposed WAICO would be a way for countries to coordinate AI governance rules while “fully respecting the differences in national policies and practices”. China has proposed that the body’s headquarters be in Shanghai; this would not be acceptable to most influential countries in the world, certainly not to the U.S. and India, and perhaps not even to most European countries. Given the Trump administration’s hostility toward most countries in the world (witness the recent recalling of ambassadors to several countries in the Global South), the Europeans not all pulling in the same direction, and India just getting started, China could, by default, become much more influential in AI deployment and regulation.