Until recently, the AI landscape seemed dominated by American companies such as OpenAI, Google, Meta and Anthropic, all headquartered in California. In January, however, Chinese company DeepSeek released a large language model that matched its American counterparts in capability at a fraction of the cost and computing resources. For US policymakers, the writing on the wall was clear: China would be their major challenger in the AI race.

On February 25, the White House Office of Science and Technology Policy (OSTP) invited public suggestions to inform an AI Action Plan aimed at "securing and advancing American AI dominance". The responses to this call, particularly from tech companies, reflect how AI has become the new battleground for geopolitics. They also raise concerns about the safety risks associated with the technology going forward. There are ramifications for India too, as it aggressively pushes AI development through the IndiaAI Mission, which has been allocated Rs 10,300 crore over five years to strengthen the country's AI capabilities.

How have companies responded?

Big tech companies have called for overhauls of the US national security and innovation framework. In its letter, OpenAI framed the competition as one not with Chinese tech companies but with the Chinese Communist Party (CCP), which has declared its ambition to make China a global leader in AI by 2030. OpenAI said the AI Action Plan could ensure that "American-led AI built on democratic principles continues to prevail over Chinese Communist Party-built autocratic, authoritarian AI".

American companies have long pointed out that China's powerful CCP-led government can exercise control over private companies if it wants to. This argument has also figured in recent calls for a TikTok ban in the US. Citing "the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm", OpenAI called for fewer restrictions on the use of copyrighted material.

AI models essentially learn patterns from vast amounts of data to generate their responses. Much of this data is scraped from the internet, raising questions of copyright infringement. The New York Times has sued OpenAI and Microsoft for copyright violation in the US, and news agency Asian News International (ANI) has sued OpenAI on the same grounds in India.

OpenAI said it aligned "with the core objectives of copyright and the fair use doctrine", claiming its models are "using existing works to create something wholly new and different without eroding the commercial value of those existing works". It added that with DeepSeek, "America's lead on frontier AI is far from guaranteed". "Given concerted state support for critical industries and infrastructure projects, there's little doubt that the PRC's AI developers will enjoy unfettered access to data—including copyrighted data—that will improve their models. If the PRC's developers have unfettered access to data and American companies are left without fair use access to data, the race for AI is effectively over," OpenAI argued.

Google also highlighted access to data in its submission: "Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances."

Google and OpenAI have also hinted at an overarching federal framework to govern AI development, rather than a patchwork of state-level rules. States such as California have recently enacted legislation to regulate the technology and set safety standards.
And what have companies said about the safety aspect?

While the risks associated with AI have been flagged in public conversations, they found little mention in the tech companies' submissions. The word "safety" appears just once in the submissions of OpenAI and Google, and not at all in Meta's, according to the Platformer newsletter by tech writer Casey Newton.

Google said AI policymaking has "paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership". However, it said this was "beginning to shift under the new (Trump) administration".

Soon after taking office, US President Donald Trump signed an executive order rolling back his predecessor Joe Biden's guardrails for the technology. Trump said this was necessary to "develop AI systems that are free from ideological bias or engineered social agendas". Then, at the AI Action Summit in Paris in February, the US declined to sign an international pledge on inclusive and sustainable AI. US Vice President JD Vance said he was there to talk about "AI opportunity" rather than risk, adding that excessive regulation could "kill a transformative industry". The US has also announced plans for $100 billion of investment in data centres for AI development.

What does this pro-innovation approach mean?

Essentially, the conversation is shifting from risk to growth. Balancing the two has been at the heart of discussions about how AI and its associated tools are developed. In 2023, OpenAI CEO Sam Altman testified before the US Congress about the risks of AI, which he said could go "quite wrong". That year, Tesla CEO Elon Musk backed calls for a six-month pause on the development of advanced AI systems; Musk is now aligned with the Trump administration and is pushing AI development through his company xAI.

Tech companies "are really emboldened by the Trump administration, and even issues like safety and responsible A.I. have disappeared completely from their concerns," Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies, told The New York Times. Such an approach will shape how this nascent technology ultimately develops.