Opinion: There is no such thing as ‘ethical AI’. Don’t buy into corporate campaigns
Misplaced assumptions about what the technology is and what it is not only serve to reinforce the narrative that AI is ‘too complex’ for lawmakers. This benefits the few global companies that control the massive infrastructures that run AI.

In a recent article (‘Intelligence, as we don’t know it’, IE, March 2), the writer discusses developments in the field of artificial intelligence. Like many others, however, the article relies on simplistic assumptions about the technology’s revolutionary potential.
The term “artificial intelligence” needs to be unpacked by critiquing the assumptions commonly made when engaging with this field of study. “Intelligence” can mean many things, since there are numerous definitions of the word, from Howard Gardner’s theory of multiple intelligences to the three-stratum theory of cognitive ability to the Stanford-Binet intelligence scale. Merely calling a machine “intelligent”, therefore, tells us almost nothing. Meanwhile, nothing about AI is “artificial”, because the technology’s very functioning is rooted in real-world political and environmental processes. Thus, as critical tech scholar Kate Crawford explains, AI is neither artificial nor intelligent. To rely on 1950s-era definitions of AI in the 2020s would be a mistake.
Recently, the Center on Privacy and Technology at Georgetown University Law Center, Washington DC, announced that it would stop using the terms “artificial intelligence”, “AI”, and “machine learning”, because words matter. Vague terminology conceals more than it reveals, while specific terminology such as “large language models” brings clarity. Such clarity is especially valuable for law and policy on technology, which should not be framed around assumptions of “intelligence”, present or future, or around anthropomorphised machines. Framing law and policy on the assumption of a machine’s “intelligence”, without critical analysis of the underlying technology, be it autonomous cars or facial recognition systems, is likely to have grave consequences for the communities most likely to be affected by these technologies.
Misplaced assumptions about what the technology is and what it is not only serve to reinforce the narrative that “AI” is “too complex” for lawmakers. This benefits the few global companies that control the massive infrastructures used for “AI”, who can then argue for “self-regulation” (which effectively means no regulation). Popular discussion about “AI” continues to centre these companies instead of those who bear the brunt of its actual impacts on everyday life: from the poorest sections of society denied food and welfare by algorithms, as reportedly happened in Telangana, to the exploited tech workers tasked with “cleaning” datasets.
The article focuses solely on the “profit generating journey” of these companies, their share prices, their market capitalisations, and the clothes their CEOs wear, instead of educating readers about the harms these very companies are unleashing on the planet in the name of “AI”. Seeing and understanding everything in terms of “the market” and corporate incentives is one of the many unfortunate trends of this neoliberal era. More telling is the statement that legal regulation must not hamper “innovations”. At no stage is there any consideration of whether it is worthwhile for lawmakers to even allow these “innovations” to continue, given the scale of harm already done: deepfakes, disinformation, the devaluation of art, the degradation of the internet, and much more.
Meanwhile, calling for the “innovations” to stop is not easy. In 2020, some AI researchers at Google questioned the ever-widening scale of natural language processing systems, pointing out that building larger and larger AI models would not be sustainable in the long run, given the resources required and the potential to entrench systemic biases and injustices. Most importantly, they suggested that future research would be more useful if it pursued narrower, more achievable goals, instead of racing to be the first to build a hypothetical “general” AI. They were then fired by Google. The Gemini issue, mentioned but not satisfactorily examined in the article, is important for what it actually reveals: that companies refuse to acknowledge that biases are inherent to the workings of these systems, that this problem will never be fully solved, and that these refusals will be met with half-baked technological “solutions” that cause even more problems.
Unfortunately, even the arguments against “AI” have been co-opted by the industry itself, pushing legitimate voices to the margins. AI researcher and scholar Anupam Guha explains (‘An AI for the people’, IE, December 29, 2023) how companies have weaponised the real anxieties about AI (as in the case of the “letter” that made headlines last year) to distract from concrete interventions.
Effective policy must confront the uncomfortable, unspoken facts about AI. One, more data does not mean better “AI”. Two, AI and socio-political realities are intertwined. Three, there is no such thing as “ethical AI”. Four, most of today’s “AI” simply flattens and erases the complexity of the real world into statistical data. And finally, “AI” should not, and does not need to, be implemented everywhere.
Meanwhile, generative AI largely remains unprofitable for AI startups, which bank on “the future” while depending for funding on Big Tech companies such as Microsoft and Amazon, which are investing billions in data centres around the world.
OpenAI’s CEO, Sam Altman, asking for trillions of dollars and hoping for an energy breakthrough tells us something about the environmental costs of AI. The AI industry also has a huge water footprint, which the article does not mention. A Cornell study has predicted that global AI demand may account for 4.2–6.6 billion cubic metres of water withdrawal in 2027. Such excessive water consumption is especially disturbing in regions already struggling with water scarcity. These companies are often reluctant to reveal just how much of these natural resources their data centres consume. While many have pledged to be “water positive” and “carbon negative” by 2030, it might be too late by then. Rather than unfounded hype, misplaced excitement and passive acceptance of the latest “innovations” and “revolutions”, we need popular writing on technology that promotes scepticism towards corporate campaigns, along with awareness and critical inquiry, to help build a fairer and more equitable future.
The writer is a PhD scholar at NALSAR University of Law, Hyderabad