Opinion Why Meta, Google and Microsoft want you to be afraid of AI
Generative AI tools are far less sophisticated than the tech world is claiming. The fear of them acquiring super-intelligence is just a distraction from the actual harms caused by the deployment of AI systems today.

A couple of months ago, Google CEO Sundar Pichai, in an interview with CBS’s 60 Minutes, admitted he didn’t “fully understand” how the company’s AI chatbot Bard works. He said the programme spoke in a foreign language that it wasn’t trained to know. The “revelation” instantly grabbed big, bold headlines, triggered numerous opinion pieces, and intensified discussions about AI.
But soon after, experts such as Margaret Mitchell, a computer scientist and former Google employee, pointed out that Bard had in fact been trained on the language in question: Bengali. All she did was look at the datasheet of PaLM, the Google large language model (LLM) that powers the chatbot, where Bengali is listed among the languages it was trained on.
Okay, @60Minutes is saying that Google’s Bard model “spoke in a foreign language it was never trained to know.” I looked into what this can mean, and it appears to be a lie. Here’s the evidence, curious what others found. 🧵 https://t.co/u3WtvbOtAM
— MMitchell (@mmitchell_ai) April 17, 2023
With her expertise, some digging and a healthy dose of scepticism, Mitchell quickly deflated Pichai’s assertion that Bard had some sort of “superpower” to spontaneously learn unexpected new skills. As it turned out, the whole episode was yet another attempt to perpetuate AI hype, which took centre stage in November last year with the launch of ChatGPT.
The forces fuelling this hype are major tech companies like Google, Meta and Microsoft. In recent months, they have left no stone unturned in convincing people that generative AI tools like Bard and ChatGPT will soon usher in an AI revolution. They claim such software will transform computer programming, healthcare, marketing, transportation, journalism and even therapy.
These tall claims haven’t fallen on deaf ears. They have, in fact, worried many researchers and entrepreneurs, who fear that generative AI tools will quickly become “superhumanly smart”, replacing the equivalent of millions of full-time jobs, taking over the world and even risking nuclear war. Such proclamations, however, ring hollow.
The technology that tech companies are positioning as radically innovative has been around for decades. AI created quite a buzz back in the 1990s, when the Deep Blue computer programme defeated chess world champion and grandmaster Garry Kasparov. It was subsequently used in humanoid robots, driverless cars and then smartphones. Software like ChatGPT, therefore, isn’t a “research breakthrough” but a product built on years-old technology, as researcher Michael Timothy Bennett argues in his recent article for The Conversation, ‘No, AI probably won’t kill us all — and there’s more to this fear campaign than meets the eye’.
Bennett adds that generative AI tools and their byproducts are far from posing an imminent existential threat. The models underpinning them are slow learners that need vast amounts of data to do what humans can with only a few examples. Essentially, the new AI products on the market are just refined versions of technology we have been using for years.
So why are tech companies elevating AI hype? Why are they labelling everything as “artificial intelligence” now? It’s an oversimplification but the reason is profit.
The tech industry hasn’t witnessed a true technological breakthrough in the last two decades. Companies have been waiting for the next big thing after the rise of personal computers, the advent of the internet and then mobile phones. But under capitalism, standing still isn’t an option: businesses have to either grow or die. That is why, in recent years, tech companies have repeatedly built hype around new technologies, calling them disruptive in a bid to jack up their share prices and attract more capital from investors.
More often than not, these much-hyped products prove to be duds. Take the example of self-driving cars. The frenzy around them reached such heights that in 2014 Elon Musk said all Tesla cars would be fully autonomous by the following year. The same year, Uber announced it would replace the people who drive its cars with cars that drive themselves. Spoiler alert: neither prophecy came true.
Conversations around self-driving cars more or less fizzled out. Tesla is now facing lawsuits as its shareholders have accused Musk and his company of “overstating the effectiveness and safety of their electric vehicles’ autopilot and full self-driving technologies,” The Guardian reported in February.
Another example is the metaverse. In 2021, Facebook founder Mark Zuckerberg renamed his trillion-dollar company Meta and announced a pivot to the new technology, which instantly became the tech world’s obsession. There were claims that people would soon begin to hang out in the video-game-like world of the metaverse. Two years later, it is “now headed to the tech industry’s graveyard of failed ideas”, Business Insider wrote in a report, adding that Zuckerberg has stopped even discussing it with advertisers. Other companies, like Microsoft, that had set up metaverse divisions have mostly shut them down, firing hundreds of staff.
Other instances of such hyped innovations include cryptocurrency, augmented reality and Web3.
There is another way to look at the issue, though: through the Gartner hype cycle, a graphical representation devised by the American research and consulting firm Gartner. According to it, every technology goes through five key phases: the innovation trigger, the “peak of inflated expectations”, the “trough of disillusionment” in which interest wanes, the “slope of enlightenment” when second- and third-generation products are introduced by service providers, and the “plateau of productivity” when mainstream adoption starts to take off.
The graph reaches its zenith at the “peak of inflated expectations”. It’s the moment when, say, blockchain becomes so popular that Grimes, a Canadian musician, sells $6 million worth of digital art as NFTs. The next stage, the trough of disillusionment, is where Bitcoin is today, around 50 per cent down from its all-time high. AI is currently at the pinnacle of the Gartner hype cycle.
Regardless of how one examines the hype around AI, maintaining a healthy dose of scepticism is essential. Generative AI tools are far less sophisticated than the tech world is claiming them to be. The fear of them acquiring superintelligence is just a distraction from the actual harms caused by the deployment of AI systems today.
A recent statement released by the Distributed AI Research Institute (DAIR), founded by Timnit Gebru, listed the ongoing harms of such software: worker exploitation, data theft and the risk of power becoming concentrated in the hands of a few. It concluded that there is an urgent need to act against the “very real and very present exploitative practices of companies” building generative AI tools rather than being paranoid about an imaginary apocalyptic risk.