Nowhere is the AI debate more polarised than between the evangelists who see technology as humanity’s next great leap and the sceptics who warn of its profound limitations. Two recent pieces — Sam Altman’s characteristically bullish blog and Apple’s quietly devastating research paper, “The Illusion of Thinking” — offer a fascinating window into this divide. As we stand at the threshold of a new technological era, it’s worth asking: What should we truly fear, and what is mere hype? And for a country like India, what path does wisdom suggest?
Apple’s “The Illusion of Thinking”, by contrast, lands like a bucket of cold water on AI enthusiasm. Apple’s researchers conducted a series of controlled experiments, pitting state-of-the-art large language models (LLMs) against classic logic puzzles. The results deflated much of the excitement around Artificial General Intelligence (AGI). While these models impressed at low and medium complexity, their performance collapsed as the puzzles grew harder. The implication is that AI is not truly “thinking” but merely extending patterns, and that significant gaps remain whenever a problem demands genuine reasoning. Apple’s work is a much-needed correction to the narrative that we are on the verge of achieving AGI.
So, who is right? The answer, as is often the case, lies somewhere in between. Altman’s optimism is not entirely misplaced: AI has already transformed industries and will continue to do so, especially in domains where pattern recognition and data synthesis matter most. But Apple’s critique exposes a fundamental flaw in the current trajectory: conflating statistical pattern-matching with genuine understanding or reasoning. There is a world of difference between a machine that can predict the next word in a sentence and one that can reason its way through the Tower of Hanoi or make sense of a complex, real-world dilemma.
What, then, should the world be afraid of? The real danger is not that AI will suddenly become superintelligent and take over, but that we will place too much trust in systems whose limitations are poorly understood. Imagine deploying these models in healthcare, infrastructure, or governance, only to discover that their apparent intelligence is nothing of the sort. The risk is not Skynet, but systemic failure born of misplaced faith. Billions could be wasted chasing the chimera of AGI while urgent, solvable problems are neglected. Some waste is inherent to innovation, but the sheer scale of resources being poured into AI dwarfs past examples and therefore demands a different sort of caution.
Yet, there are also fears we can safely discard. The existential risk posed by current AI models is, for now, more science fiction than science. These systems are powerful, but they are not autonomous agents plotting humanity’s downfall. They are tools — impressive, but fundamentally limited. The real threat today is not malicious machines, but human hubris.
Are there any lessons for India to draw from this? The country stands to gain enormously from AI, particularly in areas such as language translation, agriculture, and public service delivery. The strengths of today’s AI — pattern recognition, automation, and data analysis — can be applied to real-world, local challenges, which is largely what India has been trying to do. But India must resist the temptation to tag along with the AGI hype. Instead, it should invest in human-in-the-loop systems, where AI aids rather than replaces human judgement, especially in domains where discretion at the point of contact with people is high and the stakes are higher. For now, human judgement remains ahead of AI, and India should keep relying on it.
Control theory imparts a deeper lesson here. True control — over machines, systems, or societies — requires the ability to adapt, to reason, and to respond dynamically to feedback. Current AI models, for all their power, lack this flexibility: they cannot adjust their approach when complexity exceeds their training, and more data and more computing power do not solve the problem. In this sense, the illusion of AI control is as dangerous as the illusion of AI thinking.
The future will be shaped neither by those with blind faith in AI, nor by those who see only its limits, but by those who can navigate the space between. For India, and for the world, the challenge is to harness the real strengths of AI while remaining clear-eyed about its weaknesses. The true danger is not that machines will outthink us, but that we will stop thinking for ourselves. A recent brain-scan study of ChatGPT users by the MIT Media Lab made a related point: AI may not be making us more productive, and could instead be harming us cognitively. That is what we need to worry about, at least for now.
The writer is a research analyst in the High-Technology Geopolitics Programme at The Takshashila Institution