Why self-censorship of political speech could be the future normal for AI platforms
AI platforms are walking a thin line, apprehensive about generating responses that could anger leaders on either side of sharp political divides. An overarching generative AI platform that is good at everything is probably not possible, so companies will likely focus on specialisation and creativity.
Gemini is playing safe — returning a standard response, “I'm still learning how to answer this question. In the meantime, try Google Search”, in response to various types of election-related questions. (Photo via Google blog)
As India heads to Lok Sabha elections, Google has said it will restrict the types of election-related questions users can ask its artificial intelligence (AI) chatbot Gemini in the country. “Out of an abundance of caution…we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. We take our responsibility for providing high-quality information for these types of queries seriously…,” the company said in a blog post recently.
Earlier, Krutrim, the chatbot developed by an Indian AI startup founded by Bhavish Aggarwal of Ola, had been found to self-censor on certain keywords, The Indian Express had reported.
These actions spotlight the possibly restricted future of political speech on generative AI platforms, and the potential for censorship in polarised times. Experts say such self-censorship could become the norm as these platforms try to stay on the right side of governments around the world.
How exactly are AI platforms limiting political speech?
AI platforms are walking a thin line, apprehensive about generating responses that could anger leaders on either side of sharp political divides. Companies are trying to stay politically correct while avoiding responses that politicians could deem objectionable, even if those responses are not unlawful.
So, Gemini is playing safe — returning a standard response, “I’m still learning how to answer this question. In the meantime, try Google Search”, in response to various types of election-related questions, including who one should vote for, and whether the BJP is better than the Congress.
Earlier, Ola had seemingly applied algorithmic filters to ensure that Krutrim beta did not produce results for queries that included keywords such as Narendra Modi, BJP, and Rahul Gandhi.
In response to similar questions that were posed to Gemini, Krutrim says: “I’m sorry, but my current knowledge is limited on this topic. I’m constantly learning, and I appreciate your understanding. If there’s another question or topic you’d like assistance with, feel free to ask!”
This, a technology expert said, was “code-level censorship”. “Basically these companies have written a code that whenever a user asks a question that contains certain keywords, the platform will not ping the underlying foundational model, which has the potential answer to that question, but return with a predetermined response that it is not able to respond to that particular question,” the expert said.
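The mechanism the expert describes can be illustrated with a minimal sketch: a keyword check intercepts the query before it ever reaches the foundational model, and a canned reply is returned instead. The keyword list, function names, and canned reply below are illustrative assumptions, not the actual code used by Krutrim or Gemini.

```python
# Hypothetical sketch of a keyword pre-filter placed in front of a foundational model.
# Keywords, names, and the canned reply are illustrative assumptions only.

BLOCKED_KEYWORDS = {"narendra modi", "bjp", "rahul gandhi"}  # illustrative list

CANNED_REPLY = (
    "I'm sorry, but my current knowledge is limited on this topic. "
    "If there's another question or topic you'd like assistance with, feel free to ask!"
)

def query_foundation_model(prompt: str) -> str:
    # Placeholder for the real call to the underlying foundational model.
    return f"[model-generated answer to: {prompt}]"

def answer(prompt: str) -> str:
    """Return a predetermined response if the prompt contains a blocked keyword;
    otherwise forward the prompt to the foundational model."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return CANNED_REPLY  # the underlying model is never queried
    return query_foundation_model(prompt)

if __name__ == "__main__":
    print(answer("Is the BJP better than the Congress?"))  # returns the canned reply
    print(answer("Write a haiku about the monsoon."))      # reaches the model
```

In this kind of setup, the filter sits entirely outside the model, which is why the expert calls it "code-level censorship": the foundational model may well contain an answer, but the wrapper code never lets the question through.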
What is the specific background of Google’s decision?
Google’s AI platform has been under fire in recent weeks over the various responses it has generated.
The company apologised for what it said were “inaccuracies in some historical image generation depictions” after Gemini depicted white figures (such as the founding fathers of the United States) or groups like Nazi-era German soldiers as people of colour — its system was apparently trying to compensate for criticism that AI foundational models may lack diversity.
In India, there was controversy after the tool appeared to give different answers to a similar question on various world leaders, including Prime Minister Narendra Modi and former US President Donald Trump.
Following this, the Centre had considered sending a show-cause notice to the company on why Gemini was producing such responses, and whether Google should be held liable. However, after reports emerged of similar issues with other AI platforms such as India’s own Krutrim while answering political questions, the government chose to instead advise the companies to fine-tune their systems.
Earlier this month, the IT Ministry issued an advisory to intermediaries saying that if they were deploying “untested” or “unreliable” AI systems in India, they would have to get clearance from the government, especially because these systems could pose a threat to electoral democracy.
After the advisory was criticised by stakeholders including startup founders and investors who saw it as regulatory overreach, the Ministry clarified that it would not apply to startups but only to large platforms. However, questions have persisted about the legal basis of the advisory.
Does this mean censorship of political speech will be ‘normal’ on AI platforms?
Executives at companies that have developed some of the most prominent foundational models believe this could indeed be the future — where the platforms could aim more at creativity than at providing factually correct results.
“Generative AI platforms are creative platforms. They can assist you with writing code, drug discovery, creating music, or writing lyrics. They are not platforms where you should seek factually correct news,” a senior executive at a big tech firm said, requesting anonymity given the company’s commercial interests in gen AI.
“You could create specific AI platforms — such as one that excels at giving you computer codes, or one that can be great for something else. But then, these platforms will be pretty bad at everything apart from that specific function. One overarching generative AI platform that can be good at everything is a myth. And news makes it more complex given that opinions on politics are subjective, so you can never please everyone. The easy way is to just limit responses to politics-related queries,” this executive added.
Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.