Artificial Intelligence has touched various facets of human life. Today, it has gone beyond concerns about job losses and entered a new realm. The rising tide of generative AI has sparked major concerns about its potential global impact on democratic processes such as elections.

The Centre for the Study of Democratic Institutions (CSDI), a Canadian think tank, has published a report that dives deep into the risks posed by generative AI (Gen AI) to elections and other democratic processes. The report, titled “Harmful Hallucinations: Generative AI and Election,” focuses on the impact of AI on the integrity of elections. Authored by Chris Tenove, Nishtha Gupta, Netheena Mathews, and others, it sheds light on both the risks and the opportunities of Gen AI.

2024 has been earmarked as the year of “deepfake elections” owing to countless incidents in the past few months. Several Gen AI technologies have been put to use in election campaigns across the US, India, and the European Union. According to the study, although Gen AI is not entirely new, its easy accessibility and rapid improvements in AI tools have significantly lowered the barriers to creating deceptive content such as AI-generated misinformation, manipulated media, and deepfakes.

“Generative AI technologies lower the cost of producing deceptive content, and in doing so, they amplify existing threats to democracy,” said Chris Tenove, assistant director of CSDI. AI has amplified existing issues rather than created entirely new ones, he observed.

The CSDI has been studying the impact of various technologies on democratic institutions for several years. With 2024 seeing polls in the US, India, Brazil, and other nations, the team of researchers wanted to study the potentially harmful uses of Gen AI in elections. "There is a lot of hype around generative AI and a lot of doomsaying around the potential impacts it might have on politics and elections. 
So, we wanted to assess the real evidence that we could find about the types of harmful uses that might be in play, get a sense of what impacts they would have, and identify solutions to those threats," Tenove told indianexpress.com.

According to the report, the risks posed by Gen AI fall into three primary areas: deception, harassment, and pollution of information environments. The report illustrates these risks with numerous real-world examples showing how Gen AI can be used to mislead voters, harass political candidates, or overwhelm people with low-quality, inaccurate content.

Deception

This is one of the most alarming aspects of Gen AI's ability to sabotage the integrity of elections. Gen AI can create highly realistic deepfakes that are convincing enough to mislead or sway voters. Earlier this year, a deepfake of Joe Biden went viral in New Hampshire. The audio deepfake, which spread via robocalls, featured the President urging people to save their votes for the general election rather than participate in the primaries. According to the report, the tactic was deployed to suppress voter turnout.

Closer home, in India, ahead of the General Elections, AI-generated videos of Bollywood actors criticising PM Modi and advocating for his political opponents surfaced. By the time these videos were flagged as deepfakes, they had been widely shared and had misled thousands.

Harassment

Gen AI's capacity to amplify targeted harassment of political candidates was another key area underscored in the report. Mathews cited an incident ahead of the UK elections in which over 400 doctored images of women from across political parties were featured on a fake pornography website. In India, there have been "reports of AI experts or AI content generation companies receiving numerous requests to create explicit deepfakes or superimposed images of politicians," according to Mathews. 
This trend has raised serious concerns about the ethical boundaries of AI. Mathews said that the emotional and psychological impact of such harassment can be far-reaching, regardless of whether the target is an active political figure or not.

Polluting the information environment

According to the report, the most pervasive harm of Gen AI is its ability to flood the information ecosystem with misleading and factually incorrect content. In some cases, AI chatbots programmed to provide election-related information were found to produce incorrect results. Ahead of the European Union elections of 2024, Microsoft’s Copilot reportedly gave inaccurate election data one-third of the time. The sheer volume of AI-generated content, be it intentional misinformation or accidental error, can make it difficult for people to discern fact from fiction.

"AI has complicated the information environment and political discourse by making it harder to access reliable information quickly. We now see cases where people dismiss true images or information as AI-generated. On the other hand, genuine offenders can deny offensive content about them, claiming it’s a deepfake or AI-generated,” said Gupta.

Even though the negative impacts dominate news feeds, Gen AI can also have positive effects on elections and other democratic processes. The authors cited Bhashini, developed under the Indian government's National Language Translation Mission, which allowed Prime Minister Narendra Modi to reach citizens in different languages. The report also points to other beneficial uses: AI systems that moderate online debates to encourage fruitful discussion, tools that summarise policy documents, and real-time language translation for political speeches.

Regulatory approaches

When asked if countries need to rush into framing regulations around Gen AI and elections, Tenove cautioned against hurrying to create new laws specifically for AI. "I would be hesitant to rush to develop rules for two reasons. 
One, we know that regulation of election communication is a way that governments, parties, and individuals in power try to maintain power. And so we want to have regulations that are really conscious of freedom of expression and fair participation in elections." Tenove added that the complexity of the issue makes it difficult to quickly develop effective regulations. Instead, he suggested that "governments should commit to nuanced, perhaps bold policies that are attentive to the existing frameworks to protect elections."

Mathews, on the other hand, emphasised the need for forward-looking legislation, given how rapidly digital technologies evolve. She also highlighted the importance of enforcing existing rules, citing recent issues in India where "the existing rules weren't being enforced in a way that they should have been."

Gupta agreed: "I don't think rushing into AI-specific legislation is going to do much, because the core issues (of misinformation) predate AI. AI is just, you can say, the latest update, a latest software update, to this long-existing problem."

The report calls for a balanced approach to regulating Gen AI in elections. The authors also warn against rushing to enact stringent laws without fully understanding their implications. “While we need to act quickly to address the risks posed by GenAI, we also need to ensure that regulations do not stifle innovation or infringe on freedom of expression,” Gupta explained.

To mitigate the challenges posed by Gen AI, the study suggests a multi-stakeholder approach, with collaboration between AI service providers, journalists, and governments. The authors also highlighted transparency and accountability as crucial steps.