This is an archive article published on December 10, 2023

EU ‘historic’ deal: What does the world’s first law on regulating AI propose?

The legislation includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies, and empowers consumers to lodge complaints against any perceived violations.

Artificial Intelligence words are seen in this illustration taken March 31, 2023. (Reuters/Dado Ruvic/File photo)

“Deal!” tweeted European Commissioner Thierry Breton just before midnight Friday (December 8) in Brussels. “The EU becomes the very first continent to set clear rules for the use of AI,” Breton declared on social media after officials reached a provisional deal on the world’s first set of comprehensive laws to regulate the use of artificial intelligence (AI), following a marathon 37-hour negotiation between the European Parliament and the EU member states. The European Parliament will now vote on the proposed AI Act early next year, and the legislation is likely to come into force by 2025.

The European Union’s legislative framework assumes significance given that the US, the UK and China are also jostling to set the template for AI regulation by publishing their own guidelines.

The EU framework

The legislation includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies, and empowers consumers to lodge complaints against any perceived violations. The deal includes strong restrictions on facial recognition technology and on using AI to manipulate human behaviour, alongside tough penalties for companies that break the rules. Governments can use real-time biometric surveillance in public areas only when serious threats, such as terrorist attacks, are involved.


Breton said the legislation was designed to be “much more than a rulebook” and that it’s proposed as “a launch pad for EU start-ups and researchers to lead the global AI race”. European Commission President Ursula von der Leyen said the AI Act would help the development of technology that does not threaten people’s safety and rights. In a social media post, she said it was a “unique legal framework for the development of AI you can trust”.

In terms of details, the EU legal framework broadly divides AI applications into four risk classes. At one end, some applications will be banned outright, including the deployment of facial recognition at mass scale, with some exemptions for law enforcement; AI applications aimed at manipulating human behaviour will also be banned. High-risk applications, such as AI tools for self-driving cars, will be allowed, but subject to certification and an explicit provision that the backend techniques be open to public scrutiny. Applications in the “medium risk” category, such as generative AI chatbots, can be deployed without restrictions, but there must be detailed documentation of how the tech works, and users must be explicitly made aware that they are dealing with an AI and not interacting with a human. Developers will need to comply with transparency obligations before releasing chatbots into the market, including details about the content used for training the algorithm.

Leadership on regulation

Over the last decade, Europe has taken a decisive lead over the US on tech regulation, with overarching laws safeguarding online privacy, regulations to curb the dominance of the tech majors and new legislation to protect its citizens from harmful online content. On AI, though, the US has attempted to take the lead by way of the new White House Executive Order on AI, which is being offered as an elaborate template that could serve as a blueprint for every other country looking to regulate AI. In October 2022, Washington had released the Blueprint for an AI Bill of Rights, seen as a building block for the subsequent executive order.


Washington’s move assumed significance, given that over the last quarter century, the US Congress has not managed to pass any major regulation to rein in Big Tech companies or safeguard internet consumers, with the exception of just two laws: one on child privacy and the other on blocking trafficking content online.

In contrast, the EU has enforced the landmark General Data Protection Regulation (GDPR) since May 2018, which is squarely focused on privacy, requires individuals to give explicit consent before their data can be processed, and is now a template used by over 100 countries. Then there is a pair of follow-on laws – the Digital Services Act (DSA) and the Digital Markets Act (DMA) – that build on the GDPR’s overarching focus on the individual’s right over her data. While the DSA focuses on issues such as regulating hate speech and counterfeit goods, the DMA defines a new category of “dominant gatekeeper” platforms and targets anti-competitive practices and the abuse of dominance by these players.

Different approaches

These developments come as policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT’s explosive launch. The concerns being flagged fall under three broad heads: privacy, system bias and violation of intellectual property rights. The policy response has differed across jurisdictions too: the EU has taken a predictably tougher stance, segregating AI by use case based broadly on the degree of invasiveness and risk; the UK sits at the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to foster innovation in this nascent field; and the US approach slots somewhere in between. China too has released its own set of measures to regulate AI.

India’s approach


New Delhi has pitched itself, especially to nations in the Global South, as a country that has effectively used technology to develop and deliver governance solutions at mass scale. These solutions are at the heart of what New Delhi calls Digital Public Infrastructure (DPI) – where the underlying technology is sanctioned by the government and is later offered to private entities to develop various use cases. Now, India wants to take the same DPI approach with AI.

“We are determined that we must have our own sovereign AI. We can take two options. One is to say, as long as there is an AI ecosystem in India whether that is driven by Google, Meta, Indian startups, and Indian companies, we should be happy about it. But we certainly don’t think that is enough,” said Minister of State for Electronics and IT Rajeev Chandrasekhar. With sovereign AI and an AI computing infrastructure, New Delhi is hoping to focus on real-life applications of the tech in healthcare, agriculture, governance, language translation, etc., to catalyse economic development.

Anil Sasi is National Business Editor with the Indian Express and writes on business and finance issues. He has worked with The Hindu Business Line and Business Standard and is an alumnus of Delhi University.
