
EU bans AI tools used for social scoring, predictive policing under landmark legislation

However, critics have pointed out that the regulatory framework for AI has several exemptions for European police and migration authorities.

Companies found to have violated the AI Act could incur hefty penalties. (File photo)

The first set of restrictions under the European Union’s landmark AI Act took effect on Sunday, February 2. This means that AI systems deemed an ‘unacceptable risk’ under the legislation are now illegal in countries within the bloc.

The following categories of AI systems have now been banned under the legislation as they are considered to be “a clear threat to the safety, livelihoods and rights of people”:

– Social scoring systems
– Emotion recognition AI systems in workplaces and education institutions
– Individual criminal offence risk assessment or prediction tools
– Harmful AI-based manipulation and deception tools
– Harmful AI-based tools to exploit vulnerabilities


Practices such as the untargeted scraping of the internet or CCTV material to create or expand facial recognition databases; biometric categorisation to deduce certain protected characteristics; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces have also been banned.

However, critics have pointed out that the AI Act has several exemptions allowing European police and migration authorities to use AI for tracking terror attack suspects.

Legal obligations to ensure sufficient technology literacy among staff are also among the provisions of the AI Act that came into force on Sunday.

Companies that fail to comply with the AI Act could face fines of up to 35 million euros ($35.8 million) or 7 per cent of their global annual revenues, whichever amount is higher, according to a report by CNBC.


The first-of-its-kind regulatory framework for AI was officially rolled out in August last year. However, multiple provisions of the law are being implemented in phases. For instance, the governance rules and obligations for tech companies that develop general-purpose AI models will come into force from August 2, 2025, according to the official website.

General-purpose AI (GPAI) models refer to large language models (LLMs) such as OpenAI’s GPT series. Companies that develop high-risk AI systems for use cases in critical sectors such as education, medicine, and transport have an extended transition period until August 2, 2027.

