For the past five years or so, the European Union (EU) has been implementing various digital laws and regulations, prompting big tech companies such as Google, Apple, and Meta to revamp their existing policies to meet legal obligations. Most recently, Apple decided to delay the launch of its AI-powered features in Europe while it figures out how to comply with the interoperability requirements set out by the EU's Digital Markets Act. With more EU tech regulations coming into play, here is a guide to get better acquainted with them.

GDPR

The EU's General Data Protection Regulation (GDPR) is considered a landmark piece of legislation that has prompted several other countries to bring in their own data protection laws. The GDPR was passed by the European Parliament in 2016 and came into effect in 2018, giving companies two years to comply. It seeks to empower users by giving them more control over their personal data, defined as "any piece of information that is related to an identifiable person." This means that an EU citizen's name, email address, IP address, home address, location data, and health data are all considered personal data. Besides organisations operating within the EU, the GDPR also covers entities located in non-EU countries that cater to customers residing within the bloc.

Under the GDPR, EU citizens have the right to access their personal data stored by companies and to know how it is being used. They also have the right to be forgotten, meaning that companies can be told to delete users' personal data. EU citizens are further empowered to obtain their personal data and easily share it with other service providers. The GDPR requires companies covered by the regulation to obtain the clear and informed consent of EU citizens before collecting or processing their personal data. Once given, consent can also be withdrawn.
Parents or those with parental responsibility must give their consent for companies to collect children's data, i.e. the personal data of individuals under 16. In the event of a data breach, companies have no more than 72 hours to notify the data protection authority of the incident. Customers whose data has been breached must also be informed "without undue delay." Organisations that fail to comply with the GDPR face fines of up to 4 per cent of their global annual turnover or 20 million euros, whichever is higher. In May 2023, Facebook parent company Meta was hit with a whopping 1.2 billion euro fine after a probe found that its cross-border transfers of EU users' personal information to the US were unlawful. Under the GDPR, businesses are required to securely store user data within the borders of the EU, and cross-border transfers are allowed only to non-EU countries whose data protection regime is considered adequate by EU authorities. At the time Meta was fined, the EU and US reportedly did not have a working agreement in place for cross-border data transfers.

Digital Markets Act

To ensure that markets in the digital sector are fair and competitive, the European Parliament passed the Digital Markets Act (DMA) in October 2022, and from May 2023 onwards its provisions became enforceable on large digital companies designated as 'gatekeepers'. Online search engines, app stores, and messenger services fall under the definition of a gatekeeper, as per the law. Under the DMA, EU citizens have the right to install the apps they prefer, either by downloading them directly from the web or through alternative app stores. Users can also uninstall any pre-installed apps that come with the device.
Users are able to select the browser or search engine they want, which means that developers are obligated to present more than one option on a 'choice screen' shown on the device. The DMA also guarantees EU citizens streamlined access to platforms, data ownership, seamless data portability, and unbiased search results. Third-party cookies, which allow companies to track users' activity outside of their own websites, are not allowed unless the user grants effective consent. The DMA proposes to level the playing field between gatekeepers and other businesses by allowing the distribution of apps through alternative app stores or unofficial methods (also known as sideloading). Notably, gatekeepers such as Apple, Google, and Meta are required to ensure interoperability between their services free of charge. This possibly means that WhatsApp may be required to interoperate with the app of another business, but only in "certain specific situations." Companies that do not comply with the DMA face hefty fines of up to 10 per cent of their annual global turnover, rising to 20 per cent for repeated infringements. As a last resort, the DMA also empowers EU regulators to restructure or break up a gatekeeper's business. Recently, Apple became the first gatekeeper found to have violated the DMA, in connection with App Store policies that prevent developers from steering customers towards alternative purchase offers.

Digital Services Act

Several provisions of the EU's Digital Services Act (DSA) echo the intermediary obligations stipulated in India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as both seek to foster a safe online environment by cracking down on unlawful content, ensuring greater transparency, and strengthening user grievance redressal.
The European Parliament passed the DSA in July 2022, and it became applicable to all online intermediary platforms from February 2024 onwards. Under the DSA, online players are classified into four categories: intermediary services, hosting services, online platforms, and very large online platforms. Tech giants such as Facebook, Instagram, Snapchat, Amazon, Google Maps, Google Play Store, Apple App Store, Booking.com, and Wikipedia have been designated as very large online platforms (VLOPs) since they have more than 45 million users.

The DSA requires online platforms to set up mechanisms for the takedown of unlawful content while also giving users more options to flag such content. It also prohibits platforms from serving targeted ads based on users' sexual orientation, religion, ethnicity, or political beliefs. Online services and platforms must publish annual transparency reports on their content moderation activities, detailing the number of content pieces taken down, the number flagged by automated systems, the number of accounts suspended, and the number of government blocking orders and court orders received, among other details. To ensure algorithmic transparency, all platforms are legally obliged to explain the main parameters used to recommend content to users. Moreover, the DSA imposes additional requirements on VLOPs, such as establishing a point of contact for users and authorities, letting users opt out of recommendation systems, drawing up measures to address possible crisis situations, maintaining a public ads library, and subjecting their systems to independent audits.
Notably, VLOPs are required to "share their data with the Commission and national authorities so that they can monitor and assess compliance with the DSA." Under the Act, the European Commission and a digital services coordinator (DSC) have the power to make very large online platforms take "immediate actions where necessary to address very serious harms." Fines of up to 6 per cent of annual global turnover can be levied against online platforms that fail to comply with the DSA, and platforms that repeatedly violate the law could even be temporarily banned within the EU.

The EU AI Act

First proposed in 2021, the final version of the EU AI Act was approved only in May 2024 and will become fully enforceable two years after its approval. It is one of the first major laws to arrive amid the boom in generative AI tools like ChatGPT. The AI Act takes a risk-based approach to regulating AI systems, meaning that systems posing greater risk to users have to comply with more requirements. The four risk levels are: unacceptable risk, high risk, limited risk, and minimal risk. The rationale behind this approach is to prevent the harmful outcomes of AI systems without hindering AI innovation.

Most user-facing generative AI tools like ChatGPT or Gemini are not designated as high risk, but the companies developing them have to ensure that their AI models do not generate illegal content. They also have to clearly label AI-generated content such as deepfakes and share summaries of the copyrighted data used to train their models. More sophisticated generative AI models, such as OpenAI's GPT-4, will need to be thoroughly examined for systemic risks, as per the law. AI systems or products that pose an unacceptable risk, such as those encouraging dangerous behaviour or scoring people based on their socio-economic background, are banned under the AI Act.
Facial recognition and biometric identification systems are also banned, though the legislation makes some exceptions for law enforcement purposes. Meanwhile, high-risk AI systems, such as autonomous vehicles, medical devices, and systems used for profiling, have to undergo an initial risk assessment before being rolled out. These systems also need to retain automatically generated logs and have an inbuilt kill switch that can bring the AI system to a halt. The EU proposes different levels of fines for AI providers, deployers, importers, distributors, and other bodies found to be in non-compliance with the AI Act. In an attempt to foster AI innovation and growth, the legislation requires national authorities to set up a testing environment in which startups and small businesses can train and test their AI models before publicly launching them.