Dear Aspirants, UPSC Prelims 2024 season has begun, and we are sure you want to have an Express Edge. To ensure your preparations have that extra edge, take a look at the essential concepts, terms, and phenomena from the static and current parts of the UPSC-CSE in our UPSC Essentials' One word a day. Also don't miss the Point to Ponder and the Post Read MCQ, which will help you self-evaluate how well you have retained what you read.

Word: Deepfakes

Subject: Science and Technology, AI, social issues

WHY IN NEWS?

— A video that supposedly shows actress Rashmika Mandanna entering an elevator has ignited a firestorm of controversy on the internet. What initially appears genuine is, in fact, a 'deepfake' of the actress. The original video features a British Indian girl, whose face was morphed to insert Mandanna's face instead.

— Responding to the video, Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, said on the social media platform X that deepfakes are the latest and a "more dangerous and damaging form of misinformation" that need to be dealt with by social media platforms. He also cited the legal obligations of social media platforms and the IT rules pertaining to digital deception.

— This particular clip highlights that the problems of deepfake technology are expected to be especially severe for women, for whom online platforms are already a hostile place. Deepfakes add a new dimension to the ways in which they can be harassed on the internet.

KEY TAKEAWAYS

— Deepfakes constitute fake content — often in the form of videos but also other media formats such as pictures or audio — created using powerful artificial intelligence tools.

— The term is simply an amalgamation of the words "deep learning" and "fake", and it refers to fabricated videos generated using existing face-swapping techniques and technology.

— They are called deepfakes because they use deep learning technology, a branch of machine learning that applies neural net simulation to massive data sets, to create fake content (a simplified, illustrative sketch of a typical face-swapping setup follows after this list).

— In essence, if a computer is fed enough data, this branch of artificial intelligence can generate fakes that behave much like a real person.

— The origin of the word "deepfake" can be traced back to 2017, when a Reddit user with the username "deepfakes" posted explicit videos of celebrities.
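To give a concrete picture of what "deep learning plus face swapping" means in practice, here is a heavily simplified, illustrative sketch in Python (using the PyTorch library) of the shared-encoder, two-decoder autoencoder layout commonly described for early face-swap deepfakes. The layer sizes, the 64x64 image resolution and all variable names are assumptions made purely for this example, not the architecture of any particular deepfake tool.

# Illustrative sketch only: the "shared encoder, two decoders" autoencoder
# layout behind early face-swap deepfakes. Layer sizes, resolution and names
# are simplifying assumptions for this example, not any real tool's design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Learns a compact representation of pose and expression from a face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder is trained per person; it learns to render that person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()      # shared by both identities
decoder_a = Decoder()    # would be trained on person A's faces
decoder_b = Decoder()    # would be trained on person B's faces

# Training (not shown) minimises reconstruction error separately for each person,
# e.g. loss_a = MSE(decoder_a(encoder(face_a)), face_a), so the shared encoder
# captures pose and expression while each decoder memorises one person's look.

# The "swap" at inference time: encode a frame of person A, but decode it with
# person B's decoder, producing B's face with A's pose and expression.
with torch.no_grad():
    frame_of_a = torch.rand(1, 3, 64, 64)         # stand-in for a real video frame
    swapped_frame = decoder_b(encoder(frame_of_a))
print(swapped_frame.shape)                         # torch.Size([1, 3, 64, 64])

The design idea is that the single shared encoder learns pose and expression common to both people, while each decoder learns to render only one person's face; feeding person A's encoding into person B's decoder therefore produces B's face wearing A's expression, frame by frame.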
What is the Centre's advisory to social media platforms over deepfakes?

— The Ministry of Electronics and IT (MeitY) has sent advisories to social media platforms, including Facebook, Instagram and YouTube, to take down misleading content generated through artificial intelligence – deepfakes – within 24 hours.

— As per government sources, the advisory has reiterated existing legal provisions that platforms have to follow as online intermediaries. It mentions Section 66D of the Information Technology Act, which provides punishment for cheating by personation using computer resources, with imprisonment of up to three years and a fine of up to Rs 1 lakh.

— The advisory is also understood to have mentioned Rule 3(2)(b) of the Information Technology Rules, under which social media platforms are required to take down content in the nature of impersonation, including artificially morphed images of an individual, within 24 hours of the receipt of a complaint.

— In February, the IT ministry issued advisories to the chief compliance officers of various social media platforms after it received reports of AI-generated deepfakes being used to manipulate people through doctored content.

— The Centre is also looking to invoke a controversial legal provision that would require WhatsApp to share details about the first originator of a message, on account of rising AI-led misinformation on the messaging platform, The Indian Express had earlier reported.

— The basis for this is multiple deepfake videos of politicians circulating on WhatsApp, and the government is understood to be in the process of sending an order to the messaging company under the Information Technology (IT) Rules, 2021, seeking the identity of the people who first shared the videos on the platform.

JUST FYI

How can such deepfakes be spotted?

— Deepfake videos often exhibit unnatural eye movements or gaze patterns.

— Deepfake creators may have difficulty replicating accurate colour tones and lighting conditions.

— Deepfake videos often use AI-generated audio that may have subtle imperfections.

— Deepfakes can sometimes result in unnatural body shapes or movements.

— Deepfake software may not always accurately replicate genuine facial expressions.

— Deepfakes may struggle to maintain a natural posture or physique.

Ankita Deshkar of The Indian Express writes: Apart from the above observations, you can also take a screenshot of the video and run a reverse image search to check the source and the original video. To do this, go to Google Images and click on the camera icon that says 'Search by image'. You can then upload the screenshot, and Google will show you if the visuals associated with it are taken from previous videos.
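For readers who want to see how this kind of cross-referencing can be automated, below is a minimal Python sketch, assuming the open-source Pillow and imagehash libraries, of perceptual-hash fingerprinting: a frame grabbed from the suspect clip is hashed and compared against hashes of known original footage, and a near-match flags possibly re-used or doctored video. The file names, the one-entry catalogue and the distance threshold are hypothetical placeholders for illustration only, not part of any official detection service.

# A minimal, illustrative sketch of fingerprint matching: hash a frame from a
# suspect clip and compare it with hashes of known original footage.
# Assumes the open-source Pillow and imagehash libraries; all file names,
# the catalogue and the threshold below are hypothetical placeholders.
from PIL import Image
import imagehash

# Perceptual hashes ("fingerprints") of frames taken from known original videos.
# In practice this catalogue would be large and pre-computed.
catalogue = {
    "original_video_frame.png": imagehash.phash(Image.open("original_video_frame.png")),
}

# Frame (screenshot) grabbed from the suspect clip circulating online.
suspect_hash = imagehash.phash(Image.open("suspect_clip_frame.png"))

for name, known_hash in catalogue.items():
    distance = suspect_hash - known_hash   # Hamming distance between the two hashes
    if distance <= 10:                     # small distance => visually near-identical frames
        print(f"Possible match with {name} (distance {distance}): "
              "the clip may be re-used or doctored footage of an existing video.")

A small Hamming distance between perceptual hashes means the frames are visually near-identical even after re-encoding, resizing or light editing, which is why fingerprint catalogues of this kind can flag doctored versions of existing media.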
Point to ponder

To comprehend why the global and local regulation of deepfake technology must be expedited, one needs to delve deeper into the consequences of the misuse of deepfakes and its societal implications. Discuss.

Thought Process:

In their Opinion piece for The Indian Express, Isha Prakash (research fellow at the Vidhi Centre for Legal Policy) and Anusha Shah (graduate of Government Law College, Mumbai) write: "with deepfakes getting better and more alarming, seeing is no longer believing". Here are some useful takeaways for your exams:

The technology involved in creating deepfakes holds promise for various domains, including entertainment, education and healthcare. However, one must also acknowledge the associated risks, particularly the alarming threat it poses to the personal security and privacy of millions through audio-visual manipulation tactics. This includes the use of deepfakes for identity theft and synthetic pornography. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos featuring celebrities or personal acquaintances. Another equally worrying ramification is the creation and dissemination of morphed videos of elected representatives and public figures in a political sphere already reeling from an avalanche of disinformation and polarisation.

Combating the challenge posed by the unregulated use of deepfakes requires a combination of technological innovation and legislative solutions. The law does not evolve as quickly as technology does. However, certain jurisdictions, for instance the European Union, have tried to keep up. The EU updated its Code of Practice on Disinformation to counter the spread of disinformation via deepfakes, including provisions under which organisations such as Meta can be penalised up to 6 per cent of their annual global turnover if found non-compliant.

In India, sections of the Information Technology Act, 2000 criminalise the publication and transmission of intimate photos of any person without their consent and deal with the obligations of intermediaries. Provisions of the Copyright Act, 1957 concerning the doctrine of fair dealing and the right to integrity can also be applied. Furthermore, deepfakes directly violate the fundamental right to privacy under Article 21 of the Constitution. If effectively implemented, privacy laws such as the new Digital Personal Data Protection Bill could be the most effective means of regulating deepfakes in India.

AI and market-driven solutions will also shape deepfake regulation. Facebook's Deepfake Detection Challenge, aimed at encouraging and incentivising innovation in this area, is a positive step forward. Operation Minerva uses technology to compare and detect deepfakes by cross-referencing suspect clips with its catalogue of digitally fingerprinted videos, alerting users if a potentially doctored version of existing media is detected.

It has become apparent that collaborative effort is indispensable. Nina Schick, author and expert in generative AI, believes that technologists, domain-specific experts, policy officials and lawmakers must all come together to combat this misuse of deepfakes.

Post Read MCQ:

With reference to deepfakes, consider the following statements:

1. Deepfakes directly violate the fundamental right to privacy under Article 21 of the Constitution.

2. Section 66D of the Information Technology Act entails punishment for cheating by personation by using computer resources, with imprisonment of up to three years and a fine of up to Rs 1 lakh, but this does not include deepfakes.

3. They are called deepfakes because they use deep learning technology, a branch of machine learning that applies neural net simulation to massive data sets, to create fake content.

Which of the above statements is/are not correct?

(a) Only 1
(b) Only 2 and 3
(c) Only 2
(d) Only 1 and 2

Post your answer in the comment box. Share your views and suggestions in the comment box or at manas.srivastava@indianexpress.com