LinkedIn has been scraping users' personal data to train AI models without first making the necessary changes to its privacy policy. The Microsoft-owned platform only recently updated its terms of service to reflect the practice, according to a report by 404 Media. “On September 18, 2024, we added examples and other details to our Privacy Policy to clarify how we use personal data to develop and provide AI-powered services and share data with our affiliates, and to provide additional links to information that may be relevant to individuals in certain regions… and shared additional upcoming updates to our User Agreement, including more details on our generative AI features and how we recommend and moderate content and updates to our license,” the company said in a blog post.

Notably, LinkedIn users are opted in by default, permitting AI training on their posts and personal data. The scraped data will be used to train LinkedIn’s generative AI models, which power features such as writing suggestions and post recommendations. AI models developed by parent company Microsoft could also be trained on the scraped user data, according to the platform’s Q&A page.

The professional networking platform allows users to opt out of the data use through a toggle in their profile settings. However, opting out does not affect any AI training that has already occurred. LinkedIn has refrained from scraping user data for AI training in the European Union, the European Economic Area (EEA), and Switzerland, likely because of data protection regulations in those regions.

LinkedIn is not alone: other social media giants such as Meta and Snap have also faced criticism for training their AI models on user data. Meta’s privacy policy director recently told Australian lawmakers that the tech giant scrapes photos and posts made public by users over the age of 18 without obtaining their explicit consent.
Privacy concerns about Meta training its AI models on publicly available data have also been raised in India. The Software Freedom Law Centre, a Delhi-based digital rights advocacy group, wrote to the Union Ministry of Electronics and Information Technology, stating, “It is important to ensure that the possibility of harm arising to Indian users is minimised with the automated processing of personal and non-personal data by Meta AI.” Meanwhile, platforms such as Reddit and Stack Overflow have struck data licensing agreements that allow AI companies to train their models on vast troves of user-generated content, despite objections raised by users.