© IE Online Media Services Pvt Ltd
Microsoft said various counter-measures have been put in place. (Image credit: Microsoft Azure)

A group of cybercriminals used one of Microsoft’s generative AI services to create offensive and harmful content after bypassing safety guardrails.
The software giant filed a lawsuit against ten unknown individuals in the US District Court for the Eastern District of Virginia in December 2024. According to the legal complaint, the cybercriminals were a foreign-based threat-actor group who allegedly stole customer credentials and used custom-designed software to gain unauthorised access to Microsoft’s Azure OpenAI service.
Azure OpenAI lets businesses integrate OpenAI’s tools such as ChatGPT and DALL-E into their own cloud apps. Microsoft also uses the service to power GitHub Copilot, a subscription-based AI coding assistant that suggests lines of code to users of the software developer platform.
The threat actors were able to get their hands on the customer credentials of Azure OpenAI accounts by scraping public websites. “In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services,” Microsoft said in a blog post on January 10.
“Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content,” it added.
It is unclear what type of abusive content the threat actors generated using Azure OpenAI, beyond the fact that the AI-generated content violated Microsoft’s policies.
“Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damage and loss,” the company said in its complaint.
The complaint argues that the unnamed defendants have violated US laws including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law. Microsoft is seeking injunctive relief along with “other equitable” relief and damages.
Additionally, Microsoft said that the court has allowed it to seize a website that was “instrumental to the criminal operation” and will allow it to “gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find.”
It also said that various counter-measures and additional safety mitigations have been put in place to safeguard Azure OpenAI following the hacking incident.