Microsoft sues ‘foreign-based’ cyber-crooks, seizes sites used to abuse AI
Microsoft has sued a group of unnamed cybercriminals who developed tools to bypass safety guardrails in its generative AI tools. The tools were used to create harmful content, and access to the tools was sold as a service to other miscreants.
The lawsuit, filed in December in a US District Court, accuses 10 defendants of using API keys stolen from “multiple” Microsoft customers, along with custom-designed software, to break into computers running Microsoft’s Azure OpenAI Service.
Microsoft says it uncovered the scheme in July 2024, but the exact way in which the criminals stole the API keys is unknown.
While the legal complaint doesn’t identify any of the 10 defendants, Steven Masada, assistant general counsel for Microsoft’s Digital Crimes Unit, described the criminals as a “foreign-based threat-actor group.”
The lawsuit accuses the 10 of violating federal laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act (RICO), and seeks relief and damages related to the “creation, control, maintenance, trafficking, and ongoing use of illegal computer networks and piratical software to cause harm to Microsoft, its customers, and the public at large” [PDF].
In addition to the complaint, the newly unsealed court documents also include a court order [PDF] allowing Microsoft to seize web domains used in the criminal operation. This, according to Masada, will “allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find.”
After using the stolen customer credentials to break into Azure, the complaint alleges, intruders used this illicit access to “create harmful content in violation of Microsoft’s policies and through circumvention of Microsoft’s technical protective measures.”
Plus, the digital thieves resold this access to other criminals in a “hacking-as-a-service scheme,” the lawsuit claims. One of those tools, software called de3u, allows users to issue Microsoft API calls to generate images using the DALL-E model, which is available to Azure OpenAI Service customers.
“Using an open source software package, [the] defendants built a web application that implements a custom layout and data flow designed specifically for using tools like DALL-E to generate images using text prompts,” the court documents claim.
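To illustrate why stolen API keys were all the attackers needed, here is a minimal sketch of how a DALL-E image-generation request to Azure OpenAI Service is typically structured: the key travels in a plain `api-key` HTTP header, so anyone holding a valid key can submit prompts as that customer. The resource name, deployment name, and API version below are hypothetical placeholders for illustration, not details from the complaint.

```python
# Sketch of the request shape Azure OpenAI Service expects for DALL-E
# image generation. Builds the request without sending it.
# RESOURCE, DEPLOYMENT and API_VERSION are illustrative assumptions.
import json
import urllib.request

RESOURCE = "example-resource"   # hypothetical Azure OpenAI resource name
DEPLOYMENT = "dalle3"           # hypothetical DALL-E deployment name
API_VERSION = "2024-02-01"

def build_image_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an image-generation request."""
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/images/generations?api-version={API_VERSION}"
    )
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_image_request("example-key", "a watercolor of a lighthouse")
print(req.full_url)
```

The point of the sketch is the authentication model: the key is a bearer credential with no further proof of identity, which is what made reselling stolen keys viable as a service.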
Microsoft has since boosted its genAI guardrails and added safety mitigations that it says help prevent this type of abuse. It did not provide specific details about what these new safety measures include. ®