Microsoft Sues Group for Alleged Abuse of Its AI Services
Microsoft has initiated legal proceedings against a group it suspects of deliberately developing and using tools to bypass the security measures of its cloud AI offerings. The lawsuit, filed in December 2024 in the U.S. District Court for the Eastern District of Virginia, names ten unidentified defendants who allegedly used stolen customer credentials and custom-built software to break into the Azure OpenAI Service.
Allegations of Misuse
In the lawsuit, Microsoft contends that the defendants violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal extortion laws by illegally accessing and using Microsoft's software and servers. The intent behind these actions, as alleged, was to create "offensive" and "harmful and illegal" content. However, the company did not specify the nature of the abusive content that was generated.
Microsoft is seeking an injunction, "other equitable" relief, and damages. The company disclosed in its complaint that it discovered in July 2024 that credentials for the Azure OpenAI Service, specifically API keys (the unique strings used to authenticate an application or user), were being exploited to generate content that violated the service's acceptable use policy.
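For context on why key theft alone is enough to gain access: an Azure OpenAI API key is a secret string sent with each request, typically in an `api-key` HTTP header, and whoever holds a valid key can call the key owner's deployment. The sketch below is a minimal, hypothetical illustration of such an authenticated call; the resource name, deployment name, and API version are placeholders, not values from the case.

```python
import os
import requests

# Hypothetical placeholders -- not values from the lawsuit or any real customer.
RESOURCE = "example-resource"       # Azure OpenAI resource name
DEPLOYMENT = "gpt-4o-deployment"    # model deployment name chosen by the customer
API_VERSION = "2024-02-01"          # service API version (subject to change)

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

# The API key is the only credential in the request: anyone holding it can
# send traffic that is billed to the key owner's Azure subscription.
headers = {
    "api-key": os.environ["AZURE_OPENAI_API_KEY"],
    "Content-Type": "application/json",
}

payload = {
    "messages": [{"role": "user", "content": "Hello from an authenticated client."}],
    "max_tokens": 50,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

This is why Microsoft recommends treating API keys like passwords: rotating them and keeping them out of source code limits the damage if one is exposed.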

Systematic API Key Theft
The lawsuit states: "The specific manner in which the defendants obtained all API keys used to carry out the improper conduct described in this complaint is unclear, but it appears that the defendants have engaged in a systematic pattern of API key theft, enabling them to steal Microsoft API keys from multiple Microsoft customers."
According to Microsoft, the defendants ran a "hacker-as-a-service" scheme built on API keys stolen from U.S. customers of the Azure OpenAI Service. The complaint states that, to facilitate the operation, the defendants created a client-side tool called de3u, along with software to manage and route communications from de3u to Microsoft's systems.
Features of the de3u Tool
Microsoft claims that the de3u tool lets users leverage the stolen API keys to generate images with DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without requiring any coding knowledge. The tool also reportedly attempted to prevent the service from revising the prompts used to generate images, which can happen when a text prompt contains words that trigger Microsoft's content filtering mechanisms.
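For reference, the hedged sketch below shows what an ordinary, legitimate DALL-E image request to an Azure OpenAI deployment looks like through the openai Python SDK; prompts are evaluated by the service-side content filter before any image is produced, which is the safeguard the de3u tool allegedly tried to work around. The endpoint and deployment names here are hypothetical.

```python
import os
from openai import AzureOpenAI  # pip install openai>=1.0

# Hypothetical endpoint and deployment names, for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# A normal image-generation call; the prompt passes through Azure's
# content filtering before an image is returned.
result = client.images.generate(
    model="dall-e-3-deployment",   # name of the customer's DALL-E deployment
    prompt="A watercolor painting of a lighthouse at dawn",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # temporary URL of the generated image
```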
At the time of this report, the GitHub repository that hosted the de3u project code (GitHub is a Microsoft subsidiary) had been rendered inaccessible.
Court Actions and Security Measures
In a blog post published on Friday, Microsoft announced that the court has authorized the seizure of a website deemed "critical" to the defendants' operations. This action is expected to allow Microsoft to collect evidence, uncover how the defendants purportedly monetize their service, and dismantle any other discovered technical infrastructure.
Additionally, Microsoft noted that it has "taken countermeasures," although specific details were not disclosed. The company has also "added additional security mitigations" for Azure OpenAI services in response to the activities it has observed.
Key Points
- Microsoft has sued a group for allegedly abusing its Azure OpenAI services.
- The lawsuit claims violations of multiple laws, including federal extortion and copyright laws.
- The defendants reportedly developed tools to generate harmful content using stolen API keys.



