
Microsoft accuses the group of developing tools to abuse its AI service in a new lawsuit


Microsoft has taken legal action against a group the company says intentionally developed and used tools to bypass the security guardrails of its cloud AI products.

According to a complaint filed by the company in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into Azure OpenAI Service, Microsoft’s fully managed service powered by technologies from ChatGPT maker OpenAI.

In the complaint, Microsoft accuses the defendants, referred to only as “Does,” a legal placeholder, of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illegally accessing and using Microsoft’s software and servers for the purpose of “creating offensive content” and “harmful and illegal content.” Microsoft did not provide specific details about the abusive content that was generated.

The company is seeking injunctive relief, damages, and “other equitable” relief.

In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an app or a user — were being used to generate content that violates the service’s acceptable use policy. A subsequent investigation found that the API keys had been stolen from paying customers, according to the complaint.

“The precise manner in which the Defendants obtained all of the API Keys used to commit the misconduct described in this Complaint is unknown,” Microsoft’s complaint says, “but it appears that the Defendants engaged in a pattern of systematic API key theft that allowed them to steal Microsoft API Keys from many Microsoft customers.”

Microsoft says the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to create a “piracy-as-a-service” scheme. According to the complaint, to pull off this scheme, the defendants created a client tool called de3u, as well as software to process and route communications from de3u to Microsoft systems.

De3u allowed users to exploit stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft adds. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint — something that can happen, for example, when a text prompt contains words that trigger Microsoft’s content filter.
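The filtering the complaint refers to can be sketched in miniature. This is a toy keyword blocklist, purely for illustration: real content filters, Microsoft’s included, use ML classifiers rather than word lists, and the blocked terms below are hypothetical.

```python
# Hypothetical blocklist standing in for a real content-safety classifier.
BLOCKED_TERMS = {"violence", "weapon"}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, prompt). A prompt containing a blocked term
    is rejected rather than passed to the image model."""
    words = set(prompt.lower().split())
    if words & BLOCKED_TERMS:
        return False, prompt
    return True, prompt
```

A tool that bypasses or suppresses this screening step — as de3u allegedly did — lets otherwise-rejected prompts reach the model unmodified, which is the circumvention at issue in the complaint.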

A screenshot of the de3u tool from Microsoft’s complaint. Image credits: Microsoft

A GitHub repository containing the de3u project’s code (GitHub is owned by Microsoft) was no longer accessible at press time.

“These features, combined with Defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse-engineer means of circumventing Microsoft’s content and abuse measures,” the complaint says. “Defendants knowingly and intentionally accessed the Azure OpenAI Service’s protected computers without authorization, and as a result of such conduct caused damage and loss.”

In a blog post published Friday, Microsoft says the court has authorized it to seize a website “instrumental” to the defendants’ operation, which will allow the company to gather evidence, decipher how the defendants’ alleged services are monetized, and disrupt any additional technical infrastructure it finds.

Microsoft also says it has “put in place countermeasures,” which the company did not specify, and “added additional safety mitigations” to the Azure OpenAI Service targeting the activity it observed.


