
Microsoft Sues Cybercriminals for AI Misuse: Deepfake Porn and Safety Breaches Exposed

Microsoft Names Developers in AI Misuse Lawsuit

Microsoft has amended its lawsuit to specifically name four developers accused of bypassing safety measures in its artificial intelligence (AI) tools. The tech giant alleges these individuals are part of a cybercrime network known as Storm-2139, which is involved in creating harmful content, including deepfake celebrity pornography.

The defendants, identified by their online nicknames, are Arian Yadegarnia (“Fiz”) from Iran, Alan Krysiak (“Drago”) from the UK, Ricky Yuen (“cg-dot”) from Hong Kong, and Phát Phùng Tấn (“Asakuri”) from Vietnam.

According to Microsoft, Storm-2139 operates with a three-tiered structure: creators, providers, and users. Creators develop illicit tools that exploit AI services; providers modify and distribute these tools, often offering different service tiers; and users employ the tools to generate illegal synthetic content, primarily sexual imagery of celebrities.

The lawsuit, initially filed with anonymous defendants, has been updated following new evidence. Microsoft’s legal action aims to halt the defendants’ activities, dismantle their operations, and deter future misuse of AI technology.

This move by Microsoft represents a significant step in protecting its AI technology from being used for harmful purposes. The lawsuit has already caused disruption within Storm-2139, with reports of members turning against each other.

The case highlights the ongoing challenges of regulating AI safety and preventing misuse. Companies like Microsoft navigate a complex landscape in balancing AI development against the risk of abuse. While some companies, such as Meta, opt for open-source AI models, Microsoft employs a mixed approach, keeping some AI models private while making others public.

Despite these efforts, criminals continue to find ways to exploit AI technology, emphasizing the need for effective enforcement of protective measures. The lawsuit underscores the ongoing struggle to manage AI safety in a largely self-regulated industry and highlights the importance of both technological and legal systems in preventing AI misuse.

As the case progresses, it will likely set important precedents for how tech companies can legally pursue those who misuse their AI technologies, potentially shaping the future of AI regulation and enforcement.
