Google, the omnipresent search engine that most of us rely on for everything from dinner recipes to existential queries at 3 a.m., has been caught up in a scandal involving AI-generated deepfake nudes. According to reporting from 404 Media, Google has been profiting from paid advertisements that promote AI apps capable of producing nonconsensual deepfake nudes. The issue came to light when searches for terms like “Undress apps” and “Best deepfake nudes” surfaced paid ads for websites offering services like “NSFW AI Image Generator.” These tools can create explicit images of real people without their consent, raising serious ethical and legal questions.
Google has been under fire for years for its apparent inability—or unwillingness—to curb the proliferation of AI deepfakes in its search results. Historically, such disturbing content has been just one search query and one click away. In response to the growing criticism, Google announced last week that it would expand its existing search policies to help people affected by nonconsensual sexually explicit fake content appearing on its search pages. However, 404’s revelation uncovers a more insidious layer: Google’s deepfake problem also infects its advertising business, where the tech giant is actively making money from promoted posts advertising the very AI services that enable the creation of this invasive, nonconsensual content.
In reaction to 404’s findings, Google has delisted the specific ads and websites flagged by the journalists. A spokesperson for Google stated that services designed to create synthetic sexual or nude content are prohibited from advertising through any of Google’s platforms or generating revenue through Google Ads. The spokesperson added that the company is actively investigating the issue and plans to permanently suspend any advertisers who violate its policy, removing all of their ads from Google’s platforms. However, the spokesperson notably sidestepped questions about why these advertisers were able to run ads against terms like “Undress app” in the first place, which seems to be a critical part of the problem.
The accessibility of AI deepfake tools has brought about a disturbing surge of nonconsensual fake nudes, one that school systems and law enforcement are now scrambling to address. Middle and high schools in particular are grappling with how to police such activity, highlighting a troubling new dimension to the misuse of AI technology. The creation and dissemination of these images can have devastating effects on victims, causing severe emotional and psychological distress.
As for Google, the search engine many of us treat as a digital oracle, it has a long road ahead in stemming the rising tide of deepfakes on the web. The issue is not just about removing the content but about addressing the root causes that make such tools and images so easily accessible. Until Google takes comprehensive measures to tackle this multifaceted problem, the onus remains on the tech giant to ensure that it is not inadvertently facilitating the creation and spread of harmful, nonconsensual content.