Meta Faces Scrutiny Over AI-Powered Ad Moderation Amid Drug Sales Concerns
Meta CEO Mark Zuckerberg testified before the Senate Judiciary Committee on Wednesday, addressing concerns about online child safety and the company’s use of artificial intelligence in ad moderation. At the hearing, titled “Big Tech and the Online Child Sexual Exploitation Crisis,” Zuckerberg promised that AI would revolutionize Meta’s ad services.
However, Meta’s AI-powered ad moderation practices have come under fire as a bipartisan group of lawmakers accused the company of allowing advertisements promoting illicit drug sales on its platforms. Representatives Tim Walberg and Kathy Castor led the group in sending a letter to Zuckerberg, following a Wall Street Journal report that federal prosecutors were investigating Meta for facilitating illegal drug transactions.
During the Senate hearing, Zuckerberg faced intense questioning about safety measures for children on Meta’s platforms. In a notable moment, he apologized to families who said their children had been harmed by social media.
Adding to the controversy, the Tech Transparency Project, a nonprofit watchdog group, reported that Meta profits from ads openly displaying and promoting illegal drugs. This revelation contradicts Meta’s stated policy prohibiting such content.
A Meta spokesperson responded to these allegations, stating that the company’s systems proactively detect and enforce against violating content, and that hundreds of thousands of ads have been rejected for breaching its drug policies. However, the spokesperson did not address the specific role of AI in ad moderation.
Meta’s ad approval and moderation processes remain partially opaque, with the company relying on a combination of automated technology and human reviewers. The company continues to push for further automation of the review process.
These challenges come amid a broader rollout of AI-powered services at Meta, which has faced setbacks including the discontinuation of its celebrity AI assistants and problems with its AI chatbot and assistant.
The concerns surrounding Meta’s AI implementation reflect a wider trend in the tech industry. An Arize AI survey found that 56% of Fortune 500 companies view AI as a risk factor, with 86% of technology companies, including Salesforce, identifying AI as a business risk.
Meta’s own 2023 annual report acknowledged significant risks in developing and deploying AI, expressing uncertainty about AI’s ability to enhance products, services, or business profitability. As tech companies continue to push for AI implementation, the balance between innovation and responsible deployment remains a critical challenge for the industry.