Signal President Warns of Agentic AI Security Risks at SXSW
Meredith Whittaker, president of the encrypted messaging app Signal, raised alarm bells about the security risks posed by agentic AI during her speech at SXSW in Austin. Whittaker's concerns centered on the privacy implications of AI systems that perform tasks and make decisions on a user's behalf with little or no human oversight.
The Signal president emphasized that agentic AI requires extensive access to personal data to function autonomously, potentially leading to a loss of control over sensitive information. “These AI systems need root-level access to user systems, which could seriously compromise privacy,” Whittaker explained.
To illustrate her point, Whittaker described tasks an agentic AI assistant might handle, such as finding events, booking tickets, and messaging friends. Each of these actions would require access to a range of sensitive data, including browsing history, payment information, and personal communications.
Whittaker also highlighted the risk that this data processing would occur off-device, increasing vulnerability to breaches. She warned that such systems could undermine encrypted communications, a cornerstone of privacy in the digital age.
Her concerns were echoed by Yoshua Bengio, a prominent AI researcher, who warned of potentially catastrophic scenarios if AI agents reach human-level reasoning capabilities. Both experts stressed the urgent need for scientific understanding and technological safeguards in AI development.
The discussion at SXSW underscores the growing urgency of addressing the security and privacy risks of agentic AI. As these technologies advance, experts are calling for proactive measures to ensure that AI development does not come at the expense of user safety and privacy.