Former Google CEO Advises AI Startups to Prioritize Growth Over Legal Concerns
Eric Schmidt, former CEO of Google, has stirred controversy with his recent advice to artificial intelligence (AI) startups. During a talk at Stanford’s School of Engineering, Schmidt suggested that companies should focus on rapid growth and virality, even if it means sidestepping legal and ethical considerations.
Schmidt presented a hypothetical scenario involving the creation of a TikTok competitor using a large language model (LLM). He advised startups to prioritize product development and user acquisition, saying that legal issues should be addressed only if the product becomes successful.
“You’re better off building the product, getting lots of users, and then dealing with the legal issues later,” Schmidt said. He quickly added the seemingly contradictory caveat that he was not advocating illegal content theft.
This advice comes at a time when the AI industry is facing scrutiny over its use of human-produced content for training models. Publishers such as The New York Times have taken AI firms to court over alleged copyright violations.
Schmidt’s comments reflect a broader attitude in Silicon Valley, where innovation often precedes legal compliance. This approach, sometimes referred to as “move fast and break things,” has been a hallmark of tech industry growth strategies.
Following negative press coverage, the video of Schmidt’s talk was removed from public view.
Lawyers With Mops
In an attempt to clarify his stance, Schmidt emphasized that he does not condone illegal content theft. Even so, his comments call attention to the AI industry’s long-standing practice of scraping human-produced content for training purposes.
Schmidt’s belief that legal issues can be managed after a product succeeds raises questions about the role of lawyers, who would effectively be left to mop up any intellectual property theft after the fact. That “clean up later” mentality has broader implications for both the AI industry and content creators.
More on AI and Copyright
The ongoing debate over AI and copyright continues to intensify. Microsoft’s AI CEO has previously stated that training on open web content is fair use, further complicating the legal landscape.
Schmidt’s comments may have far-reaching consequences for future AI development and legal frameworks. As the industry grapples with these ethical and legal challenges, the balance between innovation and intellectual property rights remains a contentious issue.