AI Industry Faces Potential Stagnation as Technological Advancements Slow
Concerns are mounting in the tech industry that AI progress is stagnating, with generative AI models reportedly hitting a technological ceiling. Experts warn that scaling up large language models (LLMs) is no longer yielding significant gains, which could slow the field's progress.
Gary Marcus, a cognitive scientist and prominent AI skeptic, has raised alarm bells about a potential industry crash. He argues that the high valuations of companies like OpenAI and Microsoft rest on the assumption that scaling will deliver artificial general intelligence, a bet he considers economically unsustainable.
Reports suggest that OpenAI’s new model, Orion, shows a smaller improvement over GPT-4 than GPT-4 showed over GPT-3; in some areas, such as coding, the gains may be negligible. Ilya Sutskever, a co-founder of OpenAI and a prominent figure in the AI community, has said that scaling up AI models has plateaued, challenging the long-held belief that “bigger is better” for AI models.
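That belief was grounded in empirical scaling laws, which predict that pretraining loss falls smoothly as models and datasets grow. As a rough illustration (the functional form below follows published scaling-law work such as Hoffmann et al.’s Chinchilla analysis; the symbols are generic placeholders, not fitted figures for any model named in this article), loss L depends on parameter count N and training tokens D roughly as:

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here E is the irreducible loss and A, B, \alpha, \beta are empirically fitted constants. Because the exponents are small, each further drop in loss demands a disproportionately large increase in N and D, which is precisely why diminishing returns bite so hard at the frontier.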
The industry is also grappling with the high cost of further training and scaling: training large models demands enormous compute and time, and the sector is running out of freely available data to train on. Marcus predicts that LLMs will become commodities, triggering price wars and thin margins.
In response to these challenges, OpenAI researchers are exploring new methods to work around the scaling bottleneck, including techniques such as “test-time compute” that aim to enhance AI reasoning. The long-term viability of these approaches, however, remains uncertain.
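“Test-time compute” refers to a family of methods that spend extra computation at inference time rather than during training. The details of OpenAI’s approach have not been published; as a minimal sketch of one simple, well-known variant, self-consistency sampling (draw several candidate answers and take a majority vote), the Python below uses a hypothetical sample_answer stub in place of a real model call:

```python
import collections
import random

def sample_answer(question: str) -> str:
    """Hypothetical stub standing in for a stochastic LLM call.

    Toy behavior for the sake of a runnable example: the "model"
    answers correctly 70% of the time, otherwise guesses randomly.
    """
    return "42" if random.random() < 0.7 else str(random.randint(0, 100))

def self_consistency(question: str, n_samples: int = 15) -> str:
    """Spend extra inference-time compute: sample n answers, return the majority vote."""
    votes = collections.Counter(sample_answer(question) for _ in range(n_samples))
    answer, _count = votes.most_common(1)[0]
    return answer

if __name__ == "__main__":
    random.seed(0)
    print(self_consistency("What is 6 x 7?"))  # the majority vote is usually "42"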
As the AI industry faces profitability challenges, there is a risk of another AI winter if significant improvements do not arrive soon. The coming months will be crucial in determining whether the sector can clear these hurdles and sustain its rapid pace of innovation.