Big Tech is betting big on artificial intelligence (AI), spending billions as leading players look to avoid falling behind in the race.
At the same time, doubts are growing over whether it’s all worth the hype.
There is now a growing narrative in Silicon Valley and on Wall Street: the breakthroughs from large AI models may be slowing down.
Since the frenzied launch of ChatGPT two years ago, AI believers have maintained that improvements in generative AI would accelerate exponentially.
The reasoning was that delivering on the technology’s promise was simply a matter of resources; pour in enough computing power and data, and artificial general intelligence (AGI) would emerge.
AGI typically refers to hypothetical AI systems that would match or exceed humans on many intellectual tasks.
The capital expenditures of the four largest internet and software companies — Amazon.com, Microsoft, Meta Platforms and Alphabet — are set to total well over $200bn this year, according to a recent Bloomberg report.
Executives from each company have also warned investors that their splurge will continue next year, or even ramp up.
The spree underscores the extreme costs and resources consumed by the worldwide AI boom ignited by the arrival of ChatGPT.
However, there appear to be problems on the road to AGI.
Industry insiders are beginning to acknowledge that large language models (LLMs) don’t keep scaling endlessly higher simply by being fed more computing power and data.
One fundamental challenge is the finite amount of language-based data available for AI training.
According to Scott Stevenson, CEO of Spellbook, an AI legal-software firm that works with OpenAI and other providers, relying on language data alone for scaling is destined to hit a wall.
Sasha Luccioni, researcher and AI lead at startup Hugging Face, argues that a stall in progress was predictable, given companies’ focus on size rather than purpose in model development.
The AI industry contests these interpretations, maintaining that progress toward human-level AI is unpredictable.
“There is no wall,” OpenAI CEO Sam Altman posted recently on X, without elaboration.
Nevertheless, OpenAI has delayed the release of the awaited successor to GPT-4, the model that powers ChatGPT, because its increase in capability is below expectations, according to sources quoted by The Information.
Now, the company is focusing on using its existing capabilities more efficiently.
After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models, according to Bloomberg.
At Alphabet’s Google, an upcoming iteration of its Gemini software is not living up to internal expectations, according to three people with knowledge of the matter.
US-based Anthropic, meanwhile, has seen the timetable slip for the release of its long-awaited Claude model, 3.5 Opus.
The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems.
The recent setbacks also raise doubts about the heavy investment in AI and the feasibility of reaching an overarching goal these companies are aggressively pursuing: AGI.
The chief executives of OpenAI and Anthropic have previously said AGI may be only several years away.
Still, AI companies continue to pursue a more-is-better playbook. In their quest to build products that approach the level of human intelligence, tech firms are increasing the amount of computing power, data and time they use to train new models — and driving up costs in the process. As costs rise, so do the stakes and expectations for each new model under development.
Noah Giansiracusa, an associate professor of mathematics at Bentley University in Waltham, Massachusetts, said AI models will keep improving, but the rate at which that will happen is questionable.