AI seems like it’s everywhere — doing everything from suggesting email subject lines to powering our smart homes.
But has it reached its peak?
Ask AI leaders like Sam Altman and Elon Musk and you’re likely to hear a firm “no”. Altman, in particular, has been vocal about his belief that AI will eventually surpass human intelligence. But what if we’re already seeing signs of the opposite? What if, instead of accelerating, AI is starting to plateau?
AI isn’t evolving on its own. It doesn’t learn like a human: there’s no gut instinct, emotion, or lived experience behind its development. Its capabilities are tied directly to the data we give it. And when it comes to that data, even Altman and Musk would have to acknowledge that we’re beginning to hit a wall.
So while AI may not have peaked yet, it might not be far off.
Scraping the bottom of the web
Most of the growth we’ve seen in AI so far has come from feeding models huge amounts of data, scraped from articles, academic journals, websites, and social media platforms. But that supply is starting to dry up.
It’s what some experts are calling “Peak AI”. OpenAI co-founder Ilya Sutskever has even compared training data to fossil fuel: a finite resource that’s easy to exhaust and impossible to replenish.
That’s where the issue lies. Without new data to train on, even the most sophisticated models will start to stagnate. And for businesses relying on AI to do more of the heavy lifting, that’s a real concern.
When AI feeds itself
As new training data becomes scarce, a new risk is emerging: what happens when AI starts learning from its own output? This closed loop, where systems are trained on recycled or AI-generated data, can lead to a steady decline in performance, a scenario researchers call “model collapse.”
For businesses that rely on AI in their workflows, this poses a serious threat. Model collapse can cause tools to produce inaccurate outputs — and in some instances, become entirely unreliable.
The lesson is simple: if the quality of training data slips, so will the results. Garbage in, garbage out.
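To make that feedback loop concrete, here’s a minimal toy simulation, a sketch rather than a real training pipeline. The “model” is just a table of word frequencies that gets re-estimated each generation from its own output; the vocabulary size, sample size, and Zipf-style distribution are illustrative assumptions, not measurements from any real system.

```python
import numpy as np

# Toy sketch of model collapse: each generation, a "model" learns word
# frequencies from its training set, then produces the next generation's
# training set by sampling from what it learned. Rare words that happen
# not to be sampled vanish for good; the model can't reinvent data it
# has never seen. All numbers here are illustrative assumptions.

rng = np.random.default_rng(seed=42)

vocab_size = 1_000   # hypothetical vocabulary
sample_size = 2_000  # hypothetical training-set size per generation

# Generation 0: "real" data with a long tail of rare words (Zipf-like).
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()

for generation in range(1, 11):
    # Generate a training set by sampling from the current model.
    sample = rng.choice(vocab_size, size=sample_size, p=probs)
    # "Retrain": the next model is just the empirical frequencies of its
    # (now entirely synthetic) training data.
    counts = np.bincount(sample, minlength=vocab_size)
    probs = counts / counts.sum()
    surviving = int((counts > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab_size} words survive")
```

Run it and the surviving vocabulary shrinks generation after generation: the tail of rare words disappears first, and once a word is gone it never comes back. That steady narrowing is the same dynamic “garbage in, garbage out” warns about.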
Why synthetic data can’t be a true replacement
To address the data shortage, many businesses are turning to synthetic alternatives, like AI-generated survey responses and simulated insights, designed to mimic real-world behaviours.
But depending too heavily on synthetic data comes with its own risks. Without meaningful human input, there’s a danger that AI ends up falling back into a cycle of recycled, synthetic data, nudging us further toward model collapse.
Over time, this can repeat and amplify flaws or biases from older data, making each new iteration less accurate and more detached from reality. That’s a problem for any business trying to base decisions on those outputs.
While AI may sound convincingly human, it doesn’t actually think like one. It draws from patterns it has seen before, meaning that synthetic data lacks the nuance that comes from real human insight.
My advice for businesses? Used sparingly, synthetic data can help plug small gaps. But AI performs best when it’s rooted in reality.
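As a rough illustration of why staying rooted in reality matters, here’s a variation on the earlier sketch in which each generation’s training set keeps a slice of real data alongside the synthetic output. The 20% fraction is a hypothetical choice for illustration, not a recommended ratio.

```python
import numpy as np

# Variation on the collapse sketch above: each generation's training set
# keeps a fixed slice of real, human-generated data alongside the model's
# own synthetic output. The 20% fraction is a hypothetical choice for
# illustration, not a recommended ratio.

rng = np.random.default_rng(seed=42)

vocab_size = 1_000
sample_size = 2_000
real_fraction = 0.2  # hypothetical share of real data per generation

# The "real world": a fixed Zipf-like distribution with a long tail.
real_probs = 1.0 / np.arange(1, vocab_size + 1)
real_probs /= real_probs.sum()

probs = real_probs.copy()
for generation in range(1, 11):
    n_real = int(sample_size * real_fraction)
    # Mostly synthetic output, topped up with a constant stream of real data.
    synthetic = rng.choice(vocab_size, size=sample_size - n_real, p=probs)
    real = rng.choice(vocab_size, size=n_real, p=real_probs)
    counts = np.bincount(np.concatenate([synthetic, real]), minlength=vocab_size)
    probs = counts / counts.sum()
    surviving = int((counts > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab_size} words survive")
```

Because the real slice always covers the full vocabulary, rare words can re-enter the pool, and the distribution stabilises instead of steadily evaporating. In this toy setting, at least, a modest anchor of human-generated data is what keeps the loop from closing.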
AI has reached a turning point, not a plateau
So, has AI reached its peak? Not quite. But continued progress isn’t guaranteed. The growth we’ve seen so far has been driven by vast amounts of data, and it’s becoming clear that this momentum can’t be sustained.
What comes next is a turning point: a shift from quantity to quality. Businesses can’t rely on the sheer volume of data, or on synthetic inputs, to deliver results. Real-world insights, grounded in human experience, are what will keep AI useful and relevant.
It’s not about having more data; it’s about having better data.
