
The Great AI Mutation: How Teaching Machines to Think Triggered a Second Intelligence Boom
By mid-2024, AI development faced a plateau—larger models like ChatGPT and Gemini failed to improve despite massive investments. The breakthrough? Teaching AIs to reason, not just predict. Enter reasoning models like o1 and DeepSeek R1. These “thinking” machines marked the dawn of a second AI boom, democratizing superintelligent tools and reshaping the future of AGI.
In mid-2024, the race for artificial general intelligence (AGI) ground to a halt. Despite billions invested and countless terabytes of data consumed, the models hailed as revolutionary (ChatGPT, Gemini, Claude) stopped improving. The formula was broken. The logic had always been “bigger is better”: more data, more parameters, smarter answers.
But growth stalled. There was no more internet left to read. Every book, every tweet, every scrap of public knowledge had been absorbed. AI models were starving, their training fuel nearly exhausted. A reckoning had arrived.
Yet, from this stagnation emerged a radical shift. A pivot from raw scale to cognitive depth. The answer wasn’t more data—it was better thinking.
The Turning Point: From Prediction to Reasoning
The AI community realized it had missed something fundamental: humans don’t just react—we reflect. Inspired by Nobel laureate Daniel Kahneman’s “Thinking, Fast and Slow,” AI researcher Noam Brown reimagined machine learning through the lens of two cognitive systems:
- System 1: Fast, intuitive, automatic.
- System 2: Slow, deliberate, analytical.
Traditional language models like GPT operated purely in System 1—generating plausible text from statistical patterns. But they couldn’t reflect, reconsider, or reason.
Brown’s solution? Slow the AI down. Introduce deliberate pauses. In experiments with Libratus, the poker bot he co-developed at Carnegie Mellon, forcing it to “think” for roughly 20 seconds before acting improved its results as much as training on roughly 100,000 times more data.
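One simple way to picture how extra deliberation can substitute for extra training is best-of-N sampling: instead of committing to its first answer, the system spends its “thinking budget” generating several candidates and keeping the best one. The toy solver below is a hypothetical sketch, not Brown’s actual method; `propose_answer` and `score` stand in for a trained model and a verifier.

```python
import random

random.seed(42)

def propose_answer(question_value: float) -> float:
    """Stand-in for a model's System 1 guess: fast but noisy."""
    return question_value + random.gauss(0, 10.0)

def score(candidate: float, question_value: float) -> float:
    """Stand-in verifier: higher is better (closer to the truth)."""
    return -abs(candidate - question_value)

def answer_with_budget(question_value: float, budget: int) -> float:
    """Spend `budget` proposals of "thinking time", keep the best candidate."""
    candidates = [propose_answer(question_value) for _ in range(budget)]
    return max(candidates, key=lambda c: score(c, question_value))

truth = 100.0
fast = answer_with_budget(truth, budget=1)    # System 1: one quick guess
slow = answer_with_budget(truth, budget=50)   # System 2: deliberate search
print(f"1 sample error:   {abs(fast - truth):.2f}")
print(f"50 samples error: {abs(slow - truth):.2f}")
```

Note that this toy scorer peeks at the true answer; real systems rely instead on learned verifiers, self-consistency voting, or game-tree search, as in poker.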
The Vegas Breakthrough and Birth of o1
In 2017, Libratus entered the poker arena and beat four of the world’s best heads-up no-limit hold’em players, armed not with more training data but with reasoning time. Brown later brought this insight to OpenAI, and in late 2024 the result arrived: o1, the first of OpenAI’s reasoning language models (RLMs). For the first time, ChatGPT wasn’t just a response engine. It was a thinker.
It could solve complex puzzles step by step. Explain its logic. Make and correct its own mistakes. Think aloud. This was a radical leap.
The Rise of DeepSeek: Open Source Superintelligence?
But the real shock came from the East.
In early 2025, a little-known Chinese startup, DeepSeek, released R1: a powerful, open-source RLM that rivaled OpenAI’s o1 and Google’s Gemini models. It wasn’t just smart. It was free, unrestricted, and possibly trained using a controversial method: model distillation, allegedly drawing on outputs from OpenAI’s GPT-4o.
Despite ethical concerns, DeepSeek R1 democratized AI reasoning. Anyone could integrate it into apps, tools, and services. Within weeks, Chinese tech giants and global startups had adopted R1. For the first time, intelligence rivaling U.S. tech titans wasn’t locked behind paywalls.
Synthetic Data and the Hunger for Knowledge
The second bottleneck facing AI was data scarcity. Once the internet had been consumed, what could fuel the next evolution?
The answer: synthetic data. AIs began generating data for themselves. Simulations, imagined patient profiles, infinite poker hands, self-written code—these artificial datasets allowed machines to learn autonomously.
In mathematics and programming, models no longer needed humans to improve. They had begun to teach themselves.
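In domains with a mechanical source of truth, the loop is easy to picture: a program invents problems and computes verified answers, producing unlimited training pairs with no human labeling. A minimal, hypothetical sketch for arithmetic:

```python
import random

def make_synthetic_pair(rng: random.Random) -> tuple[str, int]:
    """Generate one (problem, verified answer) training example."""
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return f"What is {a} {op} {b}?", answer

rng = random.Random(0)
dataset = [make_synthetic_pair(rng) for _ in range(1000)]
print(dataset[0])  # one question string paired with its verified answer
```

The same principle scales to code (run the tests to label a solution correct) and to games (play the hand out to see who wins): the verifier, not a human, supplies the label.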
AI Learns to Reflect: The Reasoning Revolution
What exactly makes a reasoning model so different?
- It pauses.
- It plans.
- It questions.
- It corrects.
- It debates with itself.
Models like o3 and DeepSeek R1 demonstrate internal monologue—like watching a student solve a math problem on the blackboard. This isn’t parroting. This is cognition.
Even complex human tasks, like organizing a wedding seating chart—complete with rival family members, photo angles, and noise levels—are now handled more elegantly by RLMs than by earlier models.
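The pause-plan-correct cycle above can be written down as a plain loop: draft an answer, critique it, revise, and repeat until the critique passes. The sketch below is hypothetical; the draft, critique, and revise functions stand in for model calls, applied to a wedding-style constraint (two feuding guests must not share a table).

```python
GUESTS = ["Ana", "Ben", "Cara", "Dan", "Eve", "Finn"]
RIVALS = {("Ana", "Ben")}   # hypothetical feuding pair: must not share a table
TABLES = 3

def draft(guests, tables):
    """System 1: a fast, naive first draft (fill tables in order)."""
    size = -(-len(guests) // tables)  # ceiling division
    return [guests[i:i + size] for i in range(0, len(guests), size)]

def critique(plan):
    """Self-check: list every (table, guest) involved in a rivalry."""
    return [(i, a) for i, table in enumerate(plan)
            for a, b in RIVALS if a in table and b in table]

def revise(plan, issues):
    """Targeted fix: move one offender to the least-crowded other table."""
    table_idx, guest = issues[0]
    plan[table_idx].remove(guest)
    target = min((t for t in range(len(plan)) if t != table_idx),
                 key=lambda t: len(plan[t]))
    plan[target].append(guest)
    return plan

plan = draft(GUESTS, TABLES)
for _ in range(10):           # the "thinking" loop: critique, then revise
    issues = critique(plan)
    if not issues:
        break
    plan = revise(plan, issues)
print(plan)
```

The point is the structure, not the seating logic: the first draft is cheap and wrong, and quality comes from the critique-and-revise iterations, which is exactly the extra inference-time work that makes reasoning models slower and smarter.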
From Innovation to Disruption: The Economic Impact
This leap in reasoning triggered a seismic shift in the AI economy.
- Training Efficiency: New models require less data to learn more.
- Inference Complexity: They’re slower but vastly smarter, and all that extra test-time compute is great news for chipmakers like NVIDIA.
- Cost: Capabilities once priced in the tens of thousands of euros per month now cost a few hundred.
- Accessibility: Anyone can now build apps powered by world-class AI.
Suddenly, AI wasn’t a luxury for tech giants. It was a toolkit for creators, developers, and small businesses worldwide.
Are We There Yet? The Road to AGI
Are o3 and DeepSeek R1 true AGI? Not quite. But they’re close.
Researchers introduced the ultimate exam: Humanity’s Last Exam, roughly 3,000 expert-written questions spanning about 100 disciplines, designed to be too hard for any single human to pass. Some current models come eerily close. Projections suggest that by late 2025, at least one model may pass.
OpenAI, Google, and others are already recruiting for a “post-superintelligence” world. It’s no longer a matter of if—but when.
The Risk of Complacency: Thinking for Us
But with great intelligence comes a subtle threat. As models get better at thinking, humans may get worse at it.
- GPS made us forget how to navigate.
- Autocorrect made us worse at spelling.
- AI that thinks might make us forget how to reason.
These tools are powerful allies—but they’re not replacements for human reflection, creativity, and decision-making. If we become passive consumers of AI-generated answers, we risk becoming passengers in our own lives.
A New Era Begins
From poker tables in Vegas to open-source labs in China, the reasoning AI revolution is real—and it’s here. These models pause. They reflect. They outperform. And they challenge us to think harder about our role in the world they are helping to reshape.
Whether this ends in a utopia of augmented intelligence or a dystopia of outsourced thought will depend on how we use this technology.
The next chapter belongs not just to the machines—but to us.
#ArtificialIntelligence, #ReasoningAI, #DeepSeekR1, #ChatGPT, #SamAltman, #DanielKahneman, #Libratus, #AGI, #SyntheticData, #AIRevolution