AGI: Are We There Yet? (2025 Edition)
Every year, the “AGI timeline” debates intensify. Let’s take stock of where we actually are at the end of 2025.
Defining Terms
What is AGI?
No consensus exists, but common definitions:
| Definition | Description | Achieved? |
|---|---|---|
| Human-level at all tasks | Matches any human at any cognitive task | No |
| Economic value | Can do any job that can be done remotely | Approaching? |
| General reasoning | Transfers learning across domains | Partially |
| Self-improvement | Can improve its own capabilities | Limited |
The goalposts shift with each advance.
OpenAI’s Levels
OpenAI proposed a framework:
| Level | Description | Status 2025 |
|---|---|---|
| L1: Chatbots | Conversational AI | ✅ Achieved |
| L2: Reasoners | Human-level problem solving | ⚠️ Emerging |
| L3: Agents | Can take actions | ⚠️ Early |
| L4: Innovators | Aid in invention | ❌ Not yet |
| L5: Organizations | Can run companies | ❌ Not yet |
In this framing, L1 is done, L2 is emerging, and L3 is in early experiments.
2025 Progress
Reasoning Models
o1, R1, and similar reasoning models spend extra inference-time compute “thinking” before they answer:
- Multi-step mathematical proofs
- Code generation with planning
- Scientific problem-solving
- Still fail on novel challenges
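One published technique in this family is self-consistency: sample several independent reasoning chains and majority-vote their final answers. A minimal sketch of the voting shape, with a stub sampler standing in for the model (everything here is hypothetical; a real system would sample full chains of thought from an LLM):

```python
import collections
import random

def sample_answer(question, rng):
    """Stub sampler. A real system would sample a chain of thought
    from the model and extract its final answer; here we simulate
    noisy answers biased toward the correct one."""
    return rng.choice(["42", "42", "42", "41"])

def self_consistency(question, n=15, seed=0):
    """Sample n independent answers and return the majority vote."""
    rng = random.Random(seed)
    votes = collections.Counter(sample_answer(question, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```

Voting washes out occasional bad chains, which is part of why these models do better on multi-step math; it does nothing for problems where every sampled chain shares the same blind spot, which is why novel challenges still fail.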
Agents
Real-world task completion:
- Browse web, complete forms
- Multi-step research
- Code changes across files
- Still need supervision
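The supervision point is easiest to see in the control loop itself: most current agents are a model call inside a plan-act-observe loop with human-set guardrails. A minimal sketch with stub model and tools (every name here is hypothetical; a real agent would call an LLM and real tools):

```python
def model(state):
    """Stub policy: picks the next action from what it has seen so far.
    A real agent would call an LLM here."""
    if "search" not in state:
        return ("search", "market size for climate tooling")
    if "summarize" not in state:
        return ("summarize", state["search"])
    return ("done", state["summarize"])

# Stub tools; real ones would drive a browser, shell, or editor.
TOOLS = {
    "search": lambda query: f"results for: {query}",
    "summarize": lambda text: f"summary of: {text}",
}

def run_agent(max_steps=10):
    """Plan-act-observe loop with a hard step cap as a crude guardrail."""
    state = {}
    for _ in range(max_steps):
        action, arg = model(state)
        if action == "done":
            return arg
        state[action] = TOOLS[action](arg)  # observe the tool result
    raise RuntimeError("agent exceeded its step budget")
```

In this sketch the step cap is the only safety net; real deployments layer on approval gates and sandboxing, which is what “still need supervision” means in practice.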
Multimodal Integration
Vision, audio, and text in one model:
- Real-time video understanding
- Native speech generation
- Physical world comprehension improving
- Not yet embodied reliably
What’s Still Missing
Robust Reasoning
```
Human: "If I put a cup of water in the microwave for 30 seconds,
        then transfer the water to a bowl that was in the
        freezer overnight, what happens?"
AI:     [Often gets the physical reasoning wrong]
        [Common-sense gaps evident]
        [Edge cases fail]
```
The “stochastic parrots” critique still has teeth: better statistics, but not yet robust reasoning.
Persistent Learning
```
Session 1: "My name is Alex, I work on climate models."
Session 2: [No memory of previous session]
```
Models don’t learn from interactions (by design, for safety).
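The statelessness is concrete: a chat model only sees the message list passed in on each call, so nothing persists between sessions unless the application re-sends it. A sketch with a stub model (hypothetical, but structurally how real chat APIs behave):

```python
def chat(messages):
    """Stub for a stateless chat model: it can only 'know' what
    appears in the message list it is handed on this call."""
    context = " ".join(m["content"] for m in messages)
    if "Alex" in context:
        return "You work on climate models, Alex."
    return "I don't know anything about you."

# Session 1: the model sees the introduction.
session1 = [{"role": "user", "content": "My name is Alex, I work on climate models."}]
chat(session1)

# Session 2: a fresh message list -- nothing carries over.
session2 = [{"role": "user", "content": "What do I work on?"}]
chat(session2)  # the stub has no idea who is asking

# "Memory" features today mean re-injecting prior context by hand:
chat(session1 + session2)
```

Everything marketed as memory is this last line dressed up: stored text retrieved and prepended, not weights that learned from the interaction.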
Planning Over Long Horizons
```
Task: "Build a successful startup"
```
AI can:
- Generate a business plan
- Suggest marketing strategies
- Draft legal documents
AI cannot:
- Adapt to market feedback
- Handle novel challenges
- Maintain coherent strategy over months
Physical World Understanding
```
Question: "Can I fit this couch through that doorway?"
AI: [Makes assumptions]
    [Lacks spatial reasoning]
    [Can't interact with the physical world]
```
Embodied intelligence remains limited.
Expert Opinions
Optimists
| Expert | Timeline |
|---|---|
| Sam Altman (OpenAI) | “A few years” |
| Dario Amodei (Anthropic) | “2026-2027 for significant milestone” |
| Demis Hassabis (DeepMind) | “Within a decade” |
Skeptics
| Expert | View |
|---|---|
| Yann LeCun (Meta) | “Current approaches won’t reach AGI” |
| Gary Marcus | “Pattern matching, not reasoning” |
| Many academics | “We don’t even have the right definition” |
Median Estimate
Surveys of ML researchers:
- 50% probability of “high-level machine intelligence” by 2060
- Large uncertainty bands
- Definition-dependent
The Benchmark Problem
Benchmarks Saturate
| Benchmark | 2020 SOTA | 2025 SOTA |
|---|---|---|
| MMLU | 70% | 95%+ |
| HumanEval | 30% | 90%+ |
| MATH | 10% | 90%+ |
Are the tests too easy, or are the models genuinely this capable? Probably some of both.
New Harder Tests
ARC Prize, FrontierMath, and other recent benchmarks:
- Designed to require genuine reasoning
- Models still struggle
- But performance improves monthly
What Would AGI Look Like?
Signs We’re Close
- Scientific discoveries attributed to AI
- AI improving AI effectively
- Genuine transfer learning demonstrated
- Novel problem-solving without prompting
- Self-correction without human feedback
Signs We’re Far
- Basic reasoning failures persist
- Common sense gaps remain
- No autonomous scientific progress
- Embodied intelligence limited
- Safety/alignment unsolved
Implications for Developers
If AGI is Near (Optimist View)
- Learn to direct AI rather than code
- Focus on judgment and oversight
- Build AI management skills
- Prepare for rapid change
If AGI is Far (Realist View)
- Current skills remain valuable
- AI as tool, not replacement
- Focus on human-AI collaboration
- Solve real problems with current tech
Either Way
```
# Good advice regardless of timeline
1. Stay adaptable
2. Learn current AI tools
3. Provide judgment, not just output
4. Build human skills AI lacks
   - Leadership
   - Ethics
   - Physical creation
   - Empathy
```
My Take
We’re not close to AGI by strict definitions. We are close to AI that transforms significant parts of the economy.
What’s real:
- AI that exceeds human performance on narrow tasks
- Useful agents for defined scopes
- Automation of cognitive work at scale
What’s hype:
- Imminent superintelligence
- Human obsolescence
- “One more scaling jump gets us there”
Final Thoughts
The AGI question depends on your definition, your timeline, and what you include as “intelligence.”
More useful question: “What can AI do for me today, and how is that changing?”
Answer that question. The AGI debate can wait.
The singularity is always five years away.