AGI: Are We There Yet? (2025 Edition)


Every year, the “AGI timeline” debates intensify. Let’s take stock of where we actually are at the end of 2025.

Defining Terms

What is AGI?

No consensus exists, but common definitions:

| Definition | Description | Achieved? |
| --- | --- | --- |
| Human-level at all tasks | Matches any human at any cognitive task | No |
| Economic value | Can do any job remotely | Approaching? |
| General reasoning | Transfers learning across domains | Partially |
| Self-improvement | Can improve its own capabilities | Limited |

The goalposts shift with each advance.

OpenAI’s Levels

OpenAI proposed a framework:

| Level | Description | Status 2025 |
| --- | --- | --- |
| L1: Chatbots | Conversational AI | ✅ Achieved |
| L2: Reasoners | Human-level problem solving | ⚠️ Emerging |
| L3: Agents | Can take actions | ⚠️ Early |
| L4: Innovators | Aid in invention | ❌ Not yet |
| L5: Organizations | Can run companies | ❌ Not yet |

We’ve achieved L1, are closing in on L2, and are experimenting with L3.

2025 Progress

Reasoning Models

o1, R1, and similar models spend inference-time compute reasoning before they answer, which has produced large gains on math and coding benchmarks.

Agents

Agent systems now attempt real-world task completion: browsing, using tools, writing and executing code. Reliability over long task chains remains the weak point.

Multimodal Integration

Vision, audio, and text are converging into single models rather than stitched-together pipelines.

What’s Still Missing

Robust Reasoning

Human: "If I put a cup in the microwave for 30 seconds, 
        then transfer the water to a bowl that was in 
        the freezer overnight, what happens?"

AI: [Often gets physical reasoning wrong]
    [Common sense gaps evident]
    [Edge cases fail]

Skeptics argue that models remain “stochastic parrots” with better statistics, not true reasoners; failure modes like the one above are their evidence.

Persistent Learning

Session 1: "My name is Alex, I work on climate models."
Session 2: [No memory of previous session]

Models don’t learn from interactions (by design, for safety).
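
This statelessness is why developers bolt memory on from the outside. A minimal sketch of the pattern (the class and method names here are hypothetical, not any vendor's API): the model stays stateless, and "memory" is just saved notes re-injected into the next prompt.

```python
class SessionMemory:
    """Toy external memory. The model itself never changes;
    continuity comes from prepending saved notes to each prompt."""

    def __init__(self):
        self.notes = []  # facts worth carrying across sessions

    def remember(self, fact: str):
        self.notes.append(fact)

    def build_prompt(self, user_message: str) -> str:
        # The "memory" is plain text spliced into the prompt.
        context = "\n".join(f"- {n}" for n in self.notes)
        return f"Known about this user:\n{context}\n\nUser: {user_message}"

memory = SessionMemory()
memory.remember("Name: Alex")
memory.remember("Works on climate models")

prompt = memory.build_prompt("What was I working on?")
print(prompt)
```

This is retrieval, not learning: the model's weights never update, which is exactly the gap the section describes.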

Planning Over Long Horizons

Task: "Build a successful startup"

AI can:
- Generate a business plan
- Suggest marketing strategies
- Draft legal documents

AI cannot:
- Adapt to market feedback
- Handle novel challenges
- Maintain coherent strategy over months
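
One intuition for why month-long coherence is so hard: small per-step errors compound. A crude back-of-envelope model (assuming independent steps, which real plans are not) shows how fast reliability collapses with horizon length.

```python
# Probability an agent completes an n-step plan if each step
# independently succeeds with probability p (a deliberately crude model).
def plan_success(p: float, n: int) -> float:
    return p ** n

for n in (10, 100, 1000):
    print(f"{n:>4} steps at 99% per step: {plan_success(0.99, n):.1%}")
```

Even at 99% per-step reliability, a hundred-step plan succeeds barely a third of the time, which is why agents that look impressive on short tasks fall apart over long horizons.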

Physical World Understanding

Question: "Can I fit this couch through that doorway?"

AI: [Makes assumptions]
    [Lacks spatial reasoning]
    [Can't interact with physical world]

Embodied intelligence remains limited.

Expert Opinions

Optimists

| Expert | Timeline |
| --- | --- |
| Sam Altman (OpenAI) | “A few years” |
| Dario Amodei (Anthropic) | “2026-2027 for significant milestone” |
| Demis Hassabis (DeepMind) | “Within a decade” |

Skeptics

| Expert | View |
| --- | --- |
| Yann LeCun (Meta) | “Current approaches won’t reach AGI” |
| Gary Marcus | “Pattern matching, not reasoning” |
| Many academics | “We don’t even have the right definition” |

Median Estimate

Surveys of ML researchers have historically placed the median estimate decades out, though recent surveys show it drifting earlier; the spread across respondents remains enormous.

The Benchmark Problem

Benchmarks Saturate

| Benchmark | 2020 SOTA | 2025 SOTA |
| --- | --- | --- |
| MMLU | 70% | 95%+ |
| HumanEval | 30% | 90%+ |
| MATH | 10% | 90%+ |

Are tests too easy, or is AI too capable?
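
Part of the answer: near the ceiling, raw percentage points stop being comparable, and error rate is the more honest lens. Running the table's own numbers through that lens:

```python
# Near saturation, compare error rates rather than raw scores.
def error_reduction(old_score: float, new_score: float) -> float:
    """How many times smaller the error rate became."""
    return (100 - old_score) / (100 - new_score)

benchmarks = {"MMLU": (70, 95), "HumanEval": (30, 90), "MATH": (10, 90)}
for name, (old, new) in benchmarks.items():
    print(f"{name}: {old}% -> {new}% = {error_reduction(old, new):.0f}x fewer errors")
```

A 70% to 95% jump is a 6x reduction in errors, but it also means only five points of headroom remain, so the benchmark can no longer distinguish the next generation of models.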

New Harder Tests

ARC Prize, FrontierMath, and other new benchmarks are designed to resist saturation by testing novel problem-solving rather than recallable knowledge.

What Would AGI Look Like?

Signs We’re Close

Signs We’re Far

Implications for Developers

If AGI is Near (Optimist View)

If AGI is Far (Realist View)

Either Way

Good advice regardless of timeline:

1. Stay adaptable
2. Learn current AI tools
3. Cultivate judgment, not just output
4. Build human skills AI lacks
   - Leadership
   - Ethics
   - Physical creation
   - Empathy

My Take

We’re not close to AGI by strict definitions. We are close to AI that transforms significant parts of the economy.

What’s real: reasoning models that measurably improve, early but usable agents, and genuine economic impact on knowledge work.

What’s hype: imminent human-level generality, recursive self-improvement, and any confident timeline.

Final Thoughts

The AGI question depends on your definition, your timeline, and what you include as “intelligence.”

More useful question: “What can AI do for me today, and how is that changing?”

Answer that question. The AGI debate can wait.


The singularity is always five years away.
