👾 The Different Concepts of AGI: How OpenAI, Anthropic, and Google Compare, and When AGI Might Be Achieved
How the top AI labs differ in defining AGI, what goals they pursue, and why their timelines range from 2026 to decades away.

Scott Pelley: The end of disease?
Demis Hassabis: I think that's within reach. Maybe within the next decade or so, I don't see why not.
Few concepts in technology today inspire as much fascination—and confusion—as “Artificial General Intelligence” (AGI). DeepMind CEO Demis Hassabis recently told U.S. television audiences that “within the next five to ten years,” we may see systems capable not just of solving scientific problems, but of generating entirely new hypotheses. Meanwhile, OpenAI CEO Sam Altman has claimed that superintelligence may be only “a few thousand days” away. In contrast, Meta’s AI pioneer Yann LeCun remains skeptical, insisting that AGI is “not around the corner,” but rather a challenge that may take years—if not decades—to achieve.
What lies behind these contradictory statements? In this article, I compare the AGI definitions of the most important laboratories, show how they measure progress toward their goals, and explain why their timelines are so far apart.
OpenAI: AGI as an Economic Threshold
OpenAI officially defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Behind the scenes there is an even harder metric: under the agreement concluded with Microsoft in 2023, AGI counts as achieved only once a model can generate 100 billion dollars in profits.
The two companies signed an agreement in 2023 that defined AGI as a system that can generate $100 billion in profits, The Information reported on Thursday, citing documents it had obtained.
Against this backdrop, Altman’s blog sentence sounds almost sober: “It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.” For OpenAI, AGI is now primarily a productivity milestone that seems within reach given today’s computing power. Reportedly, OpenAI also continues to work internally with a five-level scale that marks distinct stages on the way to AGI.
Google DeepMind: AGI as Scientific Creativity
Demis Hassabis envisions something broader: machines that “don’t just solve problems, but invent them.” AlphaFold—the AI system that predicts protein folding—offers a glimpse of this future. For DeepMind, success isn’t measured in balance sheets, but in breakthroughs once driven solely by human curiosity. Accordingly, the company remains cautiously optimistic about its timeline: five to ten years.
Around two years ago, Google DeepMind also published a paper, “Levels of AGI,” outlining the various stages that need to be reached on the way to AGI.
Anthropic: “Powerful AI” Instead of a Marketing Label
Anthropic CEO Dario Amodei avoids the word AGI. In a CNBC interview, he calls it “a marketing term” and prefers to sketch a picture: “a country of geniuses in a datacenter” that could become reality within two to three years. In his essay “Machines of Loving Grace,” he writes of systems that are “smarter than a Nobel Prize winner” in most fields, possibly as early as 2026. For Anthropic, however, safety takes precedence over speed; its “Constitutional AI” approach is intended to establish rules before performance is scaled further.
What powerful AI (I dislike the term AGI) will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all.
I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that.
I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.
Meta & The Skeptics: AGI Needs a Paradigm Shift
LeCun considers the current LLM approach a dead end: the models have “no permanent memory, no understanding of the world, no ability to plan.” Truly intelligent systems, he argues, will arrive only after a change in architecture, and even then they are several years away at the earliest. His yardstick rests less on benchmarks than on cognitive abilities, a definition that automatically leads to longer timelines.
Why The Timelines Are Drifting Apart
The examples show:
- The narrower the definition (OpenAI: profits, Amodei: clear performance criteria), the shorter the time horizon.
- Those who place creativity or safety at the center (DeepMind, Anthropic) accept longer development loops.
- External factors shift the goalposts yet again: Microsoft ties AGI to contractual profit targets, Meta to new architectural concepts.
An interim conclusion emerges: AGI is less a fixed technical target than a moving narrative whose position is determined by the actors themselves.
Conclusion
The starting question was simple: How do leading labs define AGI—and when do they expect it to arrive? The analysis reveals a clear takeaway: there is no shared definition of AGI.
- OpenAI predicts a breakthrough before 2030, based on a strictly economic criterion.
- DeepMind expects the early 2030s, with a scientific focus.
- Anthropic believes “powerful AI” could arrive as early as 2026, but refuses to attach the AGI label to it.
- Meta sees truly “general” intelligence arriving only after a new paradigm, years to decades away.
The question “When will AGI arrive?” can therefore only be answered once we define what we actually want: more productivity, more scientific creativity, maximally controllable systems, or an entirely new understanding of intelligence. As soon as society, politics, and research agree on common benchmarks, the range of forecasts will shrink. Until then, AGI remains a repository for expectations and a projection screen for very different visions of the future.
—
Ready for more content from Kim Isenberg? Subscribe to FF Daily for free!
Kim Isenberg: Kim studied sociology and law at a university in Germany and has long been fascinated by technology. Since the breakthrough of OpenAI’s ChatGPT, Kim has been examining the influence of artificial intelligence on our society from a scientific perspective.