The Day After AGI: What Demis Hassabis and Dario Amodei said at the World Economic Forum

Davos has a way of turning sci-fi titles into operational questions.

At the World Economic Forum, a session called “The Day After AGI” put two of the most influential AI builders on the same stage:

  • Demis Hassabis — co-founder and CEO of Google DeepMind (and the “AI-for-science” builder behind AlphaGo → AlphaFold).
  • Dario Amodei — co-founder and CEO of Anthropic (one of the key “safety-first” labs pushing frontier model capability while building formal guardrails).
  • Moderated by Zanny Minton Beddoes, editor-in-chief of The Economist.

The interesting part wasn’t the buzzword “AGI.” It was the framing: what happens after AI stops being a tool that answers and becomes a system that plans, executes, improves itself, and operates across the economy at scale?

That’s the “day after” problem.


1) First, who are these two people in AI history?

Demis Hassabis: from breakthrough demos to “AI for discovery”

Hassabis is one of the rare leaders who has consistently pushed the idea that AI’s real endgame is not chatbots but general problem-solving, applied to science and medicine.

A short timeline that matters:

  • DeepMind (2010): founded around the ambition of building general intelligence systems.
  • AlphaGo (2016): a proof point for deep reinforcement learning and long-horizon planning in an enormous search space.
  • AlphaFold (2020+): AI-driven protein structure prediction that moved from “research milestone” to “global scientific infrastructure.”
  • AI-for-science worldview: Hassabis tends to describe “AGI” as a system that can generate new hypotheses and accelerate discovery—not just automate office work.

If you want one sentence: Hassabis represents the “build general capability, then point it at the hardest problems” tradition.

Dario Amodei: frontier capability + safety as a core engineering constraint

Amodei represents a different arc: researchers who helped push large-scale model training forward and then centered the question, “how do we keep this safe as capability scales?”

A short timeline that matters:

  • Senior research leadership at OpenAI (VP of Research) during the GPT-2 and GPT-3 era, before leaving to co-found Anthropic.
  • Anthropic (founded 2021): structured explicitly around building powerful models with safety methods treated as first-class engineering.
  • Constitutional AI: a training approach where models learn to critique and revise outputs against an explicit set of principles—trying to make “alignment” more systematic than ad-hoc filtering.
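
To make the idea concrete, here is a conceptual sketch of a critique-and-revise loop in Python. This is not Anthropic’s actual training pipeline; `call_model` and the two example principles are hypothetical stand-ins for real model calls and a real constitution.

```python
# Conceptual sketch of a critique-and-revise loop in the spirit of
# Constitutional AI. NOT Anthropic's actual pipeline; `call_model` is a
# hypothetical stand-in for any LLM completion function.

PRINCIPLES = [
    "Avoid helping with clearly harmful requests.",
    "Be honest about uncertainty instead of guessing.",
]

def call_model(prompt: str) -> str:
    """Hypothetical LLM call; wire up a real client in practice."""
    raise NotImplementedError

def critique_and_revise(user_prompt: str) -> str:
    draft = call_model(user_prompt)
    for principle in PRINCIPLES:
        # The model critiques its own draft against one explicit principle...
        critique = call_model(
            f"Critique this response against the principle '{principle}':\n\n{draft}"
        )
        # ...then rewrites the draft in light of that critique.
        draft = call_model(
            f"Revise the response to address this critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft
```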

If you want one sentence: Amodei represents “capability is accelerating; safety and governance must scale at the same speed.”


2) What WEF meant by “The Day After AGI”

WEF’s framing wasn’t “will AGI happen?” It was closer to:

  • A credible pathway is increasingly visible because scaling + multimodal systems + agentic models are converging.
  • But the “day after” depends on what we do about governance, safety, labor disruption, and security while we’re still early enough to steer outcomes.

That’s why this panel matters: it’s not a philosophical debate. It’s a strategic planning meeting for the global economy—just held in public.


3) The core tension: capability is rising fast, but the last mile is messy

Amodei’s stance (as it came across)

Amodei’s message was essentially:

  • We’re approaching a zone of “incredible capabilities.”
  • The next few years are the critical window where rules, evaluations, and deployment constraints either get built—or we lock in a chaotic race dynamic.

He tends to talk like someone staring at a compounding curve: once systems can produce useful work across domains, the slope changes socially (jobs, education, advantage) and geopolitically (security, misuse).

Hassabis’s stance (as it came across)

Hassabis sounded more like:

  • The curve is real, but “AGI” isn’t a single trick.
  • Certain tasks are moving fast (especially where answers are verifiable: code, math).
  • But to get to robust, general intelligence you still need “missing ingredients” — the kind that show up when systems must operate in the open world, plan reliably, and generate genuinely novel insight.

In other words: capability acceleration is undeniable, but full generality is not just bigger models.


4) “Missing ingredients” — what that phrase really implies

One of the most useful ideas from this Davos discussion is the notion that we can be simultaneously:

  • very close to massive economic impact
  • and still missing pieces for the strongest definition of AGI

Here’s the practical interpretation of “missing ingredients,” translated into engineering terms:

A) Planning and long-horizon reliability

Getting “good answers” is not the same as:

  • setting a multi-step goal,
  • creating a plan,
  • executing safely across tools and systems,
  • recovering from failure,
  • and doing it repeatedly with high confidence.

That’s what real-world deployment requires.
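
As a rough illustration of what long-horizon reliability demands, here is a minimal Python sketch of a plan-execution loop with bounded retries and a safe abort. The step functions and retry budget are assumptions, not any lab’s agent framework.

```python
# Minimal sketch of a long-horizon execution loop: run planned steps in
# order, retry on failure, and abort safely when the recovery budget is
# exhausted. Step functions are hypothetical placeholders.

from typing import Callable

def run_plan(steps: list[Callable[[], bool]], max_retries: int = 2) -> bool:
    """Execute steps in order; each step reports success or failure."""
    for i, step in enumerate(steps):
        for attempt in range(1 + max_retries):
            if step():
                break  # step succeeded, move on to the next one
            if attempt < max_retries:
                print(f"step {i} failed (attempt {attempt + 1}), retrying")
        else:
            # Recovery budget exhausted: stop rather than push on blindly.
            print(f"aborting: step {i} could not complete safely")
            return False
    return True

# Example: two trivial steps that succeed immediately.
assert run_plan([lambda: True, lambda: True]) is True
```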

B) Strong world models (not just text prediction)

Many breakthroughs come when systems can represent:

  • how the physical world behaves,
  • how complex systems interact,
  • what causes what,
  • and how to test hypotheses.

That matters for “AI for science,” robotics, and anything involving uncertainty.

C) Continual learning without catastrophic failure

The day-after world is not a static benchmark. Models need to adapt safely without:

  • forgetting,
  • drifting,
  • or getting exploited.

So the “missing ingredients” aren’t academic. They define whether AGI becomes:

  • a reliable infrastructure layer, or
  • a fragile power tool.
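
One way to make “drifting” operational, as a sketch: re-run a fixed eval set on a schedule and alarm when a rolling average regresses past a frozen baseline. The window size and tolerance below are illustrative assumptions, not a standard.

```python
# Sketch of an operational drift check: compare a rolling window of
# recent eval scores against a frozen baseline. Window and tolerance
# values are illustrative assumptions.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float = 3.0, window: int = 5):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)  # only the most recent scores count

    def record(self, score: float) -> bool:
        """Record a new eval score; return True if drift is detected."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.tolerance

# Example: scores sliding from 84 toward 78 eventually trip the monitor.
monitor = DriftMonitor(baseline=84.0)
results = [monitor.record(s) for s in (84.0, 82.5, 80.0, 79.0, 78.0)]
assert results[-1] is True
```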

5) Agents, not chat: the real “day after” product shift

If you run a tech business, the most actionable part of the whole AGI debate is this:

The next platform shift is from “AI that answers” to “AI that does.”

Agentic systems mean:

  • AI can operate software,
  • coordinate workflows,
  • run projects,
  • write and ship code,
  • handle customer operations,
  • and supervise other AIs.

This is where both leaders implicitly converge: even if we disagree on the strict AGI definition, the agent era is already arriving—and it changes competition.

In the day-after world:

  • Advantage comes from process + data + control, not from “prompt skill.”
  • The best teams design systems where AI produces drafts, plans, experiments, and code—while humans make the final calls and own accountability.

6) Jobs: the near-term disruption is junior work

A striking theme from Davos coverage around this panel is that both leaders pointed at the same immediate pressure point:

entry-level knowledge work.

The logic is simple:

  • junior work often involves “first drafts,” “first passes,” and “support tasks”
  • those are exactly the tasks AI is becoming good at
  • so organizations may hire fewer juniors—not because work disappears, but because the leverage per senior increases

Hassabis’s practical advice (as quoted in WEF coverage) was blunt: if he were addressing undergrads now, he’d tell them to become “unbelievably proficient” with these tools.

So the day-after response isn’t panic. It’s adaptation:

  • education shifts from memorization → judgment,
  • careers shift from “do the first pass” → “own the outcome.”

7) Governance: the window is now, not later

A key takeaway from Amodei’s side of the discussion (as reported through WEF’s Davos highlights) is that regulation and governance choices made in the next few years will shape the long-run outcome.

Translated into concrete needs, this implies:

  • standardized model evaluations (capability + misuse risk),
  • safety testing and red-teaming as deployment gates,
  • incident reporting norms,
  • secure deployment patterns for high-risk capabilities,
  • and some form of cross-border coordination (because AI doesn’t respect borders).

The “day after” problem is not just building smarter systems. It’s building institutions and technical controls that can handle smarter systems.
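
As a hypothetical illustration of “evaluations as deployment gates,” the sketch below blocks a release unless every eval falls within its threshold. The category names and limits are invented for illustration; real gates would come from policy, not a script.

```python
# Illustrative deployment gate: block a release unless capability and
# misuse evals pass. Eval names and thresholds are invented examples.

EVAL_THRESHOLDS = {
    "dangerous_capability_uplift": 0.01,  # max acceptable rate
    "jailbreak_success_rate": 0.05,
    "harmful_compliance_rate": 0.02,
}

def gate_deployment(eval_results: dict[str, float]) -> bool:
    """Return True only if every eval is within its threshold."""
    failures = [
        name for name, limit in EVAL_THRESHOLDS.items()
        if eval_results.get(name, 1.0) > limit  # missing eval fails closed
    ]
    for name in failures:
        print(f"BLOCKED by eval: {name}")
    return not failures

# Example: one failing eval blocks the whole release.
results = {
    "dangerous_capability_uplift": 0.004,
    "jailbreak_success_rate": 0.09,  # above the 0.05 limit
    "harmful_compliance_rate": 0.01,
}
assert gate_deployment(results) is False
```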


8) What founders and tech teams should do in 2026 (non-hype checklist)

1) Treat AI literacy as a baseline skill

Not “prompting.” Real literacy:

  • what models do well,
  • where they fail,
  • how to verify outputs,
  • and how to build safe workflows.

2) Start building agent-ready processes

You don’t need “AGI” to benefit from agents. Pick 2–3 workflows where your company already has structure:

  • support triage,
  • sales ops,
  • internal documentation,
  • product QA,
  • analytics summaries,
  • code maintenance.

Then build:

  • clear inputs,
  • clear outputs,
  • human sign-off,
  • logging and audit.
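
A minimal sketch of that pattern, applied to support triage: structured input, structured output, an explicit human sign-off gate, and an append-only audit log. The ticket schema and `suggest_triage` are hypothetical, not any specific product’s API.

```python
# Sketch of an agent-ready workflow: clear input, clear output, human
# sign-off, and an audit log. `suggest_triage` is a hypothetical model call.

import json
import time
from typing import Callable

def suggest_triage(ticket: dict) -> dict:
    """Hypothetical model call returning a structured triage suggestion."""
    raise NotImplementedError

def triage_ticket(ticket: dict, approver: str,
                  signoff: Callable[[dict], bool]) -> dict:
    suggestion = suggest_triage(ticket)   # clear input -> clear output
    approved = signoff(suggestion)        # human reviews the concrete suggestion
    # Log every decision, approved or not, so the workflow is auditable.
    with open("triage_audit.log", "a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "ticket_id": ticket.get("id"),
            "suggestion": suggestion,
            "approver": approver,
            "approved": approved,
        }) + "\n")
    return suggestion if approved else {"status": "escalate_to_human"}
```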

3) Invest in your internal “system of record”

Agents amplify whatever you feed them. If your data is messy, your output is messier—just faster.

4) Update hiring: less junior volume, more leverage design

Plan for:

  • fewer pure “first-pass” roles
  • more roles that combine domain judgment + AI operation + verification

5) Build governance like you build security

Treat safety as engineering:

  • access control,
  • monitoring,
  • evals,
  • and explicit policy boundaries.

Because in the day-after world, “move fast” without controls becomes an existential risk—especially for companies integrating high-capability systems into real operations.
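
As one concrete starting point, treating safety like security can begin with a deny-by-default tool allowlist per agent role. A minimal sketch, with invented role and tool names:

```python
# Sketch of "policy boundaries" as code: an explicit allowlist of tools
# per agent role, deny-by-default. Role and tool names are invented.

TOOL_POLICY = {
    "support_agent": {"search_docs", "draft_reply"},
    "ops_agent": {"read_metrics", "open_ticket"},
}

def authorize(role: str, tool: str) -> bool:
    """Deny by default: only explicitly allowed (role, tool) pairs pass."""
    allowed = tool in TOOL_POLICY.get(role, set())
    if not allowed:
        # A real system would also send this event to a monitoring pipeline.
        print(f"DENIED: role '{role}' may not call '{tool}'")
    return allowed

assert authorize("support_agent", "draft_reply") is True
assert authorize("support_agent", "delete_database") is False
```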


Closing thought

The real message of “The Day After AGI” wasn’t a timeline.

It was a warning about tempo:

  • capability is accelerating,
  • institutions adapt slowly,
  • and the gap between the two is where disruption and risk live.

Hassabis represents the upside: AI that accelerates science and discovery. Amodei represents the constraint: if we don’t scale governance with capability, the transition gets ugly.

The day after AGI isn’t a moment. It’s a new operating environment—and the teams building for it now will define the winners in the next decade.

Sorca Marian

Founder, CEO & CTO of Self-Manager.net & abZGlobal.net | Senior Software Engineer

https://self-manager.net/