Yann LeCun’s AMI Labs: the “world model” startup betting on AI beyond LLMs

In late 2025, Yann LeCun (Turing Award winner and longtime chief AI scientist at Meta) confirmed he’s launching a new Paris-based AI research company focused on what he calls Advanced Machine Intelligence—often summarized as building “world models.” (LinkedIn)

The short version: LeCun’s thesis is that today’s large language models are useful but structurally limited, because they don’t truly learn the physical world, build persistent internal representations, or plan actions the way humans (or even children) can. His new company, commonly referred to as Advanced Machine Intelligence Labs (AMI Labs), aims to push that next step. (LinkedIn)

Quick facts (what’s confirmed so far)

Company: Advanced Machine Intelligence (AMI), commonly styled AMI Labs or "Advanced Machine Intelligence Labs." (LinkedIn)
Base: Paris (LeCun has said the company will be based there; French press also frames it as a Paris launch). (Le Monde.fr)
Focus: “World models” — AI systems built to model and reason about the physical world, not just generate text. (Le Monde.fr)
Leadership: LeCun says he will be Executive Chairman; Alexandre (Alex) LeBrun (Nabla co-founder) is transitioning to CEO of AMI Labs. (LinkedIn)
Meta relationship: Meta is expected to be a partner (LeCun has explicitly said “partner, not an investor”). (Le Monde.fr)
Funding chatter: Reports describe early talks to raise roughly €500M at around a €3B valuation (pre-launch), citing people familiar with discussions. (Reuters)
Early partnership: AMI Labs + Nabla announced an “exclusive strategic partnership” (Nabla says it will get first access to AMI’s emerging world model tech). (Nabla)

Why this matters: LeCun is making a very specific bet

A lot of “next-gen AI” talk is vague. LeCun’s isn’t.

He’s been consistent for years that LLMs are not the endgame for machine intelligence. In his announcement about leaving Meta, he described the goal as building systems that:

  • understand the physical world

  • have persistent memory

  • can reason

  • can plan complex action sequences (LinkedIn)

That combination (world understanding + memory + planning) is basically the missing core for reliable agents in robotics, autonomy, and even high-stakes decision support.

What “world models” actually are (in plain language)

A world model is an AI system that tries to build an internal representation of how the world works so it can:

  • predict what will happen next,

  • simulate “what-if” scenarios,

  • choose actions that achieve a goal,

  • and do it with stability and memory.

Think of the difference between:

  • Autocomplete (good at continuing text), versus

  • A mental simulator (good at predicting consequences in an environment).
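To make the "mental simulator" idea concrete, here is a toy planning loop in Python. Everything in it is hypothetical and illustrative: the `world_model` function stands in for a learned predictor, and the environment is a trivial 1D state. This is a sketch of the general pattern (simulate candidate actions, then pick the best), not AMI's actual architecture.

```python
from itertools import product

def world_model(state: float, action: float) -> float:
    """Hypothetical transition function: predict the next state given an action.
    In a real system this would be a learned model, not simple arithmetic."""
    return state + action

def plan(state: float, goal: float, actions=(-1.0, 0.0, 1.0), horizon: int = 5):
    """Pick the action sequence whose simulated rollout ends closest to the goal."""
    best_seq, best_dist = None, float("inf")
    # Exhaustive "what-if" search over short action sequences: imagine each
    # rollout inside the model before committing to anything in the real world.
    for seq in product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = world_model(s, a)  # simulate the consequence, don't act yet
        dist = abs(goal - s)
        if dist < best_dist:
            best_seq, best_dist = seq, dist
    return best_seq

seq = plan(0.0, 3.0)  # a 5-action plan whose steps sum to 3.0
```

The key point the sketch captures: the agent acts only after comparing imagined futures, which is exactly what pure next-token prediction does not do.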

LeCun argues LLMs are great tools, but they lack a “mental model,” general intuition, and robust interaction—things that show up when you try to make systems operate in the real world. (Le Monde.fr)

How AMI’s approach connects to real research (not just a slogan)

LeCun’s “world model” direction builds on a line of research associated with JEPA and V-JEPA (joint-embedding predictive architecture) models, which learn by predicting representations of missing or future information rather than generating raw pixels or tokens.

A concrete example from the research world:

  • V-JEPA 2 (a 2025 arXiv paper) describes training on over 1 million hours of internet video, then adapting with limited robot data to enable planning tasks on real robot arms—presented as progress toward models that understand and act in the physical world. (arXiv)

Even if you don’t buy every claim, the direction is clear: learn from video / multimodal streams → build internal state → plan actions.
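The "predict representations, not pixels" idea can be sketched in a few lines. This is a conceptual toy, not the V-JEPA implementation: the `encoder` and `predictor` functions below are trivial stand-ins for learned networks, and the loss simply compares embeddings.

```python
# JEPA-style objective, conceptually: train a predictor so that its guess for
# the *embedding* of a masked/future input matches the target's embedding.
# No pixel or token reconstruction is involved.

def encoder(x):
    """Hypothetical encoder: map raw input to an embedding (trivially here)."""
    return [v * 0.5 for v in x]  # stand-in for a learned network

def predictor(z_context):
    """Hypothetical predictor: guess the target embedding from context."""
    return [v + 0.1 for v in z_context]  # stand-in for a learned network

def jepa_loss(context, target):
    """Squared error in embedding space between prediction and target."""
    z_pred = predictor(encoder(context))
    z_tgt = encoder(target)  # in practice, often a momentum/EMA target encoder
    return sum((p - t) ** 2 for p, t in zip(z_pred, z_tgt))
```

The contrast with generative models is in the loss: error is measured between embeddings, so the model can ignore unpredictable surface detail (exact pixel noise) and focus on structure.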

Why Paris, and why now?

French coverage framed LeCun’s Paris move as partially strategic: to pursue “next-generation AI” research outside Silicon Valley’s current LLM obsession, and also as a visible win for France’s AI ambitions. (Le Monde.fr)

The timing also lines up with a broader shift: major labs and startups are increasingly treating world models as the next battleground:

  • Google DeepMind publicly positions Genie 3 as a “new frontier for world models.” (Google DeepMind)

  • Fei-Fei Li’s World Labs is explicitly about spatial intelligence and interacting with 3D worlds, and has shipped early products in that direction. (TechCrunch)

  • General Intuition raised major funding to build models for environments requiring deep spatial/temporal reasoning (starting from game interactions). (General Intuition)

So AMI Labs isn’t happening in a vacuum—it’s part of a real industry pivot toward environment-level intelligence.

The Nabla partnership is an interesting signal (regulated “agentic” AI)

Most “world model” startups talk robotics or simulation. Nabla’s press release frames world models as a path to:

  • more deterministic, auditable decision-making

  • simulation-based reasoning (“what-if”)

  • robust handling of continuous medical signals

  • a more credible regulatory path for autonomous/agentic systems (Nabla)

That’s notable because healthcare is exactly where “LLM-only” systems run into walls: reliability, auditability, and repeatability.

Also, the same release documents a planned leadership change: LeBrun transitions to CEO of AMI Labs while staying involved at Nabla as Chief AI Scientist/Chairman, with Nabla’s COO continuing operations during a CEO search. (Nabla)

What AMI Labs might build first (a grounded guess)

Nobody credible is claiming AMI will ship “AGI” next quarter. Based on what LeCun has publicly emphasized, the most likely early outputs are:

  1. Core world-model research (video + multimodal pretraining, latent prediction, memory)

  2. Planning toolchains that sit on top of those models (simulation + action selection)

  3. Partnership pilots (like Nabla) where “world model” components augment LLMs in constrained domains (LinkedIn)

In other words: expect research demos first, then vertical integrations where determinism and planning matter.

Risks and hard problems (why this is not guaranteed to work)

World models are a compelling idea, but they have real challenges:

  • Evaluation is harder than for text models (you need environment-based metrics, not just benchmarks).

  • Data is messy (video is huge; interaction data is expensive; aligning both is non-trivial).

  • Planning at scale can get computationally intense.

  • Reality gap (sim ≠ real world), especially if you want robotics or autonomy.

LeCun himself has repeatedly implied this is a longer-term paradigm shift—not an incremental “add one more layer to a transformer” story. (Le Monde.fr)

What this means for product teams (and why it matters for web + SaaS)

Even if you never touch a robot, “world model thinking” influences how software gets built:

  • Agents will need state and memory (not just chat logs).

  • Workflows will rely on simulation/preview (“what will happen if I do X?”).

  • Auditability becomes a product requirement (especially in finance/health/legal).

  • Multimodal inputs (screens, video, sensor data, files) become standard.

If your product roadmap includes AI features, the takeaway is simple: the next wave won’t be “better chat.” It’ll be systems that can reliably act.
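The list above can be sketched in ordinary application code. Everything here is hypothetical (the `BillingAgent` class and its methods are invented for illustration), but it shows the shape: persistent state, a "what-if" preview before committing, and an audit trail.

```python
# Minimal sketch of "world-model thinking" in an everyday software agent:
# persistent state plus a dry-run preview before committing an action.
# All names are hypothetical; this is illustrative, not a framework.

class BillingAgent:
    def __init__(self, balance: float):
        self.balance = balance          # persistent state, not just chat logs
        self.audit_log: list[str] = []  # auditability as a product requirement

    def preview(self, charge: float) -> float:
        """Simulate 'what happens if I apply this charge?' without acting."""
        return self.balance - charge

    def apply(self, charge: float) -> None:
        """Commit the action only after the simulated outcome is acceptable."""
        projected = self.preview(charge)
        if projected < 0:
            raise ValueError("charge would overdraw the account")
        self.balance = projected
        self.audit_log.append(f"charged {charge}, balance now {self.balance}")

agent = BillingAgent(100.0)
projected = agent.preview(30.0)  # simulate first ...
agent.apply(30.0)                # ... then act, leaving an audit entry
```

The design choice worth noting: `preview` and `apply` are separate, so the simulation path can be exposed to users ("here's what this will do") and to auditors, independently of execution.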

Sorca Marian

Founder, CEO & CTO of Self-Manager.net & abZGlobal.net | Senior Software Engineer

https://self-manager.net/