https://www.youtube.com/watch?v=21EYKqUsPfg
https://www.youtube.com/watch?v=7u-DXVADyhc
http://yann.lecun.com/
https://www.wsj.com/tech/ai/yann-lecun-ai-meta-0058b13c
Yann LeCun has been saying this many times since ChatGPT launched: LLMs can’t get us to AGI.
It’s not just about him leaving Meta, or Mark Zuckerberg putting a 28-year-old in charge of all AI development. He simply wants to push ahead with exciting research to drive the industry forward. While LLMs are already powerful enough to fascinate most non-CompSci users, true AI scientists like LeCun are under no illusions about how far away we are from genuine self-learning intelligence. And he plans to keep his research open and publish it for peer review, which is no longer compatible with Meta’s strategy.
Richard Sutton, the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson, also thinks LLMs are a dead end.
LLMs: Sophisticated Text Predictors, Not Reasoning Engines
Large language models are fundamentally probabilistic text generators—systems that predict the next token based on statistical patterns learned from training data. This core mechanism is their greatest strength and their critical limitation.
When you prompt an LLM, it doesn’t reason like humans do. Instead, it analyzes the context through attention mechanisms, calculates a probability distribution over its vocabulary, and selects the next token. This process repeats iteratively, token by token. It’s pattern matching at extraordinary scale, not thinking.
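To make that loop concrete, here is a minimal sketch of autoregressive next-token sampling. The tiny vocabulary and the `toy_logits` stand-in are assumptions for illustration only; in a real LLM the logits come from a transformer forward pass that attends over the full context.

```python
import numpy as np

# Illustrative toy vocabulary; a real LLM has tens of thousands of subword tokens.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "<eos>"]

def toy_logits(context):
    """Stand-in for the model: return unnormalised scores for the next token.
    A real transformer computes these by attending over every token in `context`."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(VOCAB))

def softmax(scores):
    """Turn raw scores into a probability distribution over the vocabulary."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def generate(prompt_tokens, max_new_tokens=8, seed=0):
    """Repeat the predict-sample-append loop until <eos> or the token budget."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = softmax(toy_logits(tokens))                   # distribution over VOCAB
        next_token = VOCAB[rng.choice(len(VOCAB), p=probs)]   # sample the next token
        tokens.append(next_token)
        if next_token == "<eos>":
            break
    return " ".join(tokens)

print(generate(["the", "cat"]))
```

Notice that nothing in this loop checks whether a continuation is true; it only asks which token is probable given the context, which is exactly why confident hallucination is possible.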
Yet here’s the paradox: despite being “just” next-token predictors, LLMs exhibit emergent abilities like chain-of-thought reasoning and multi-step problem-solving. They succeed because language itself encodes human knowledge and reasoning patterns. This creates an illusion of understanding that can be dangerously deceptive.
The critical gap emerges when problems require explicit reasoning, novel combinations, or knowledge that is sparse in the training data. LLMs struggle with lookahead planning, formal mathematics, metacognitive awareness, and tasks where they must recognize their own limitations. They can hallucinate confidently because they are optimizing for probable text, not truth.
Why this matters: people anthropomorphize LLMs because their outputs feel intelligent. But understanding them as probabilistic systems clarifies what they actually excel at (synthesizing patterns, retrieving knowledge, extrapolating from examples) versus what they cannot do: genuine reasoning or reliable computation. To put it plainly, don’t use them to derive logical or numerical answers; use them to summarise or reword textual information.
