Notes on LLM anti-intelligence
Worth repeating: Do not confuse retrieval with reasoning. Do not confuse rote learning with understanding. Do not confuse accumulated knowledge with intelligence.
https://x.com/ylecun/status/1845193021584728365
–
the illusion that LLMs are intelligent comes from their massive scale. it is hard to visualize, but these models memorized the WHOLE internet. everything you've ever asked one has either been solved before, or is a simple combination of existing solutions. but that's still an illusion.
when LLMs face a problem that requires a NEW solution - one of an original shape, one they have never seen before - they fail. it is that simple. that's what my example shows. i took a simple problem - inverting a binary tree - and added a few constraints to make sure the solution is unique and not in the dataset, forcing the model to actually SOLVE it itself. and, surprise - it can't!
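for context, here is the BASE problem before any constraints are added - plain binary tree inversion, the textbook version that is all over the training data. this is only a minimal sketch (the constrained variant from the original challenge is not reproduced here), with trees assumed to be nested tuples for brevity:

```python
def invert(tree):
    """Recursively swap the left/right children of every node.

    A tree is represented as a nested tuple (left, right); anything
    that is not a tuple is treated as a leaf. (This representation is
    an assumption for illustration, not the one from the challenge.)
    """
    if isinstance(tree, tuple):
        left, right = tree
        # swap the subtrees, inverting each one recursively
        return (invert(right), invert(left))
    return tree  # leaf: nothing to swap


print(invert(((1, 2), (3, 4))))  # ((4, 3), (2, 1))
```

the point of the challenge is precisely that a memorized snippet like this stops working once the extra constraints make the required solution unique and absent from the dataset.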
and I must stress this isn't about THIS problem, but about all of them. LLMs can't truly solve ANY problem; they can only spit out a memorized solution. if nobody posts the solution online, not even GPT-6, opus-5, or o3 will be able to solve this very prompt. I'm betting on that.
the inability to create new solutions implies LLMs won't invent new science. yes, they will completely change the world as we know it. they'll have a higher impact than computers and the internet. but, unless a new kind of AI emerges, we're still on our own when it comes to curing cancer or making superconductors.