Gary Marcus strikes again! After the Claude Code source code leak, he cuts straight to the core truth: ✅ Claude Code is the biggest advance since the LLM era
✅ But it's not pure LLM, and it's not pure deep learning
✅ Core file print.ts has 3,167 lines, packed with if-then branches + deterministic symbolic logic
✅ Anthropic relies on classical symbolic AI at the critical moment to make the Agent truly reliable

This move directly validates the Neurosymbolic AI (neural-symbolic hybrid) approach that Marcus has been advocating for over 20 years! Scaling is no longer the only answer; the hybrid approach is the future. The full long-form article is worth reading carefully 👇

— Gary Marcus (@GaryMarcus)

Claude Code is not AGI, but it is the single biggest advance in AI since the LLM. But the thing is, Claude Code is NOT a pure LLM. And it's not pure deep learning. Not even close. And that changes everything.

The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts. print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can't trust a pure LLM. They are too probabilistic. And too erratic.

Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.

Put differently, Anthropic, when push came to shove, went exactly where I long said the field needed to go (and where Geoffrey Hinton said we didn't need to go): to Neurosymbolic AI. That's right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work.

Claude Code isn't better because of scaling.
→ View original post on X — @garymarcus, 2026-04-11 22:22 UTC