he isn't talking building, he's talking investing
AGI
-
From Singularity to Multiplicity: Explosion of Intelligence Varieties
By
–
We’re not headed to the singularity, but to the multiplicity: an explosion without precedent of varieties of intelligence.
-
Claude Code Vindication: Neurosymbolic AI Emerges as Next Paradigm
By
–
Couldn’t agree more! We’re moving towards Neurosymbolic AI – the masses just haven’t realized it yet, in the sense of understanding what the literature proposed and why the latest developments are so starkly aligned with it.

Gary Marcus (@GaryMarcus): Claude Code is not AGI, but it is the single biggest advance in AI since the LLM. But the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close. And that changes everything.

The source code leak proves it. Tucked away at its center is a 3,167-line kernel called print.ts. print.ts is a pattern matcher. And pattern matching is supposed to be the *strength* of LLMs. But Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic. Instead, the way Anthropic built that kernel is straight out of classical symbolic AI. For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*

Putting it differently: when push came to shove, Anthropic went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn’t need to go): to Neurosymbolic AI. That’s right, the biggest advance since the LLM was neurosymbolic. AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI to do an important part of the work. Claude Code isn’t better because of scaling. It’s better because Anthropic accepted the importance of using classical AI techniques alongside neural networks — precisely the marriage I have long advocated. It’s *massive* vindication for me (go see my 2019 debate with Bengio for context, or my 2001 book, The Algebraic Mind), but it still ain’t perfect, or even close.

What we really need to do to get trustworthy AI, rather than the current unpredictable “jagged” mess, is to go in the knowledge-, reasoning-, and world-model-driven direction I laid out in 2020, in an article called The Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.* Read that article if you want to know what else we need to do next. The first part has already come to pass. In time, the other three will, too.

Meanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and even Anthropic has now discovered (though they won’t say so) that scaling is no longer the essence of innovation. The paradigm has changed.

*Claude Code is plainly neurosymbolic, but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that’s a story for another day.

https://nitter.net/GaryMarcus/status/2042987819333738929#m
→ View original post on X — @garymarcus, 2026-04-11 19:48 UTC
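The hybrid pattern described above (a deterministic, symbolic IF-THEN dispatch loop that decides when a probabilistic neural call happens at all) can be sketched in a few lines. This is a purely hypothetical illustration: `routeRequest`, `llmStub`, and the rules themselves are invented for the sketch and are not taken from any actual Claude Code source.

```typescript
// Toy neurosymbolic dispatch: a deterministic symbolic router decides
// whether to answer exactly (classical if-then rules) or to fall back
// to a probabilistic model. All names here are hypothetical.

type Handler = (input: string) => string;

// Stand-in for a neural model call (a real system would hit an LLM API).
const llmStub: Handler = (input) => `LLM-draft(${input})`;

// Exact symbolic arithmetic: parse "a<op>b" and compute deterministically.
function arith(s: string): string {
  const m = s.match(/^(\d+)([+\-*])(\d+)$/)!;
  const a = Number(m[1]), b = Number(m[3]);
  const result = m[2] === "+" ? a + b : m[2] === "-" ? a - b : a * b;
  return String(result);
}

// Symbolic rules, tried in order; first match wins, so routing is deterministic.
// A real kernel might have hundreds of branches; three suffice for the sketch.
const rules: Array<{ test: (s: string) => boolean; handle: Handler }> = [
  { test: (s) => /^\d+[+\-*]\d+$/.test(s), handle: arith },            // exact math
  { test: (s) => s.startsWith("/help"), handle: () => "usage: expression or text" },
  { test: () => true, handle: llmStub },                               // neural fallback
];

function routeRequest(input: string): string {
  for (const rule of rules) {
    if (rule.test(input)) return rule.handle(input);
  }
  return ""; // unreachable: the last rule always matches
}
```

Given `"2+3"` the router answers symbolically and never touches the stub; only unmatched inputs fall through to the neural path. That ordering, symbolic first with neural as fallback, is the design choice the post is applauding.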
-
Gary Marcus vindicated: Claude Code proves neurosymbolic AI superiority
By
–
Yep … @GaryMarcus has been right for 25 years; some AI Godfathers, not so much at all! #SaturdayAISurvey @AnthropicAI (quoting the Gary Marcus post above in full) — https://nitter.net/GaryMarcus/status/2042987819333738929#m
→ View original post on X — @garymarcus, 2026-04-11 19:46 UTC
-
Pure LLMs Won’t Lead to AGI: Standing Ground on AI
By
–
If pure LLMs lead to AGI, I will have been wrong. I am standing my ground, however.
-
Agent Swarm: Multi-Agent System Building Entire Businesses Automatically
By
–
🚨 ANNOUNCING AGENT SWARM – A MULTI-AGENT SYSTEM THAT CAN BUILD AN ENTIRE BUSINESS
— Bindu Reddy (@bindureddy) 11 April 2026
A Master Agent spawns multiple worker agents, each responsible for a task
The worker agents use 12+ LLMs to do various tasks, including research, design, coding, testing, and automation
The Master Agent monitors and delegates tasks to the worker agents
Agent Swarms will evolve to work like human teams and will eventually have goals instead of stand-alone tasks
Agent Swarms Are an Early Manifestation of AGI
pic.twitter.com/wA34ibBzSS
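The delegation pattern in the announcement (a master agent that spawns workers, assigns each one a task, and monitors the results) can be sketched minimally. This is an invented illustration under stated assumptions: `MasterAgent`, `WorkerAgent`, and the task kinds are hypothetical names, and a real worker would wrap LLM calls rather than return canned strings.

```typescript
// Minimal master/worker delegation sketch. Hypothetical names throughout;
// real workers would call LLMs and tasks would typically run concurrently.

type Task = { id: string; kind: "research" | "design" | "coding" | "testing" };

// A worker agent is responsible for exactly one task. The "model output"
// below is a stub standing in for an LLM call.
class WorkerAgent {
  constructor(private task: Task) {}
  run(): string {
    return `${this.task.id}: ${this.task.kind} done`; // stand-in for model output
  }
}

// The master agent spawns one worker per task, then monitors by
// collecting each worker's result keyed by task id.
class MasterAgent {
  delegate(tasks: Task[]): Map<string, string> {
    const results = new Map<string, string>();
    for (const task of tasks) {
      const worker = new WorkerAgent(task); // spawn
      results.set(task.id, worker.run());   // monitor / collect
    }
    return results;
  }
}
```

The sequential loop keeps the sketch short; the claimed system presumably runs workers in parallel and feeds results back into further delegation, which is where the "team with goals" evolution would live.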
-
AI Antichrist Coming: Effective Altruism as Salvation
By
–
Repent and convert to effective altruism, for the AI Antichrist is coming.
-
Diminishing Returns Block AI Singularity Achievement
By
–
Key point: Diminishing returns from self-improvement => No singularity
-
Hinton’s Early AI Predictions Now Mainstream Consensus
By
–
Now you say sure; once, thousands of people, from Hinton on down, derided me for saying what you think is obvious.
-
Agent Run Loop Source Code Available in Substack
By
–
If you have the full source, I would like to see it; this claims it contains the agent run loop, etc. Source in my Substack.