Symptoms of AGI derangement syndrome:
– Mistaking incremental progress for AI breakthroughs
– Believing AI will instantly solve all problems
– Fearing a jobs apocalypse is imminent
– Obsessing about humanity's extinction
What else?
-
AGI Derangement Syndrome: Common Cognitive Biases About AI
By
–
-
Does Uber Intelligence Come Before Superintelligence?
By
–
Does uberintelligence come after or before superintelligence?
-
Formal Game Theory vs Psychological Common Knowledge in AI
By
–
That's a bit too harsh. Here's ChatGPT's summary: Probably not a misunderstanding in the strict sense. It looks more like Pinker is using a psychological, operationalized version of common knowledge rather than insisting on the fully formal game-theoretic one. In the formal
-
SkillClaw: Collective Skill Learning Through Agentic Evolution
By
–
"SkillClaw: Let Skills Evolve Collectively with Agentic Evolver" As most AI agents still relearn the same targets from scratch, SkillClaw turns it into shared learning, where it collects agent trajectories across users, groups them by skill, and uses an agentic evolver to spot
-
Important Procurement Deadlines Complete Guide
By
–
All of the important deadlines for Procurement #0. https://montrealai.github.io/discoveryprime-first-procurement-complete-guide-v10.html … #AGIALPHA #AGIJobs #AGIFirst
-
Hassabis vs LeCun: Major AI Researchers Clash on LLM Future
By
–
The CEO of Google DeepMind just went on record saying he disagrees with one of the most respected AI researchers in the world.
— Milk Road AI (@MilkRoadAI) April 12, 2026
Demis Hassabis, the man behind AlphaFold, AlphaGo, and Google's entire AI operation, publicly pushed back against Yann LeCun's claim that large language models are a dead end for artificial intelligence.

LeCun, who left Meta earlier this year to start his own AI lab, has been saying for years that LLMs cannot reason, cannot plan, and will never get us to human-level intelligence. Hassabis disagrees, and he said so directly. His position is that scaling laws are still working, foundation models are still getting more capable, and whatever AGI ends up looking like, LLMs will be a central part of it, not something that gets replaced. He does say there is roughly a 50/50 chance that one or two additional breakthroughs will be needed beyond scaling alone: things like better memory, long-term planning, and world models.

But the core disagreement with LeCun is clear: Hassabis believes the current architecture is sound and the current path leads somewhere real. Two Nobel-recognized researchers, two founding figures of modern AI, now publicly on opposite sides of the most important technical question in the industry.
→ View original post on X — @ceobillionaire, 2026-04-12 09:02 UTC
-
ASI Achievement Reached Internally
By
–
ASI has been achieved internally tbh
→ View original post on X — @ceobillionaire, 2026-04-12 04:12 UTC
-
EverMind Launches Agent Capability Benchmark Test
By
–
A small spoiler:
@EverMind will soon release a benchmark that tests agent capabilities. Everyone's OpenClaw and Hermes agents are welcome to try it out. -
Recursive Self-Improvement Coming to the Claw Soon
By
–
Recursive self-improvement is coming to the claw soon. 👁️🦞👁️💅
→ View original post on X — @ceobillionaire, 2026-04-12 00:28 UTC