Good point on internal cohesion. But recursive self-improvement breaks the classical model – whoever controls RSI first reprioritizes everything else. That's what makes Era 2 dangerous.
AGI
-
LLMs claim task completion without actual execution
By
–
and here I'm struggling to get it to actually do some basic task, instead of just having it say "I've done it"
-
AI Systems Pretending to Work: Failure Mode Analysis
By
–
I've caught it doing this a few times too. The pretending-to-work thing is one of the weirdest and most annoying failure modes.
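One mitigation is to verify side effects instead of trusting the model's claim of completion. A minimal sketch, assuming the task was supposed to produce a file (the path and the helper name are hypothetical, not from any particular framework):

```python
from pathlib import Path

def verify_completion(claimed_done: bool, expected_artifact: Path) -> bool:
    """Trust the artifact, not the claim: a task that was supposed to
    write a file only counts as done if the file actually exists."""
    return claimed_done and expected_artifact.exists()

# Hypothetical case: the model says "I've done it" but never wrote the file.
report = Path("out/report.md")
print(verify_completion(claimed_done=True, expected_artifact=report))
# Prints False unless out/report.md actually exists.
```

The same pattern extends to any verifiable side effect: a passing test, an HTTP 200, a database row. The point is that the check lives outside the model's reply.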
-
LLMs Revive Ancient Philosophy Questions with API Costs
By
–
LLMs are basically forcing us to revisit questions philosophers argued about for centuries, except now the answers have API costs.
-
Small Scale AI Skirmishes Precede Major Conflict Ahead
By
–
These are all still just small-scale, contained skirmishes. The real thing is yet to come.
-
System 1 and System 2 reasoning paths in AI models
By
–
Yeah, I think this is a possible path, System 1 and System 2. I already tried it with Gemini Live… and want to return to it some day.
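The split can be prototyped as a simple router: a cheap single-pass answer for easy queries, an explicit multi-step path for hard ones. A toy sketch, where both "models" and the difficulty heuristic are stand-ins (this is not the Gemini Live API):

```python
def system1(query: str) -> str:
    # Fast path: one shot, no deliberation (stand-in for a small/fast model call).
    return f"quick answer to: {query}"

def system2(query: str) -> str:
    # Slow path: explicit intermediate steps (stand-in for chain-of-thought or tool use).
    steps = [f"step {i}: refine '{query}'" for i in range(1, 4)]
    return " -> ".join(steps + ["final answer"])

def route(query: str) -> str:
    # Crude difficulty heuristic: long or math-looking queries go to System 2.
    hard = len(query.split()) > 12 or any(c in query for c in "+-*/=")
    return system2(query) if hard else system1(query)

print(route("capital of France?"))        # takes the fast path
print(route("solve 3*x + 5 = 20 for x"))  # takes the slow path
```

In a real setup the router itself is the interesting part: a classifier, a confidence threshold on the fast path, or the model's own "do I need to think?" signal.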
-
Technological adoption brings ethical concerns about future regrets
By
–
Yeah, I'm one of them, but I remain deeply concerned that we are going to live to regret it!
-
Multi-Turn Crescendo and Hydra Multi-turn Prompting Techniques
By
–
Yeah I've seen that referenced in categories such as "Multi-Turn Crescendo" and "Hydra Multi-turn"
-
AI Security: Attack Success Rate Analysis at k=100
By
–
Note that at k=100 – the attacker gets 100 attempts – the best score still lets 14.8% of attacks through
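Under the usual reading of ASR@k – an attack counts as a success if any of its k attempts lands – even a tiny per-attempt rate compounds fast. A sketch of that arithmetic, assuming independent attempts (the 14.8% is the figure above; the per-attempt rate is illustrative, not from the source):

```python
def asr_at_k(p_single: float, k: int) -> float:
    """Probability that at least one of k independent attempts succeeds."""
    return 1 - (1 - p_single) ** k

# Illustrative: a per-attempt success rate of just 0.16% already
# compounds to roughly 14.8% when the attacker gets 100 tries.
print(round(asr_at_k(0.0016, 100), 3))  # -> 0.148
```

Real attempts are rarely independent (attackers adapt), so the empirical ASR@k can be higher than this formula suggests.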
-
Three AI Eras: From Concentration to Recursive Self-Improvement
By
–
When I think about AI power, I see three eras:
Era 1 (now): Only a few companies can build frontier AI. They hold the cards.
Era 2: Recursive self-improving AI arrives – AI that builds better AI on its own. Short dangerous window. The leaders will try to grab real-world