A year ago this would've been a PhD thesis.
AGI
-
AGI Pills launched to combat scaling skepticism and inductive bias
By
–
Just launched at @aiDotEngineer: our official AGI Pills!
prescribe one (1) if your colleague is saying we are hitting a wall and/or trying to add inductive bias instead of Trusting The Model https://t.co/fNeUQ8DC9H
— swyx (@swyx) April 10, 2026
-
AGI Timeline: A Century Give or Take an Order of Magnitude
By
–
What I always say is that we'll reach AGI in a century give or take an order of magnitude.
-
Most Accurate Representation of Agent Thinking Process
By
–
This is the most accurate representation of watching an agent think out loud that anyone has ever posted.
-
Humans Are More Complex Than Fully Controlled Bits
By
–
Humans are complicated. Much more than bits we fully control.
-
First-mover advantage in AI technology despite expected challenges
By
–
Just remember, the first ones have superpowers. So, yeah, I expect a bunch of problems, but I still want to be first.
-
Human Intelligence Versus AI Standards Comparison
By
–
Relatively smart for a human, but not by AI standards
-
Solving Easy Problems Isn’t Progress Toward AGI
By
–
By all means solve the easiest problems first, but don't mistake that for being close to AGI.
-
What Risk Will Anthropic Promote After Cyber-Collapse
By
–
Which risk will Anthropic hype when cyber-collapse goes the way of AI extinction?