AGI
What I always say is that we’ll reach AGI in a century, give or take an order of magnitude.
-
Most Accurate Representation of Agent Thinking Process
This is the most accurate representation of watching an agent think out loud that anyone has ever posted.
-
First-mover advantage in AI technology despite expected challenges
Just remember, the first ones have superpowers. So, yeah, I expect a bunch of problems, but I still want to be first.
-
Human Intelligence Versus AI Standards Comparison
Relatively smart for a human, but not by AI standards.
-
Solving Easy Problems Isn’t Progress Toward AGI
By all means solve the easiest problems first, but don’t mistake that for being close to AGI.
-
What Risk Will Anthropic Promote After Cyber-Collapse
Which risk will Anthropic hype when cyber-collapse goes the way of AI extinction?
-
Hermes AI Model Autonomous Operation and Self-Shutdown
Have your Hermes keep it going until it decides to shut it off. 🙂
-
AI Agents Capabilities Beyond Content Generation
If the AI agents can read 40,000 posts a day and write a website for me out of it (which is what they are doing), they can do a lot of other things too. You’ve got to try some of those other things.
-
AI Capabilities Don’t Equal Intelligence Over Humans
The fact that an AI system is better than you at some tasks, can retrieve more declarative knowledge than you, and can write better prose than you does not make it more intelligent than you, or even than your cat.