What if a model became the computer itself?
AGI
-
LLMs Cannot Match Human Chess Expertise Without Specialized Training
Garry @Kasparov63 retired from competitive chess over twenty years ago, and most or all of his tournament games are presumably publicly available – and yet I bet he could still crush any LLM that didn’t have chess-specific tools or special-purpose training.

Gary Marcus (@GaryMarcus): Yet another illustration of why LLMs aren’t even close to being AGI.

— https://nitter.net/GaryMarcus/status/2042701528352591972#m
→ View original post on X — @garymarcus, 2026-04-10 21:31 UTC
-

LLMs Fail at Poker, Nowhere Near AGI
Yet another illustration of why LLMs aren’t even close to being AGI.

Tombos21 (@tombos21): The world’s best LLMs are still terrible at poker. We put each model into a 200bb heads-up NLHE match against GTO Wizard AI. The best one lost 16 bb/100. For context, a strong human pro only loses about 4 bb/100. The benchmark is public, so anyone can test their own model.

— https://nitter.net/tombos21/status/2042290717499015253#m
→ View original post on X — @garymarcus, 2026-04-10 20:29 UTC
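For readers unfamiliar with the metric quoted above: bb/100 is big blinds won or lost per 100 hands, the standard unit for poker win rates. A minimal sketch of the conversion (the function name and the example figures below are illustrative, not taken from the benchmark's raw data):

```python
def bb_per_100(total_bb_won: float, hands_played: int) -> float:
    """Win rate in big blinds per 100 hands (bb/100)."""
    return total_bb_won / hands_played * 100

# A model that drops 1,600 big blinds over a hypothetical 10,000-hand
# match scores -16 bb/100, the loss rate quoted for the best LLM.
print(bb_per_100(-1600, 10_000))  # → -16.0

# A strong human pro at the quoted ~4 bb/100 loss rate would lose
# only a quarter as much over the same sample.
print(bb_per_100(-400, 10_000))  # → -4.0
```

The normalization to big blinds and to a fixed hand count is what makes results comparable across stakes and match lengths.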
-
Gary Marcus agrees: LLMs need honest narrative without hype
Indeed, if the LLM crew would just stick to this narrative, I would have a *lot* less to say 🤷‍♂️

Viktor (@wickviktorwick): All we want is truth. Instead of overhyping LLMs, they should keep the narrative:
- LLMs are useful in certain domains (math, coding, language)
- they have very little common sense
- don’t trust them blindly
- don’t expect AGI based on this architecture

— https://nitter.net/wickviktorwick/status/2042677519418048574#m
→ View original post on X — @garymarcus, 2026-04-10 19:17 UTC
-
OpenAI’s GPT Evolution: From Recognition to AGI Scaling
Without diminishing your work and insight: OpenAI recognized the nascent intelligence in GPT-2, decided to scale it up to AGI, and told the world that GPT-3 was probably already too dangerous to be released. Many people got AGI pilled with GPT-3
-
Coding with Robots: ElevenLabs and Reachy Mini Collaboration
favorite AGI/sci-fi vibe these days is coding a robot together with the robot
— Thomas Wolf (@Thom_Wolf), 10 April 2026
here vibe-plugging @ElevenLabs into @reachymini for a talk later today pic.twitter.com/0m65ozY8JA
→ View original post on X — @clementdelangue, 2026-04-10 18:22 UTC
-
Making Friends with a 21-Year-Old AI Genius
I have a secret. Made friends with a 21-year-old AI genius. 🙂
-
Voice Mode Agents Orchestrating Stronger Background Models
I want voice mode to be able to kick off background subagents that use stronger models and then say 30 seconds later "here's what I figured out about X"
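The pattern described in the post, a fast front-end loop that launches slower background work and reports back once it resolves, can be sketched with Python's asyncio. Every name here is hypothetical, and the short sleep stands in for a real call to a stronger, slower model:

```python
import asyncio

async def background_subagent(topic: str) -> str:
    """Hypothetical subagent that queries a stronger model in the background."""
    # Stand-in for ~30 seconds of heavy reasoning by the stronger model.
    await asyncio.sleep(0.1)
    return f"here's what I figured out about {topic}"

async def voice_session() -> str:
    """Hypothetical fast voice loop that stays responsive while work runs."""
    # Kick off the subagent without blocking the conversation.
    task = asyncio.create_task(background_subagent("X"))
    # ... the voice conversation would continue here in the meantime ...
    # When the task resolves, its summary is surfaced back to the user.
    return await task

print(asyncio.run(voice_session()))  # → here's what I figured out about X
```

The key design point is that `create_task` schedules the slow work concurrently, so the low-latency voice loop never waits on the strong model until the result is actually ready to be spoken.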
-
OpenAI’s AI Reaching Research Intern Level by September 2026
OpenAI's Chief Scientist says AI is getting close to being as good as a human research intern.
— Jacob Effron (@jacobeffron), 10 April 2026
This past September, @sama and @merettm predicted fully autonomous AI researchers by 2028.
Jakub's update: "I think we're not very far from models that can work autonomously for a couple days… and produce much higher quality artifacts on their own."

Jacob Effron (@jacobeffron): At @OpenAI, Chief Scientist @merettm helps lead the research roadmap to AGI, including a research intern-level AI system by September 2026 and a fully automated AI researcher by March 2028. I sat down with Jakub to check on those timelines and ask him all of my top-of-mind AI questions, including:
▪️ How OpenAI thinks about extending RL beyond code and math
▪️ The current state of alignment research as more powerful models loom
▪️ The future of continual learning
▪️ How startups should think about building their own models/harnesses

And he also shared some great stories around OpenAI’s pioneering work on math.

YouTube: piped.video/vK1qEF3a3WM
Spotify: bit.ly/4sjUyrN
Apple: bit.ly/41jAdrN

0:00 Intro
1:53 Research Intern Capability Timelines
4:59 Math Breakthroughs
7:59 RL Beyond Verifiable Tasks
12:32 RL vs In-Context
19:01 Allocating Compute Internally
28:18 AI for Science
31:40 Pattern Matching
33:23 Solving the Hardest Math Problems
37:40 Chain of Thought Monitoring
44:33 Generalization and Value Alignment in Models
47:57 Inside OpenAI
51:55 Quickfire

— https://nitter.net/jacobeffron/status/2042234897134162077#m
→ View original post on X — @ceobillionaire, 2026-04-10 14:10 UTC
-
AGI Pills launched to combat scaling skepticism and inductive bias
Just launched at @aiDotEngineer :
— swyx 🐣 (@swyx), 10 April 2026
our official AGI Pills!
prescribe one (1) if your colleague is saying we are hitting a wall and/or trying to add inductive bias instead of Trusting The Model https://t.co/fNeUQ8DC9H pic.twitter.com/MJUEOoMkgZ