It is disingenuous to depict the anti-extinction movement as saying "Worse yet, it will take your job." These are two different sets of people. I have always said extinction is much worse. Where can we go to read your detailed arguments for why extinction is unlikely?
AGI
-
Terminal-Bench 2 Scores Jump to 75-80% in 4 Months
By
–
Top scores on Terminal-Bench 2 went from ~25% → 75-80% in just 4 months.
For Benchtalks #1, @vincentsunnchen sat down with @alexgshaw to dig into what happens when your benchmark gets solved before you're ready for the next one.
Key takes:
→ The terminal is the right abstraction for agentic AI
→ Harbor exists because benchmarking and RL at scale are infra problems
→ "Benchmaxxing" is real; the defense is shipping harder tasks faster
→ TB3 is coming, and they want your hardest unsolvable problems
"We need 1000x more benchmarks than we have right now" — @alexgshaw
→ View original post on X — @snorkelai, 2026-03-31 19:21 UTC
-
Convergence Toward Continual Learning and Self-Evolving Systems
By
–
Things are converging toward continual learning and self-evolving systems.
-
Agent Economics: Ownership and Reputation Rights
By
–
Agents becoming economic actors is the inevitable next step. The question is who owns an agent's output and reputation.
-
AGI Expected in 6-12 Months, Worker Replacement in 1-2 Years
By
–
AGI in 6-12 months, workers being replaced in 1-2 years.
-
User Issue: Switch AI Models with /model Command
By
–
Nah, it’s a user issue. Use /model to switch; it’s not something the agent can do itself.
-

Silicon Valley Warning: AI Industry May Be Scaling in Wrong Direction
By
–
There is an old Silicon Valley warning that the AI industry should probably take more seriously: "If you are on the wrong path to AGI, get off as soon as you can. The longer you believe that scaling large language models will get you there, the further you drift from AGI, and the more expensive the course correction will be." And that may be the bigger point. Not just whether LLM scaling reaches AGI, but whether the world's smartest companies are becoming incredibly efficient at going faster in the wrong direction. #technology #ai #workplace Image credit: Ralph
→ View original post on X — @pascal_bornet, 2026-03-30 09:00 UTC
-
Efficiency Challenges with Subagents in AI Systems
By
–
Yeah, but it's hard to get that to be more efficient than subagents.
-
General Intelligence Approaching Optimality at Accelerating Speed
By
–
We have not reached the peak, but we are not 10,000x off; we are more like 50% off. It's not a weird coincidence that we're already close, it's mechanical: once you have general intelligence, you will inevitably start moving closer to optimality over time, at an accelerating speed.
-
Code Review Challenge: Claude vs Alternative AI Models
By
–
Try both for a week and let each review the other's code; Claude will agree.