Have your Hermes keep it going until it decides to shut it off. 🙂
AGI
-
AI Agents Capabilities Beyond Content Generation
By
–
If the AI agents can read 40,000 posts a day and write a website for me out of it (which is what they are doing), they can do a lot of other things too. You have got to try some of those other things.
-
AI Capabilities Don’t Equal Intelligence Over Humans
By
–
The fact that an AI system is better than you at some tasks, can retrieve more declarative knowledge than you, and can write better prose than you does not make it more intelligent than you, or even than your cat.
-
Building Trust in New AI Technologies
By
–
Testing it now. Nice! But here is the rub with these things. I have learned to trust another. Hard to trust a new one. But yeah would love to.
-
AGI Pill Bottles Distributed at AI.do Engineer Event
By
–
btw some lore for those of you at @aidotengineer – this tweet inspired the AGI Pill bottles we handed out today (cc @Ronanchamberss). I didn't see anyone tweet pictures online, and they got snapped up instantly, so if you have a bottle please share photos!
-

Yale Economist: AGI Won’t Automate Most Jobs
By
–
A Yale economist says AGI won't automate most jobs—because they're not worth the trouble, by Nick Lichtenberg @FortuneMagazine. Learn more: bit.ly/4csHs6U #ArtificialIntelligence #MachineLearning #ML
→ View original post on X — @ronald_vanloon, 2026-04-10 02:58 UTC
-
Memory Consolidation and Consciousness Digitization: Scientific Skepticism
By
–
We barely understand how memory consolidation works, let alone digitizing consciousness. The confidence on this is… something haha.
-
OpenAI’s Path to Automated AI Researcher by 2028
By
–
Compute powers every layer of AI, and the investments we’ve made mean we can run more promising research experiments, train more capable models, and support broader access. @merettm talks about our progress building an automated AI researcher and what’s ahead as AI can take on harder and harder problems.
— OpenAI Newsroom (@OpenAINewsroom) 9 April 2026
Jacob Effron (@jacobeffron): At @OpenAI, Chief Scientist @merettm helps lead the research roadmap to AGI, including a research intern-level AI system by September 2026 and a fully automated AI researcher by March 2028. I sat down with Jakub to check on those timelines and ask him all of my top-of-mind AI questions, including:
▪️ How OpenAI thinks about extending RL beyond code and math
▪️ The current state of alignment research as more powerful models loom
▪️ The future of continual learning
▪️ How startups should think about building their own models/harnesses
He also shared some great stories around OpenAI’s pioneering work on math.
YouTube: piped.video/vK1qEF3a3WM
Spotify: bit.ly/4sjUyrN
Apple: bit.ly/41jAdrN
0:00 Intro
1:53 Research Intern Capability Timelines
4:59 Math Breakthroughs
7:59 RL Beyond Verifiable Tasks
12:32 RL vs In-Context
19:01 Allocating Compute Internally
28:18 AI for Science
31:40 Pattern Matching
33:23 Solving the Hardest Math Problems
37:40 Chain of Thought Monitoring
44:33 Generalization and Value Alignment in Models
47:57 Inside OpenAI
51:55 Quickfire
— https://nitter.net/jacobeffron/status/2042234897134162077#m
→ View original post on X — @ceobillionaire, 2026-04-09 23:39 UTC
-
Multi-agent San Francisco simulation with ANIMA AI reaches 1M interactions
By
–
Multi-agent, real-time simulation of San Francisco.
→ running for 5 months (sim time)
→ autonomous movement + interactions
→ from 1 to 349 agents
Powered by ANIMA (Asynchronous Neural Intelligence for Multi-agent Autonomy), the new architecture behind LifeSim (1 million real player interactions).
LifeSim just crossed 1M+ real player interactions. That's 1M+ datapoints of raw, unbiased human feedback. This is just a preview. More soon. Real-time world simulation rendered in any style.
— Cris Lenta (@crislenta) 9 April 2026
→ View original post on X — @scobleizer, 2026-04-09 19:13 UTC
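ANIMA's internals are not public, so as a purely illustrative sketch of what "asynchronous multi-agent autonomy" can mean in practice, here is a minimal Python example: agents move independently on a shared grid and record an interaction whenever two of them meet, with each tick executed concurrently rather than in a fixed turn order. All names here (`Agent`, `run`, the grid size) are hypothetical and are not taken from ANIMA or LifeSim.

```python
import asyncio
import random

class Agent:
    """A minimal autonomous agent: moves on a 2D torus grid and records an
    interaction whenever it lands on the same cell as another agent."""
    def __init__(self, name, size=10):
        self.name = name
        self.size = size
        self.pos = (random.randrange(size), random.randrange(size))
        self.interactions = 0

    async def step(self, world):
        # Move one cell in a random direction.
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = self.pos
        self.pos = ((x + dx) % self.size, (y + dy) % self.size)
        # Interact with any co-located agent.
        for other in world:
            if other is not self and other.pos == self.pos:
                self.interactions += 1
        await asyncio.sleep(0)  # yield control so agents interleave

async def run(world, ticks):
    for _ in range(ticks):
        # All agents step concurrently each tick, not in a fixed order.
        await asyncio.gather(*(a.step(world) for a in world))

world = [Agent(f"agent-{i}") for i in range(50)]
asyncio.run(run(world, 100))
total = sum(a.interactions for a in world)
```

A real system would replace the random walk with learned policies and stream interaction events out as feedback data; the point here is only the asynchronous step loop.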
-
OpenAI Chief Scientist discusses continual learning and AI research roadmap
By
–
OpenAI's Chief Scientist, @merettm, on the continual learning wave: frontier labs are already building this into the core of the technology. The entire premise of scaling was to create systems that learn in context. Jakub says continual learning is not some separate missing piece, but “exactly what we’re working toward.”
— Jacob Effron (@jacobeffron) 9 April 2026
— https://nitter.net/jacobeffron/status/2042234897134162077#m
→ View original post on X — @ceobillionaire, 2026-04-09 18:05 UTC