OpenAI's Chief Scientist, @merettm, on the continual learning wave: frontier labs are already building this into the core of the technology. The entire premise of scaling was to create systems that learn in context. Jakub says continual learning is not some separate missing piece, but "exactly what we're working toward."

— Jacob Effron (@jacobeffron), April 9, 2026

At @OpenAI, Chief Scientist @merettm helps lead the research roadmap to AGI, including a research intern-level AI system by September 2026 and a fully automated AI researcher by March 2028. I sat down with Jakub to check on those timelines and ask him all of my top-of-mind AI questions, including:

▪️ How OpenAI thinks about extending RL beyond code and math
▪️ The current state of alignment research as more powerful models loom
▪️ The future of continual learning
▪️ How startups should think about building their own models/harnesses

He also shared some great stories about OpenAI's pioneering work on math.

YouTube: piped.video/vK1qEF3a3WM
Spotify: bit.ly/4sjUyrN
Apple: bit.ly/41jAdrN

0:00 Intro
1:53 Research Intern Capability Timelines
4:59 Math Breakthroughs
7:59 RL Beyond Verifiable Tasks
12:32 RL vs In-Context
19:01 Allocating Compute Internally
28:18 AI for Science
31:40 Pattern Matching
33:23 Solving the Hardest Math Problems
37:40 Chain of Thought Monitoring
44:33 Generalization and Value Alignment in Models
47:57 Inside OpenAI
51:55 Quickfire

— https://nitter.net/jacobeffron/status/2042234897134162077#m
View original post on X — @ceobillionaire, 2026-04-09 18:05 UTC