Compute powers every layer of AI, and the investments we’ve made mean we can run more promising research experiments, train more capable models, and support broader access. @merettm talks about our progress building an automated AI researcher and what’s ahead as AI can take on harder and harder problems.

— OpenAI Newsroom (@OpenAINewsroom), 9 April 2026

Jacob Effron (@jacobeffron): At @OpenAI, Chief Scientist @merettm helps lead the research roadmap to AGI, including a research intern-level AI system by September 2026 and a fully automated AI researcher by March 2028. I sat down with Jakub to check on those timelines and ask him all of my top-of-mind AI questions, including:

▪️ How OpenAI thinks about extending RL beyond code and math
▪️ The current state of alignment research as more powerful models loom
▪️ The future of continual learning
▪️ How startups should think about building their own models/harnesses

He also shared some great stories about OpenAI’s pioneering work on math.

YouTube: piped.video/vK1qEF3a3WM
Spotify: bit.ly/4sjUyrN
Apple: bit.ly/41jAdrN

0:00 Intro
1:53 Research Intern Capability Timelines
4:59 Math Breakthroughs
7:59 RL Beyond Verifiable Tasks
12:32 RL vs In-Context
19:01 Allocating Compute Internally
28:18 AI for Science
31:40 Pattern Matching
33:23 Solving the Hardest Math Problems
37:40 Chain of Thought Monitoring
44:33 Generalization and Value Alignment in Models
47:57 Inside OpenAI
51:55 Quickfire

— https://nitter.net/jacobeffron/status/2042234897134162077#m
— @ceobillionaire, 2026-04-09 23:39 UTC