AI Dynamics

Global AI News Aggregator

@yitayml

  • Aletheia: AI Math Research Agent Solves Erdős Problems

    Introducing Aletheia, a math research agent powered by an advanced version of Gemini Deep Think that produces publishable math research (two papers, one completely automatic and another with human-AI collaboration) and solved multiple open Erdős problems. 😀🔥 Paper link below! 👇

    → View original post on X — @yitayml, 2026-02-12 00:58 UTC

  • Google DeepMind Recruits Top Talent for AGI Research in Singapore

    Something very special is happening on this island. We're building AGI, the first frontier lab in Asia/SG, and also gathering the best people around in one team. Welcome @zzlccc to join the AGI enjoyers club! Looking forward to cooking with you!

    Zichen Liu (@zzlccc): Thrilled to share that I've joined @GoogleDeepMind to work on Gemini post-training! I feel incredibly fortunate to be cooking on this sunny island under @YiTayML's leadership, within @quocleix's broader organization. Looking forward to enjoying RL research and pushing the frontiers of Gemini alongside such a brilliant team! — https://nitter.net/zzlccc/status/2020869034045231602#m

    → View original post on X — @yitayml, 2026-02-09 14:47 UTC

  • Using Mostafa Dehghani as an Oracle for Candidate Evaluation

    Also, you know how I tell if a person is a good candidate? I ask my oracle @m__dehghani 😀😎

    → View original post on X — @yitayml, 2026-02-01 05:55 UTC

  • Hiring AI Researchers: Insights on Standing Out and Career Paths

    I agree and disagree with many things in this blog post, but as someone who hired a full team recently and received thousands of applications everywhere (they even bled into my Instagram DMs 😅), I thought I'd share some perspective on this.

    I think there is generally a swarm of people wanting to get into the cutting edge of AI. I sympathize; it's a really competitive time. I always tell people that most candidates could actually perform reasonably on the job, but the issue these days is more about how to stand out from other candidates (i.e., be exceptional). People have different calibrations on what it takes to get into these frontier teams.

    For PhDs, most of the people I hired are people whose work I'd read, got impressed by, and DM-ed first (which is even more impressive considering how many inbounds I get, if you think about it 🙂). I think many of my friends who are some of the best AI researchers today can relate to this: someone saw their work, took a chance on them, and gave them the life they have today. Everyone "established" had someone take a chance on them at some point.

    While we debate the point of a PhD (or academic research in general) in an age where industry is significantly ahead (and opaque), a PhD still gives folks a chance to demonstrate their research taste and engineering skills – I think it's still a reasonably good training ground. It's probably not time-optimal or financially optimal, but it's a reasonable platform for many people. The blog post has a point here: if the point of a PhD (or undergraduate degree) is to get a good job at the frontier, then you should definitely not give up the movie for the ticket. Just go for the job. It's a no-brainer.

    The thing that is way harder for me to give advice on is people who want to transition from "generic SWE" roles into AI modeling. These cases are less clear-cut for me since I've been mostly a researcher. I think super strong engineering/coding skills generally come with some kind of artifact or symptom that is quite publicly visible. Someone else will talk about it, or, as the blog post mentions, some technical feat becomes visible through open source or the internet, i.e., "You can actually just do things." For this, I agree that open source is a good way to do something exceptional that stands out.

    The thing I really disagree with in this blog post is the notion of "senior" and "junior". My hot take here is that we're in a level-agnostic world where seniority hardly matters. People operate in a continuous realm, mostly conditioned on their base talent and natural inclinations. In today's LLM world, I think L3-L8 mean pretty much the same thing if you are an IC. The world is flat now, and I don't think the *same* person is any more likely to make a research breakthrough at L6 than at L4.

    There is a whole section in this blog post bashing the lack of external visibility at closed labs. I think this is the wrong mindset, and it's kind of funny if you ask me. Being at the literal frontier and the closest to AGI is probably the most valuable thing these days, not social media clout (as ironic as it is for me to say this 😅). Sadly, there is just no "winning combination" if you're not at the cutting edge lol. But ok, to each their own. And at this point, not everyone wants to be externally visible, and there is also some blessing in that.

    Another slightly harmful piece of advice in this post that I want to call out: "A small but clear negative signal is a junior researcher being a middle author on too many papers". This is straight-up bad advice. I think there is nothing wrong with "supporting" many projects as long as you make meaningful technical contributions. The need to obsess over first-author contributions and first-author work is what makes researchers uncomfortable in the modern AI paradigm, where the focus is on big-group execution and unified efforts. I once tweeted "be the third author" and everyone lost their minds. Obviously, don't be the person who does nothing and lands in the middle; contribute a ton AND land in the middle. That is OK! We are in an era where we have to work with people (and many people), and being low-ego is a good thing. To this end, the academic mindset can be pretty counterintuitive and harmful, and no, you don't always have to "own and lead" a project. Be a useful human being, contribute, and work with a ton of people. That is what AGI is about.

    On an ending note, the blog post opens with "On the hiring side, it often feels impossible to close, or even get interest from, the candidates you want". I read this and laughed 😂, because I never experienced anything close to this. Hehe. 😇

    Nathan Lambert (@natolambert): My raw thoughts on the job market — both for those hiring and those searching — at the cutting edge of AI. interconnects.ai/p/thoughts-… — https://nitter.net/natolambert/status/2017264336960721285#m

    → View original post on X — @yitayml, 2026-02-01 05:33 UTC

  • Enjoying and Having Fun: The Secret to AGI Success

    Enjoying is the secret sauce to AGI.

    Logan Kilpatrick (@OfficialLoganK): my competitive advantage is that i'm having fun — https://nitter.net/OfficialLoganK/status/2010943408735408334#m

    → View original post on X — @yitayml, 2026-01-13 06:01 UTC

  • Introspective blog post about returning to Google DeepMind

    Also wrote a more introspective blog post for those interested: yitay.net/blog/my-year-back-…

    → View original post on X — @yitayml, 2026-01-01 03:48 UTC