I agree and disagree with many things in this blog post, but as someone who recently hired a full team and received thousands of applications everywhere (they even bled into my Instagram DMs 😅), I thought I'd share some perspective on this.

There is generally a swarm of people wanting to get into cutting-edge AI. I sympathize; it's a really competitive time. I always tell people that most people could actually perform reasonably well on the job, but the issue these days is more about how to stand out from other candidates (i.e., be exceptional). People have different calibrations on what it takes to get into these frontier teams.

For PhDs, most of the people I hired are people whose work I read, was impressed by, and DM-ed first (which is even more impressive considering how many inbounds I get, if you think about it 🙂). I think many of my friends who are some of the best AI researchers today can also relate to this: someone saw their work, took a chance on them, and gave them the life they have today. Everyone "established" had someone take a chance on them at some point.

While we debate the point of a PhD (or academic research in general) in an age where industry is significantly ahead (and opaque), a PhD still gives folks a chance to demonstrate their research taste and engineering skills – I think it's still a reasonably good training ground. It's probably not time-optimal or financially optimal, but it's a reasonable platform for many people. This blog post has a point here: if the point of a PhD (or undergraduate degree) is to get a good job at the frontier, then you should definitely not give up the movie for the ticket. Just go for the job. It's a no-brainer.

The thing that is way harder for me to give advice on is people who want to transition from "generic SWE" roles into AI modeling. These cases are less clear-cut for me since I've been mostly a researcher.
I think super strong engineering/coding skills generally come with some kind of artifact or symptom that is quite publicly visible. Someone else will talk about it, or, as the blog post mentions, some kind of technical feat may become visible through the likes of open source or the internet – i.e., "You can actually just do things." For this, I agree that open source is a good way to do something exceptional that stands out.

The thing I really disagree with in this blog post is the notion of "senior" and "junior". My hot take here is that we're in a level-agnostic world where seniority hardly matters. People operate in a continuous realm mostly conditioned on their base talent and natural inclinations. In today's LLM world, I think L3–L8 mean pretty much the same thing if you are an IC. The world is flat now, and I don't think the *same* person is any more likely to make a research breakthrough at L6 vs. L4.

There is a whole section in this blog post bashing the lack of external visibility at closed labs. I think this is the wrong mindset, and it's kind of funny if you ask me. Being at the literal frontier, closest to AGI, is probably the most valuable thing these days, not social media clout (as ironic as it is for me to say this 😅). Sadly, there is just no "winning combination" if you're not at the cutting edge lol. But ok, to each their own. Also, at this point, not everyone wants to be externally visible, and there is some blessing associated with that.

Another slightly harmful piece of advice in this post that I want to call out: "A small but clear negative signal is a junior researcher being a middle author on too many papers". This is straight-up bad advice. There is nothing wrong with "supporting" many projects as long as you make meaningful technical contributions.
The need to obsess over first-author contributions and first-author work is what makes researchers uncomfortable in the modern AI paradigm, where the focus is on big-group execution and unified efforts. I once tweeted "be the third author" and everyone lost their minds. Obviously, don't be the person who does nothing and lands in the middle; contribute a ton AND land in the middle. That is OK! We are in an era where we have to work with people (and many people), and being low-ego is a good thing. To this end, the academic mindset can be pretty counterintuitive and harmful, and no, you don't always have to "own and lead" a project. Be a useful human being, contribute, and work with a ton of people. That is what AGI is about.

On an ending note, the blog post opens with "On the hiring side, it often feels impossible to close, or even get interest from, the candidates you want". I read this and laughed, 😂, because I never experienced anything close to this. Hehe. 😇

Nathan Lambert (@natolambert) My raw thoughts on the job market — both for those hiring and those searching — at the cutting edge of AI. interconnects.ai/p/thoughts-… — https://nitter.net/natolambert/status/2017264336960721285#m
Hiring AI Researchers: Insights on Standing Out and Career Paths