GPT-4 claims to effortlessly perform 20 human jobs – from writing novels to diagnosing diseases. Will AI soon render many professions obsolete? Exciting or daunting? You decide. Read more: https://bit.ly/3Z8TMzf @OpenAI @sama @gdb @milinmiluz @_DigitalIndia @GoI_MeitY
AGI
-
GPT-4 Could Replace 20 Human Jobs: Future Workplace Impact
By
–
-
Global AI and Economic Shifts Overshadow Domestic French Politics
By
–
Xi Jinping on a state visit to Russia, a banking crisis, inflation and rising interest rates, an AI tsunami… The world is changing at a dizzying pace, and we are looking the other way. What a tragedy to see France tearing itself apart over a subject as trivial as retirement at 64!
-
Follow Your Passion: Why Genuine Interest Matters in AI
By
–
last piece of advice: – do what you love. don’t work on AI because it’s hyped and it’s all everyone can talk about. that will all go away. work on AI because it electrifies you and because you cannot think about anything else.
-
Eliezer’s Horror at OpenAI’s Early Actions and MIRI’s Strategic Silence
By
–
I was horrified at the time. Couldn't say anything because the much tinier MIRI could have been easily targeted and squashed if it'd openly opposed OpenAI and its founders/funders at the time, but I was horrified at the time. Unlikely to be confabulating the memory because I
-
Truthseeking in AGI Development: Evaluating Expected Outcome Shifts
By
–
Or do you mean maximum truthseeking in the people thinking about and building AGI? If so, I'd ask you whether you very carefully and neutrally evaluated exactly how much expected outcome shift could be produced that way – which is what truthseeking looks like, in humans.
-
AI Safety vs AGI Existential Risk: Critical Distinction Needed
By
–
I hope like hell that you are distinguishing "AI safety" from "AGI notkilleveryoneism", because what you're describing may be one component of a solution to the Prude Corporatespeak syndrome in chatbots, but not to extremely smart AGIs killing everyone.
-
AI Takeoff Timelines and Progress from GPT-3 to GPT-4
By
–
I've updated on "stuff takes generally longer and sticks around longer in interestingly weird territory" since 2020. I have yet to hear any good definition of a "years-long takeoff", unless you mean stuff like "it takes years to get from GPT-3 to GPT-4" and we're inside the
-
AI Intelligence Gap: Why Humanity Extinction Requires Superhuman Capabilities
By
–
The basic obstacle is that wiping out humanity and establishing a new power grid isn't actually easy for normies using only current technologies and technologies that normies intuitively understand to be easily possible. Without the part where the AIs are smarter – doing at
-
Fake It Until You Make It: The AGI Development Philosophy
By
–
Fake it (AGI) until you make it (AGI)
-
AGI survival policies and authoritarian opportunism differences
By
–
The only reason this is a useful thing to care about is if the opportunistic authoritarians are pushing policies that wouldn't actually help humanity survive AGI. So that's a visible difference right there; they'll push different policies. The anti-doom faction will say, for