I was horrified at the time. I couldn't say anything, because the much tinier MIRI could have been easily targeted and squashed if it'd openly opposed OpenAI and its founders/funders at the time, but I was horrified. Unlikely to be confabulating the memory because I
-
Truthseeking in AGI Development: Evaluating Expected Outcome Shifts
Or do you mean maximum truthseeking in the people thinking about and building AGI? If so, I'd ask you whether you very carefully and neutrally evaluated exactly how much expected outcome shift could be produced that way – which is what truthseeking looks like, in humans.
-
AI Safety vs AGI Existential Risk: Critical Distinction Needed
I hope like hell that you are distinguishing "AI safety" from "AGI notkilleveryoneism", because what you're describing may be one component of a solution to the Prude Corporatespeak syndrome in chatbots, but not to extremely smart AGIs killing everyone.
-
AI Takeoff Timelines and Progress from GPT-3 to GPT-4
I've updated toward "stuff generally takes longer and sticks around longer in interestingly weird territory" since 2020. I have yet to hear any good definition of a "years-long takeoff", unless you mean stuff like "it takes years to get from GPT-3 to GPT-4" and we're inside the
-
AI Intelligence Gap: Why Humanity Extinction Requires Superhuman Capabilities
The basic obstacle is that wiping out humanity and establishing a new power grid isn't actually easy for normies using only current technologies and technologies that normies intuitively understand to be easily possible. Without the part where the AIs are smarter – doing at
-
Fake It Until You Make It: The AGI Development Philosophy
Fake it (AGI) until you make it (AGI)
-
AGI Survival Policies vs. Authoritarian Opportunism: How to Tell the Difference
The only reason this is a useful thing to care about is if the opportunistic authoritarians are pushing policies that wouldn't actually help humanity survive AGI. So that's a visible difference right there; they'll push different policies. The anti-doom faction will say, for
-
What Questions About AGI Existential Risk Do You Want Answered?
If I wrote an "AGI ruin FAQ", what Qs would you, yourself, personally, want answers for? Not what you think "should" be in the FAQ, but what you yourself genuinely want to know; or Qs that you think have no good answer, but which would genuinely change your view if answered.
-
Superhuman AI impact on human decision-making processes
Superhuman AI is changing the game, but can it also change the way we make decisions? Find out how the rise of AI could impact human decision-making on a whole new level here: https://bit.ly/42l7fH9 @PNASNews @__anjali__raja @_DigitalIndia @GoI_MeitY @nasscom @NeGD_GoI
-
Differences Between Narrow AI and General AI
Artificial Narrow Intelligence (ANI) is application-specific AI programmed to perform singular tasks. Artificial General Intelligence (AGI) is the ability of machines to think, comprehend, learn, and apply their intelligence to solve complex problems as humans do. By @ingliguori #AI