All of those scenarios kill you, because none of these groups has the ability to align any superintelligence to do anything, nor is any on track to acquire it; and fighting over *who* gets the pride of choicelessly destroying the world seems to me sad, and also a very modern evil.
AGI
-
HAL 9000: AI Alignment and Safety in 2001’s Visionary Film
A few notes on 2001's AI:
-HAL is a wink to IBM: shift each letter forward by one (H+1, A+1, L+1 = IBM). IBM's Deep Blue was "predicted" in the movie when HAL beat Frank at chess :)
-Conversational AI is at the center of current research.
-As AI advances, topics such as alignment and safety become critical.
-
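The letter shift claimed in the note above is easy to verify; a minimal sketch in Python:

```python
# Shift each letter of "HAL" forward by one position in the alphabet.
shifted = "".join(chr(ord(c) + 1) for c in "HAL")
print(shifted)  # IBM
```

H becomes I, A becomes B, and L becomes M, so the shift does indeed spell IBM (though Clarke and Kubrick said the name was a coincidence).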
2001: A Space Odyssey – AI and Human Intelligence Exploration
2001: A Space Odyssey appreciation. I just rewatched 2001 for the n-th time (n>20). This movie was hugely ahead of its time, so many cool topics:
-Origin of (human) intelligence
-Pushing frontiers of space exploration
-And, of course, AI
-
Benefits of Human-Like AI vs Other Powerful AI Systems
2/ The benefits of human-like AI (HLAI) include soaring productivity, increased leisure, and a better understanding of our own minds. But not all AI is human-like – many of the most powerful systems are very different from us.
-
Alan Turing’s Imitation Game and the Quest for Human-Level AI
1/ In 1950, Alan Turing proposed the "imitation game" as a test for AI – could it imitate a human so well that its answers were indistinguishable from a human's? Since then, creating AI that matches human intelligence has been a goal.
-
AGI Risk and Cybersecurity Threats Within Next Decade
i agree on being close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk. and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too.
-
AI Value Alignment: Users vs Creators Intent Debate
interesting watching people start to debate whether powerful AI systems should behave in the way users want or their creators intend. the question of whose values we align these systems to will be one of the most important debates society ever has.
-
ChatGPT Capabilities Debate and the Exponential AI Progress Trajectory
interesting to me how many of the ChatGPT takes are either "this is AGI" (obviously not close, lol) or "this approach can't really go that much further". trust the exponential. flat looking backwards, vertical looking forwards.
-
Deep RL at Peak Hype in 2016, AGI Hopes
Deep RL was at peak hype during NIPS 2016. Everyone thought training a ConvNet to play a few Atari games using Q-Learning would lead to AGI.
-
Adjacent Impossible and AGI-Hard: Categorizing AI Progress
There's two lists to maintain – i've been calling it the "Adjacent Impossible" (stuff that it'll probably do soon) and "AGI-hard" (after @tszzl) https://lspace.swyx.io/p/agi-hard