"If A.G.I.-ism = neoliberalism…then we should be ready to see fewer — not more — intelligence-enabling institutions….A.G.I.'s grand project of amplifying intelligence may end up shrinking it." https://nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html
-
AI Sentience: Three Strategic Perspectives for the Future
What Happens When AI Becomes Sentient? 3 Strategic Outlooks
#AI #AIio #BigData #ML #NLU #Futureofwork http://ow.ly/awch30svSf4
-
What Happens When AI Becomes Sentient? Three Strategic Outlooks
What Happens When AI Becomes Sentient? 3 Strategic Outlooks
#AI #AIio #BigData #ML #NLU #Futureofwork @CRudinschi @AntonioSelas @alexjc @RobotLaunch @karpathy @andyjankowski @bobgourley @CadeMetz http://ow.ly/QUI430svStr
-
Expected Utility Theory Limitations in Modern AI Systems
It looks like a restatement of expected utility theory, dealing with none of the places where I expect difficulty to lie in an EU paradigm, such as "getting any utility function whatsoever into an AI built along anything remotely resembling the modern paradigm" and "stating a
-
Building AGI with $23 and Unconventional Materials
You’re probably wondering how I built AGI with just $23, five cans of black beans, an old Nintendo console, and chewing gum. Let me explain!
-
AI Problem-Solving Abilities Beyond Human Capabilities and Unpredictability
I'm not sure what you mean about 'waking up'. I do believe AI systems will have problem-solving abilities far beyond humans on many, perhaps most, dimensions and that in many cases, AI systems will do things that we didn't anticipate.
-
Superintelligence Tomorrow Would Mean Everyone Dead
So you'd rely on having more time. Would you agree then that if something scaled to superintelligence tomorrow, or got smart enough to start self-improving tomorrow, everyone would be dead? Seems like the sort of important fact you might want to communicate to, say, Congress.
-
Alignment Solutions and Internal Misalignment Concerns
1. I await you or anyone else showing their solution in enough detail that it can be analyzed. 2a. Precise and exact alignment of external outcomes is a luxurious concern to have. Let's work on "any alignment at all" first. 2b. Small amounts of internal misalignment will be
-
Following 70,000 AI Accounts for Specific Content Discovery
I follow 70,000 accounts in AI looking for stuff like this.
-
Global AI Safety: Insights from 25 Cities Across Six Continents
In May and June, we traveled to 25 cities across 6 continents to better understand how users, developers, and government leaders are thinking about the creation and deployment of safe AI — from today’s AI to superintelligence. Here’s what’s next: