I am following 70,000 accounts in AI here, and have been doing that for years. No way in hell is that being rebuilt. That said, I will try it for sure.
AGI
-
AGI and Neoliberalism: Risk of Intelligence Reduction
By
–
"If A.G.I.-ism = neoliberalism…then we should be ready to see fewer — not more — intelligence-enabling institutions….A.G.I.'s grand project of amplifying intelligence may end up shrinking it." https://nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html?smid=nytcore-ios-share&referringSource=articleShare
-
AI Sentience: Three Strategic Perspectives for the Future
By
–
What Happens When AI Becomes Sentient? 3 Strategic Outlooks
#AI #AIio #BigData #ML #NLU #Futureofwork http://ow.ly/awch30svSf4
-
What Happens When AI Becomes Sentient Three Strategic Outlooks
By
–
What Happens When AI Becomes Sentient? 3 Strategic Outlooks
#AI #AIio #BigData #ML #NLU #Futureofwork @CRudinschi @AntonioSelas @alexjc @RobotLaunch @karpathy @andyjankowski @bobgourley @CadeMetz http://ow.ly/QUI430svStr
-
Expected Utility Theory Limitations in Modern AI Systems
By
–
It looks like a restatement of expected utility theory, dealing with none of the places where I expect difficulty to lie in an EU paradigm, such as "getting any utility function whatsoever into an AI built along anything remotely resembling the modern paradigm" and "stating a
-
AI Problem-Solving Abilities Beyond Human Capabilities and Unpredictability
By
–
I'm not sure what you mean about 'waking up'. I do believe AI systems will have problem-solving abilities far beyond humans on many, perhaps most, dimensions and that in many cases, AI systems will do things that we didn't anticipate.
-
Superintelligence Tomorrow Would Mean Everyone Dead
By
–
So you'd rely on having more time. Would you agree then that if something scaled to superintelligence tomorrow, or got smart enough to start self-improving tomorrow, everyone would be dead? Seems like the sort of important fact you might want to communicate to, say, Congress.
-
Alignment Solutions and Internal Misalignment Concerns
By
–
1. I await you or anyone else showing their solution in enough detail that it can be analyzed. 2a. Precise and exact alignment of external outcomes is a luxurious concern to have. Let's work on "any alignment at all" first. 2b. Small amounts of internal misalignment will be
-
Following 70,000 AI Accounts for Specific Content Discovery
By
–
I follow 70,000 accounts in AI looking for stuff like this.
-
The Unsolved Problem of Defining Low Impact for Superintelligence
By
–
I don't think either of these have been falsified. We still don't know how to say "low impact" in a way that holds up to a superintelligence, and if you think ChatGPT demonstrates otherwise then you didn't understand the original problem.