I'm not sure what you mean about 'waking up'. I do believe AI systems will have problem-solving abilities far beyond humans on many, perhaps most, dimensions and that in many cases, AI systems will do things that we didn't anticipate.
AGI
-
Superintelligence Tomorrow Would Mean Everyone Dead
So you'd rely on having more time. Would you agree then that if something scaled to superintelligence tomorrow, or got smart enough to start self-improving tomorrow, everyone would be dead? Seems like the sort of important fact you might want to communicate to, say, Congress.
-
Alignment Solutions and Internal Misalignment Concerns
1. I await you or anyone else showing their solution in enough detail that it can be analyzed. 2a. Precise and exact alignment of external outcomes is a luxurious concern to have. Let's work on "any alignment at all" first. 2b. Small amounts of internal misalignment will be
-
Following 70,000 AI Accounts for Specific Content Discovery
I follow 70,000 in AI looking for stuff like this.
-
The Unsolved Problem of Defining Low Impact for Superintelligence
I don't think either of these have been falsified. We still don't know how to say "low impact" in a way that holds up to a superintelligence, and if you think ChatGPT demonstrates otherwise then you didn't understand the original problem.
-
Three realizations about AI alignment difficulty and lack of progress
Realized that alignment was necessary, then realized that alignment was hard, then realized we were not on track to make it or even come close.
-
Prioritizing Risk Regulation: Drugs, Viruses, and AGI Research
Actually, I'd personally say it should be much easier to go sell a new drug that only injures voluntary customers in the worst case? But do shut down gain-of-function virus research which could kill millions of non-customers. And shut down AGI research that could kill everyone. https://t.co/89BRlZwWD5
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) June 30, 2023
-
Testing Honesty of Smarter Entities Before Major Decisions
How do you test their honesty, in advance of the big gamble, if they're smarter than you?
-
AI Industry Leaders Issue Extinction Risk Warning Statement
#AI industry and researchers sign statement warning of ‘extinction’ risk https://cnn.it/3C0kqBB #ethics #FutureofWork
-
Following 70,000 AI Accounts: Inside an AI Enthusiast’s Feed
I am the only human to follow 70,000 in AI. On vacation now but check out my like feed.