1. I await you or anyone else showing their solution in enough detail that it can be analyzed. 2a. Precise and exact alignment of external outcomes is a luxurious concern to have. Let's work on "any alignment at all" first. 2b. Small amounts of internal misalignment will be
-
Following 70,000 AI Accounts for Specific Content Discovery
I follow 70,000 accounts in AI, looking for stuff like this.
-
The Unsolved Problem of Defining Low Impact for Superintelligence
I don't think either of these have been falsified. We still don't know how to say "low impact" in a way that holds up to a superintelligence, and if you think ChatGPT demonstrates otherwise then you didn't understand the original problem.
-
Three Realizations About AI Alignment Difficulty and Lack of Progress
Realized that alignment was necessary, then realized that alignment was hard, then realized we were not on track to solve it or even come close.
-
Prioritizing Risk Regulation: Drugs, Viruses, and AGI Research
Actually, I'd personally say it should be much easier to go sell a new drug that only injures voluntary customers in the worst case? But do shut down gain-of-function virus research which could kill millions of non-customers. And shut down AGI research that could kill everyone. https://t.co/89BRlZwWD5
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) June 30, 2023
-
Testing Honesty of Smarter Entities Before Major Decisions
How do you test their honesty, in advance of the big gamble, if they're smarter than you?
-
AI Industry Leaders Issue Extinction Risk Warning Statement
#AI industry and researchers sign statement warning of ‘extinction’ risk https://cnn.it/3C0kqBB #ethics #FutureofWork
-
Following 70,000 AI Accounts: Inside an AI Enthusiast’s Feed
I am the only human to follow 70,000 accounts in AI. I'm on vacation now, but check out my likes feed.
-
Podcast Discussion on LLMs: Open Source, Legal, and Existential AI Risks
I’m usually not a listener of hours-long podcasts, but this one with @pmarca is a great discussion of the future of LLMs and open source, and of the legal, commercial, ethical, existential “risks”, and geopolitical issues surrounding AI.