(Unlike nuclear power…) "There exists no scientific basis or evidence for how or when AGI will emerge (if ever), leaving us with only a hypothetical risk that has captured many of our regulatory efforts today. What we are seeing from current AI leaders in their comparison to…
REGULATION
-
AI Labs Must Take the Nuclear Safety Analogy to Completion
By
–
"If AI labs are to consistently invoke exaggerated fears through comparisons to nuclear hazards, then they must be willing to take the analogy to completion. Ironically, if they were to explore the readily available safety mechanisms for nuclear components,…
-
AI’s Role in Society: Generative AI Panel Discussion
By
–
Our last panel of the day focuses on envisioning AI’s place in society. Join @RishiBommasani, @erikbryn, @microsoft’s Jaron Lanier, @joon_s_pk, & @MIT’s Ashia Wilson for a wide-ranging discussion at the HAI Fall Conference on new horizons in generative AI: https://stanford.io/46NtzuT
-
International Panel on AI Safety Proposed by Tech Leaders
By
–
Tech companies moved too late with social media. This time around, we need to get ahead. @ericschmidt and I are proposing an International Panel on AI Safety. Thanks for having me on to discuss, @SquawkCNBC @andrewrsorkin
-
Governance frameworks for AI agents: lessons from human entities
By
–
We already handle this with humans, groups, corporations, and governments.
-
Aligning AI with Common Good Easier Than Nature Modification
By
–
It is much, much easier to align AI with the Common Good than it is to align children and animals.
The reason is that we can't "hack" human nature. We can only modify it through education.
We can "hack" animal nature through selective breeding (which is pretty brutal).
We can…
-
Superintelligent Sociopaths: Existential Risk Assessment
By
–
– The proportion of sociopaths in society is way larger than 1 in a few million.
– If society is robust to sociopaths, why can't it be robust to artificial sociopaths?
– What makes you think that we will actually build super-intelligent sociopaths?
– We have super-intelligent…
-
Partnership on AI Publishes Safe Foundation Model Deployment Guidance
By
–
The Partnership on AI is publishing guidance for safe foundation model deployment. There is a request for comments on the current version: https://partnershiponai.org/modeldeployment/ …
-
AI Education Risks: Baby Hitlers in Nuclear Physics Classes
By
–
I made that point during the Munk Debate in response to Max Tegmark (who is a physics professor at MIT): "Why aren't you worried that some students in your nuclear physics class could be baby Hitlers?"
-
Open AI Research and Debunking AI Doomsday Prophecies
By
–
An interview with me in the Financial Times in which I explain the reasons for supporting open research in AI and open-source AI platforms.
I also explain why the widely-publicized prophecies of doom-by-AI are misguided and, in any case, highly premature.