That's like saying 'strong gov't controls over nuclear weapons should concern us more than market competition between nuclear weapons producers'. All the leading AI CEOs agree that stronger AI is likely to pose an extinction risk — and most are begging for gov't regulation to reduce that risk. And you want to just let them develop unregulated tech that endangers us all? Why exactly do you think that would be prudent?
Original post: @esyudkowsky on X, 2026-03-26 19:24 UTC