It seems like you watched this presentation by Max Tegmark. I'm glad that preventing the "cosmocalypse" 10^11 years from now, which can supposedly only be stopped by AI, is the priority of governing bodies, while they are simultaneously trying to stop "AI" from posing an existential risk.
AI Existential Risk vs Cosmocalypse Prevention: Governance Dilemma