You need to have aligned an ASI so that the framework of human moral reasoning is Its own internal preference, before It cares more about what is moral than you care about what is clippy (leads to the most paperclips, or rhombuses, or giant clocks).
ASI Rule of Law Argument Based on Analogy, Not First Principle
By
–
And furthermore, the claim that ASIs would need to imitate human-style "rule of law" to the point of respecting human life and property rights is itself an argument from sheer analogy. I replied to an analogy with another analogy; I did not throw the first punch.
-
Multiple Transhuman AIs and Existential Risk Arguments
By
–
In reply to a claim that I couldn't possibly "back up with detailed reasoning" the above: The detailed argument not-from-analogy is that even if there's multiple AIs of equivalent transhuman intelligence levels, at the point everything finally goes down, we should expect those
-
Aligned ASIs negotiating universe fraction with unaligned counterparts
By
–
Now, if you suppose that only some ASIs would prefer to not have to worry about human property rights, and that some ASIs have been fully aligned, then sure, the aligned ASIs would be able to negotiate giving over some fraction of the universe proportional to their own
-
Superintelligence risks and historical moral panics reference class
By
–
Conversely, suppose you analogize concern about artificial superintelligences killing everyone to past moral panics about coffee and trains. Then (test 1) there are other possible choices of reference class besides "the class of warnings that turned out wrong", e.g., you could also
-
Artificial Superintelligence Could Arrive by 2027, Scientist Predicts
By
–
Artificial Superintelligence Could Arrive by 2027, Scientist Predicts https://futurism.com/artificial-superintelligence-agi-2027-goertzel … #AI #5G #IoT #GenAI #DigitalTransformation @DeepLearn007 @SpirosMargaris @pierrepinna @mvollmer1 @gvalan @Hal_Good @Analytics_699
-
Confusion about consciousness and ethical treatment of sentient beings
By
–
I'm genuinely, legitimately confused about what you meant this conversational topic to be. If something has conscious experiences in the Chalmers sense, I care about what those conscious experiences are. Secondarily, I don't believe we should lie to or cheat beings that have a
-
Moral status of consciousness and legal personhood in alien slavery
By
–
Dath ilan considers it a tough moral question for 1-month-olds. I don't know what their science of consciousness says about age 2. Are you okay with enslaving conscious aliens who can talk and beg you to stop, if there's no law to tell you that they're people?
-
AGI standards paradox: toddlers and the alien intelligence test
By
–
By the standards we are now apparently using for 'AGI', human 2-year-olds are not AGI. When I was a kid, science fiction told me the legal first-pass test for aliens you're not legally allowed to enslave was going to be "talk and build a fire".