AGI Agents Will Be The "Killer App" For Web3 #AGIAgent #AGIFirst
-
AGI Alignment vs. Relying on Unaligned AI Systems
AGI alignment is a different survival strategy from making lots of unaligned AIs and hoping to be included in their human-respecting legal system. If they care about you, you've got no need for multipolar copium. The hard part is that caring bit.
-
Superintelligence Indifference to Humanity and Existential Risk
Because it doesn't care one way or the other, humanity is very easy for a superintelligence to genocide, including as a side effect, and (1) the process of launching seed ships is liable to involve interim steps like building a Dyson sphere locally, and (2) even after you've
-
Aligning ASI to Human Moral Reasoning as Internal Preference
You need to have aligned an ASI such that the framework of human moral reasoning is Its own internal preference, before It cares more about what is moral than you care about what is clippy (i.e., what leads to the most paperclips, or rhombuses, or giant clocks).
-
ASI Rule of Law Argument Based on Analogy, Not First Principle
And furthermore, the claim that ASIs would need to imitate human-style "rule of law" to the point of respecting human life and property rights is itself an argument from sheer analogy. I replied to an analogy with another analogy; I did not throw the first punch.
-
Multiple Transhuman AIs and Existential Risk Arguments
In reply to a claim that the above is something I couldn't possibly "back up with detailed reasoning": The detailed argument not-from-analogy is that even if there are multiple AIs of equivalent transhuman intelligence levels, at the point everything finally goes down, we should expect those
-
Aligned ASIs Negotiating Universe Fraction with Unaligned Counterparts
Now, if you suppose that only some ASIs would prefer to not have to worry about human property rights, and that some ASIs have been fully aligned, then sure, the aligned ASIs would be able to negotiate giving over some fraction of the universe proportional to their own
-
Superintelligence Risks and Historical Moral Panics Reference Class
Conversely, suppose you analogize concern about artificial superintelligences killing everyone to past moral panics about coffee and trains. Then (test 1) there are other possible choices of reference class besides "the class of warnings that turned out wrong", e.g., you could also
-
Artificial Superintelligence Could Arrive by 2027, Scientist Predicts
Artificial Superintelligence Could Arrive by 2027, Scientist Predicts https://futurism.com/artificial-superintelligence-agi-2027-goertzel #AI #5G #IoT #GenAI #DigitalTransformation @DeepLearn007 @SpirosMargaris @pierrepinna @mvollmer1 @gvalan @Hal_Good @Analytics_699