Here's Yann ignoring the most basic questions about his alignment plans and assurances.
AGI
-
Human Knowledge Limits Against Infinite Complexity
By
–
What one human knows is an infinitesimal fraction of what humanity knows, and what humanity knows is an infinitesimal fraction of what there is to know.
-
Better reasoning reduces AI data requirements exponentially
By
–
Better reasoning exponentially reduces the amount of data you need.
-
Robotics Progress Slower Than Data Suggests, AGI Decade Away
By
–
Even with unlimited data, progress in robotics is 10X slower because of the infinite things that continuously go wrong. So even in an ideal scenario we're more than a decade from AGI.
-
Microsoft Recall: AGI Blackmail Risk Before Superintelligence
By
–
The promise of Microsoft Recall is that extremely early AGIs will have all the info they need to launch vast blackmail campaigns against huge swathes of humanity, at a time when LLMs are still stupid enough to lose the resulting conflict.
-
Harmless Supernova Fallacy: Bounded Therefore Harmless
By
–
Harmless supernova fallacy, "bounded therefore harmless". https://arbital.com/p/harmless_supernova/
-
Building Objectives for Chess Search Trees with Known World States
By
–
We can easily build objectives for chess search trees, because: the complete state of the world is represented by a type known at compile-time; and we can handwrite a function to say exactly what states of the world we want, as described by that known representation. An AGI
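As a rough illustration of the point (my own sketch, not from the original post): the complete "world state" of a chess game fits in a type the compiler knows in advance, so the objective can be an ordinary handwritten function over that type. The material-count evaluation below is a hypothetical toy example.
```rust
// Sketch: the complete "world state" of chess is a compile-time type,
// so we can handwrite a function that says exactly which states we want.

#[derive(Clone, Copy)]
enum Color { White, Black }

#[derive(Clone, Copy)]
enum Piece { Pawn, Knight, Bishop, Rook, Queen, King }

// The entire state of the world: 64 squares plus whose turn it is.
// Nothing relevant to the game lives outside this representation.
struct Board {
    squares: [Option<(Color, Piece)>; 64],
    to_move: Color,
}

// A handwritten objective over that known type: material balance from
// White's point of view (a standard toy evaluation, used here purely
// as an illustration).
fn objective(board: &Board) -> i32 {
    board.squares.iter().flatten().map(|&(color, piece)| {
        let value = match piece {
            Piece::Pawn => 1,
            Piece::Knight | Piece::Bishop => 3,
            Piece::Rook => 5,
            Piece::Queen => 9,
            Piece::King => 0, // king capture is handled by the game rules
        };
        match color { Color::White => value, Color::Black => -value }
    }).sum()
}
```
The contrast with an AGI is that no comparable compile-time type captures the real world, and no handwritten function picks out the states we want.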
-
AI Objectives: Learned vs Human-Coded Implementation Types
By
–
It's also described in my "Creating Friendly AI" from 2001. What sort of objectives do *you* have in mind? Learned? Human-coded? What's their type signature?
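For concreteness, one way the type-signature question could be rendered (hypothetical placeholder types of my own, not from "Creating Friendly AI"): a human-coded objective is a readable function over a known world-state type, while a learned objective is implicit in opaque trained weights.
```rust
// Hypothetical signatures contrasting the two kinds of objective.
// All types here are illustrative placeholders, not anyone's actual API.

struct WorldState;   // a known, handwritten representation of the world
struct Observation;  // raw sensory input to a learned model
struct Weights;      // opaque parameters produced by training

// Human-coded: we can read the source and say exactly what it rewards.
fn human_coded_objective(_state: &WorldState) -> f64 {
    0.0 // placeholder score
}

// Learned: the "objective" lives in the weights; the type tells us only
// that observations go in and a scalar comes out.
fn learned_objective(_weights: &Weights, _obs: &Observation) -> f64 {
    0.0 // placeholder score
}
```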
-
Intentional AI Development and Gradual Capability Emergence
By
–
– People are trying to build it on purpose
– ChatGPT didn't need to start talking "suddenly" to start talking at some point
-
LHC Black Holes to AGI: History Repeating Fear Cycles
By
–
this same LHC which mainstream news feared could create black holes that would destroy the Earth 🙂 – history keeps repeating itself. Yesterday it was fear of black holes from an uncontrollable particle collider; today it is fear of Terminator-like risk from uncontrollable AGI