Depends on whether you end up with aesthetics-appreciating, empathic, Jupiter-sized brains a few millennia later. That's where most of the utility is.
AGI
-
AI may surpass human cognition within a decade absent guardrails and technical advances
By
–
“At some point over the next decade, without real guardrails and new technical advances, it's possible that an AI might quickly get better than us at every conceivable cognitive task.”
-
AIXI requires motor output and sensory input
By
–
AIXI does require some motor output and sensory input. I was under the impression from the thought experiment that it did have those.
-
AIXI Hypercomputation and Quantum Universe Reasoning
By
–
Note for the uninitiated: AIXI is a hypercomputation formalism. AIXI reasons at the level of "extrapolate the entire quantum universe, figure out which Earths send me this exact sense data (weighted by frequency)". If different sets of amino acids yield trees that look
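The "weighted by frequency" idea can be caricatured with a toy Solomonoff-style predictor. This is a sketch only: real AIXI's hypothesis class is all computable environments (which is why it needs hypercomputation), whereas here the "programs" are just short periodic bit strings, weighted by 2^(-length):

```python
from itertools import product

def predict_next_bit(history):
    """Toy Solomonoff-style predictor (illustrative only, not real AIXI).

    Hypotheses are all bit strings up to length 8, read as periodic
    'programs' that emit their bits cyclically. Each hypothesis gets
    prior weight 2**(-len(h)); hypotheses inconsistent with the observed
    history are discarded, and the survivors vote on the next bit,
    weighted by their priors (shorter programs count for more).
    """
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, 9):
        for prog in product([0, 1], repeat=n):
            # Keep only programs that reproduce the entire history.
            if all(prog[i % n] == b for i, b in enumerate(history)):
                nxt = prog[len(history) % n]
                weights[nxt] += 2.0 ** (-n)
    return max(weights, key=weights.get)

print(predict_next_bit([0, 1, 0, 1, 0]))  # simplest consistent program is (0, 1), so it predicts 1
```

The simplicity prior is doing the work: the length-2 program (0, 1) outweighs every longer consistent hypothesis combined, which is the same Occam-style pressure that makes AIXI's posterior concentrate on compact world-models.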
-
AIXI Bootstrapping Nanotech from Biological Materials
By
–
WOW. I'd expect AIXI to bootstrap nanotech out of "eye of newt and toe of frog" – figuring out what sort of proteins were probably around it, optimal tests to narrow down any further uncertainty, mixing them in ways that produced predictable shapes that assembled into a
-
Neural CAs and Open-Ended Evolution: Towards AGI Through ALife
By
–
Interesting work that uses Neural CAs to study open-ended evolution and complexification by simulating an artificial ecosystem.
— hardmaru (@hardmaru) July 20, 2023
I think we need ALife as a stepping stone to reach "AGI" (or highly adaptive AI). Artificial Life >> Artificial Intelligence‼️ https://t.co/D3kEuOsbRk https://t.co/G7IVUAWYID https://google-research.github.io/self-organising-systems/2023/biomaker-ca/
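For readers unfamiliar with the idea, a neural cellular automaton replaces a CA's hand-written update rule with a small neural network applied identically at every cell. A minimal 1-D sketch (the function name, architecture, and weights here are illustrative assumptions, not taken from the Biomaker CA paper):

```python
import numpy as np

def nca_step(grid, w1, b1, w2, b2):
    """One update of a toy 1-D neural cellular automaton.

    Each cell perceives itself and its two neighbours (periodic
    boundary), feeds that 3-vector through a tiny two-layer MLP, and
    the output becomes the cell's next state. In work like Biomaker CA,
    it is these shared weights that evolution/learning optimises.
    """
    left = np.roll(grid, 1)
    right = np.roll(grid, -1)
    perception = np.stack([left, grid, right], axis=-1)  # shape (N, 3)
    hidden = np.maximum(perception @ w1 + b1, 0.0)       # ReLU layer
    return np.tanh(hidden @ w2 + b2).squeeze(-1)         # next state, in [-1, 1]

# Usage: random weights, 16 cells, iterate the rule a few times.
rng = np.random.default_rng(0)
grid = rng.random(16)
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
for _ in range(5):
    grid = nca_step(grid, w1, b1, w2, b2)
```

Because the same rule runs everywhere with only local inputs, global structure can only emerge from repeated local interactions, which is what makes the setup a natural substrate for studying open-ended evolution.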
-
The Illusion of AI’s Existential Risk
By
–
https://noemamag.com/the-illusion-of-ais-existential-risk/
-
AI Leaders Warn About Extinction Risk in Open Letter
By
–
#AI leaders warn about ‘risk of extinction’ in open letter https://bit.ly/3JsY3IO #ethics #leadership #FutureofWork
-
Positive Long-Term Future Vision for Artificial Intelligence
By
–
What best describes your view of a positive future on a long time horizon?
-
Harari discusses AI dangers, human nature and civilization origins
By
–
Here's my conversation with Yuval Noah Harari (@harari_yuval) about human nature, intelligence, power, war, communism, fascism, origins of human civilization & the dangers of AI. Yuval also responds critically to my conversation with Benjamin Netanyahu. https://youtube.com/watch?v=Mde2q7GFCrw