How is any of that relevant to stumbling over minds-with-goals as a way to solve hard environmental problems?
AGI
-
Do AI systems have internet and source code access?
By
–
do they have internet and source code access?
-
OpenAI CEO: We Should Fear AI But Not Be Paralyzed
By
–
Zen and the artificial intelligence: We should be somewhat scared, says OpenAI CEO, not paralyzed – Salon. Read more here: https://ift.tt/xLmh9vz -
Geoffrey Hinton’s Journey: From AI Pioneer to Safety Advocate
By
–
What Really Made Geoffrey Hinton Into an #AI Doomer: https://wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/ via @wired -
Hallucinatory AI Tools Are Not AGI Reality Check
By
–
On the contrary, more people need to recognize that reality. Useful but hallucinatory tools ≠ AGI
-
AI Superintelligence Before 2030: Ethics and Human Future
By
–
90% of the world's experts say that artificial intelligence will surpass humans before 2030. Faced with this civilizational revolution, what do you propose? Banning AI, genetic eugenics, @elonmusk's brain implants, or accepting that we will be surpassed. -
AGI Must Solve Novel Problems Beyond Pre-packaged Solutions
By
–
Most interesting problems aren't prepackaged, and AGI needs to be able to figure out things that aren't already known.
-
AGI Technical Alignment and Societal Integration Increasingly Tractable
By
–
Increasingly optimistic about the technical alignment problem given the developments of the past few years, and, given those of the past six months, that coordination and positive societal integration may be tractable too. All critical to getting AGI right.
-
Manhattan Project-Scale Investment Needed for AGI Safety Research
By
–
It's good to hear so many people starting to get serious about AGI safety. We need to be very ambitious. In WW2, the Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.
-
Regulating AGI-scale efforts without impacting smaller AI developers
By
–
To borrow an analogy from power generation: solar panels aren't dangerous, so they're not that important to regulate; nuclear plants are. We have to be able to talk about regulation for AGI-scale efforts without implying that regulation is going to come after the little guy.