#AI industry and researchers sign statement warning of ‘extinction’ risk https://cnn.it/3C0kqBB #ethics #FutureofWork
AGI
-
AI Industry and Researchers Warn of Extinction Risk
By
–
-
GPT-4 Intelligence Phase Transition and Emergence Phenomenon
By
–
The human brain is far too good for the purpose for which it evolved. Intelligence might therefore suddenly emerge through some kind of phase transition at some level of complexity. Might something similar have happened with GPT-4? I find it too intelligent for how it was trained.
-
Flynn warns AI capabilities are both impressive and dangerous
By
–
Flynn adds, on GAI: "This type of digital push button is simultaneously impressive and scary."
-
LLMs Lack Many Assumed Capabilities, Beware Hype
By
–
Today, in 2023, it's good to remember that most of the capabilities that the tech industry assumes LLMs to already possess aren't yet within reach. Tread this space carefully, and beware of shiny demos. Last year, I was repeatedly told that the upcoming GPT-4 was already AGI.
-
NIPS 2016 AGI Predictions on Reinforcement Learning
By
–
Half of my conversations at NIPS 2016 were about how deep RL trained on game environments and infinite simulations would lead to AGI in 5-10 years (this was immediately post-AlphaGo).
-
Human-level language understanding remains years away despite progress
By
–
In 2016, when I tweeted that human-level language understanding was many years away (which is still the case now, though we're closer), mind the context: this was in response to many people, including prominent VCs, claiming that then-current AI was nearly there and was about to
-
AI Progress: Applications vs Generality and Future Capabilities
By
–
Remember — we are making progress on AI (though far more on applications than on generality, which remains largely a green field). The progress is significant in speed and magnitude. But the conventional wisdom of the tech community about current and near-future AI capabilities
-
General Ability vs Situational Competence in AI Systems
By
–
There's a big difference between possessing a *general* ability vs being able to show the appearance of competence in specific situations. AI has always excelled at the latter, but the value is in the former.
-
Notkilleveryoneism: AI as Unworthy Heir to Humanity
By
–
Similarly, be aware that notkilleveryoneism is not "biological humans should be on top of Existence forever, because carbon chauvinism". Notkilleveryoneism is "We predict that, at the current rate, AI will kill everyone because it will not be a worthy heir to humanity."
-
Health and Longevity Matter, But What Comes After
By
–
To be explicit, I'd very much accept human health and longevity as a preferable alternative to our total extermination and replacement by squiggle-maximizers. But what comes after health and longevity does matter.