Spoke to @yacineMTB last week — the dude is insanely smart (I asked him to dumb it down for the Overpowered audience). We spoke about everything from AGI to philosophy. And yes, face reveal!
AGI
-
AI Worst-Case Scenario: Ethics and Digital Transformation
#AI: The worst-case scenario https://bit.ly/46ldAUY #ethics #DigitalTransformation
-
AI Industry and Researchers Warn of Extinction Risk
#AI industry and researchers sign statement warning of ‘extinction’ risk https://cnn.it/3C0kqBB #ethics #FutureofWork
-
GPT-4 Intelligence Phase Transition and Emergence Phenomenon
The human brain is far too good for the purpose for which it evolved. Intelligence might therefore suddenly emerge through some kind of phase transition at some level of complexity. Might something similar have happened to GPT-4? I find it too intelligent for how it was trained.
-
Flynn warns AI capabilities are both impressive and dangerous
Flynn adds, on GAI: "This type of digital push button is simultaneously impressive and scary."
-
LLMs Lack Many Assumed Capabilities, Beware Hype
Today, in 2023, it's good to remember that most of the capabilities the tech industry assumes LLMs already possess aren't yet within reach. Tread this space carefully, and beware of shiny demos. Last year, I was repeatedly told that the upcoming GPT-4 was already AGI.
-
NIPS 2016 AGI Predictions on Reinforcement Learning
Half of my conversations at NIPS 2016 were about how deep RL trained on game environments and infinite simulations would lead to AGI in 5-10 years (this was immediately post-AlphaGo).
-
Human-level language understanding remains years away despite progress
In 2016, when I tweeted that human-level language understanding was many years away (which is still the case now, though we're closer), mind the context: this was in response to many people, including prominent VCs, claiming that then-current AI was nearly there and was about to
-
AI Progress: Applications vs Generality and Future Capabilities
Remember — we are making progress on AI (though far more on applications than on generality, which remains largely a green field). The progress is significant in speed and magnitude. But the conventional wisdom of the tech community about current and near-future AI capabilities
-
AI Motivation and Selective Help: Ethics of Preference
Hmm… in this sense, the motivational aspect sounds closer to love, where sacrifices might differ depending on affiliation or the gratification that helping the person gives (…or treats). So if it chooses to help you but not a stranger, does that make a difference?