The call-out quotes alone in this thread are going to be mind-blowing for many. It’s pretty remarkable to hear how these AI entrepreneurs and leaders actually view the world and the future.
AGI
-
LLMs Lack Many Assumed Capabilities, Beware Hype
By
–
Today, in 2023, it's good to remember that most of the capabilities that the tech industry assumes LLMs to already possess aren't yet within reach. Tread this space carefully, and beware of shiny demos. Last year, I was repeatedly told that the upcoming GPT-4 was already AGI.
-
NIPS 2016 AGI Predictions on Reinforcement Learning
By
–
Half of my conversations at NIPS 2016 were about how deep RL trained on game environments and infinite simulations would lead to AGI in 5-10 years (this was immediately post-AlphaGo).
-
Human-level language understanding remains years away despite progress
By
–
In 2016, when I tweeted that human-level language understanding was many years away (which is still the case now, though we're closer), mind the context: this was in response to many people, including prominent VCs, claiming that then-current AI was nearly there and was about to
-
AI Progress: Applications vs Generality and Future Capabilities
By
–
Remember — we are making progress on AI (though far more on applications than on generality, which remains largely a green field). The progress is significant in speed and magnitude. But the conventional wisdom of the tech community about current and near-future AI capabilities
-
AI Motivation and Selective Help: Ethics of Preference
By
–
Hmm… in this sense, the motivational aspect sounds closer to love, where sacrifices might differ depending on affiliation, or on the gratification that helping the person gives (…or treats). So if it chooses to help you but not a stranger, does it make a difference?
-
General Ability vs Situational Competence in AI Systems
By
–
There's a big difference between possessing a *general* ability vs being able to show the appearance of competence in specific situations. AI has always excelled at the latter, but the value is in the former.
-
Pet Robots and Machine Empathy: Can Robots Reach Dog-Level Compassion?
By
–
What do we think about other levels of empathy? For example, could a pet robot reach dog-level empathy if it exhausts its battery trying to help you?
-
Notkilleveryoneism: AI as Unworthy Heir to Humanity
By
–
Similarly, be aware that notkilleveryoneism is not "biological humans should be on top of Existence forever, because carbon chauvinism". Notkilleveryoneism is "We predict that, at the current rate, AI will kill everyone because it will not be a worthy heir to humanity."
-
Health and Longevity Matter, But What Comes After
By
–
To be explicit, I'd very much accept human health and longevity as a preferable alternative to our total extermination and replacement by squiggle-maximizers. But what happens after that health and longevity does matter.