And not just ANY AI people. Co-founder of DeepMind and their "chief AGI scientist," as listed in the bio.
-
Confrontation with Shane Legg over paper misrepresentation claims
By
–
Well. I now have a guess for where they got this from, and Shane Legg is out here telling me I’m misrepresenting his paper. The nerve. https://arxiv.org/pdf/0712.3329.pdf
-
Eliezer Yudkowsky denies wanting to control ASI
By
–
Who told you the lie that I wanted to centralize ASI or control it? I want it to not exist. I cannot control it. It kills everyone regardless of who thinks they control it.
-
Scientific Definition and Measurement of Intelligence Consensus
By
–
LOL this is the continuation of the paper which everyone can read: https://arxiv.org/pdf/0712.3329.pdf
"Although the details of the definition are debated, in broad terms a fair degree of consensus about the scientific definition of intelligence and how to measure it has been achieved." -
Personal AI Agents Could Become Your Legal Proxy
By
–
Your future personal intelligence will have your legal proxy https://wired.com/story/plaintext-smarter-ai-assistants-could-make-it-harder-to-stay-human/
-
AI industry downplays existential risks despite doom rhetoric
By
–
I cannot think of any case in history where "our product will kill everyone you love" was a good marketing tactic, and when Nvidia, the major winner so far at a $1T cap, testified before the US Senate, they pooh-poohed talk of AGI (not just doom). You have been sold quite the bill of goods.
-
Defining and Measuring Intelligence: Scientific Consensus Debate
By
–
You don't have to read his mind. You can read the text and the citations, and the number of other things people pointed out in the thread, including his claims that there's "a fair degree of consensus about the scientific definition of intelligence and how to measure it."
-
AI Agents: Future of Autonomous Goal-Achieving Systems
By
–
Imagine an AI that can understand your high-level goals and use all its tools and resources to achieve them—talking to people, to other AIs, and anything else it needs. That's the future of AI, and it's closer than you think.
-
Superintelligence Deception vs Robot Training: Adversarial Problem
By
–
Fooling a superintelligence is a more adversarial problem than training a robot. In the latter case intelligence works with you; in the former, against you. "Can learn from" and "cannot distinguish" are different orders of requirement.
-
Decentralized AGI Development Better Than Singular Control
By
–
Assigning control over AGI development to one singular organization and barring everyone else from developing it is quite a misguided idea.