Who told you the lie that I wanted to centralize ASI or control it? I want it to not exist. I cannot control it. It kills everyone regardless of who thinks they control it.
AGI
-
Scientific Definition and Measurement of Intelligence Consensus
By
–
LOL this is the continuation of the paper which everyone can read: https://arxiv.org/pdf/0712.3329.pdf
"Although the details of the definition are debated, in broad terms a fair degree of consensus about the scientific definition of intelligence and how to measure it has been achieved." -
AI industry downplays existential risks despite doom rhetoric
By
–
I cannot think of any case in history where "our product will kill everyone you love" was a good marketing tactic. And when Nvidia, the major winner so far at a $1T cap, testified before the US Senate, they pooh-poohed talk of AGI (not just doom). You have been sold quite the bill of goods.
-
Defining and Measuring Intelligence: Scientific Consensus Debate
By
–
You don't have to read his mind. You can read the text and the citations, and the number of other things people pointed out in the thread, including his claim that there's "a fair degree of consensus about the scientific definition of intelligence and how to measure it."
-
Superintelligence Deception vs Robot Training: Adversarial Problem
By
–
Fooling a superintelligence is a more adversarial problem than training a robot. In the latter case intelligence works with you; in the former, against you. "Can learn from" and "cannot distinguish" are different orders of requirement.
-
Decentralized AGI Development Better Than Singular Control
By
–
Assigning control over AGI development to one singular organization and barring everyone else from developing it is quite a misguided idea.
-
Pharma Executive’s Controversial Path to AGI Leadership
By
–
LOL no wonder pharma grifter bro grifted his way into "AGI": https://t.co/pDDPgip3lJ https://t.co/I3fwSzVNFJ
— @timnitGebru (@dair-community.social/bsky.social), September 22, 2023
https://en.wikipedia.org/wiki/Martin_Shkreli
-
Reverting GPT-4 increases x-risk, not safety measure
By
–
Reverting to a pre-GPT-4 state heightens x-risk, as we'd lose our only near-AGI system for testing, learning, and making potential incremental progress towards AGI. This isn't good advice.
-
TESCREALism and AI: Utopia, Apocalypse, and Eugenics Concerns
By
–
Yes. I should ally with the Elmos and Altmans and Anthropics and other TESCREALists like Tegmark who cycle between selling utopia and apocalypse. Not to mention the eugenics roots of all this. https://youtube.com/watch?v=P7XT4TWLzJw&t=6s
-
AGI Apocalypse Narratives and Their Policy Influence
By
–
There needs to be a support group for those of us watching the ridiculous #AGI apocalypse and utopian cults steer policy, research, education (they got all the undergrads), and PR.