Nothing new here.
Religious and political dogmas have been brainwashing people into doing bad things for millennia. It will be much easier to stop superhuman AI from brainwashing people than it's been to stop conventional brainwashing.
In fact, AI might help.
-
AI Against Brainwashing: Superintelligence as Solution
-
Global Leaders Warn AI Could Cause Catastrophic Harm
Global Leaders Warn #AI Could Cause ‘Catastrophic’ Harm https://nytimes.com/2023/11/01/world/europe/uk-ai-summit-sunak.html?smid=nytcore-android-share …
-
Open Source AI Platforms Against Catastrophe Risk Claims
An article about my vociferous support of open source AI platforms.
Demis Hassabis, Dario Amodei, and Sam Altman (among others) have scared governments about what they claim are risks of AI-fueled catastrophes. I know that Demis, at least, is sincere in his claims, but I think -
AI Extinction Risk: Human Agency and Potential Salvation
My estimate is:
"considerably less than most other potential causes of human extinction"
Because we have agency in this. It's not like some sort of natural phenomenon that we can't stop. Conversely, AI could actually save humanity from extinction.
What is your estimate for that -
UK AI Safety Summit Addresses Existential Risks and International Coordination
The UK's big AI safety summit takes place tomorrow. It's a two-day event that's set to be jam-packed with chatter on the existential risks surrounding AI – and with attempts to find some international coordination on the technology (no mean feat with both the US and China attending).
-
Ilya Sutskever shares OpenAI hopes and fears for AI future
Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI https://bit.ly/3MpG07U #AI #MachineLearning #DeepLearning #LLMs #DataScience -
Kantian Ethics and AI: Non-harm and Human Dignity Principles
For example, Kantians argue that we should follow rules such as "you should not kill", and that we should not use people merely as a means to an end.
-
Unruggable AGI Subdomains: Mint Your YourName.AGI.Eth
[ U N R U G G A B L E S U B N A M E S ] AGI.Eth “World's Most Coveted #AGI Web3 Asset” . . . YourName.AGI.Eth Mint : 0.06 Eth Example : .agi.eth : https://opensea.io/assets/ethereum/0xd4416b13d2b3a9abae7acd5d6c2bbdbe25686401/90918927966967027843990033083714851453214269239295859044683801445586923082766 … #AGIFirst #ENS #ensdomains -
AI Model Size Regulation and Training Compute Requirements
Regulation starts at roughly two orders of magnitude above a ~70B-parameter Transformer trained on 2T tokens, which is ~5e24. Note: increasing the size of the dataset OR the size of the Transformer increases training FLOPs. The (rumored) size of GPT-4 is regulated.
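The arithmetic behind this can be sketched with the common 6ND approximation (total training FLOPs ≈ 6 × parameters × training tokens). This is a minimal sketch under that assumption; the post's ~5e24 figure may rest on a different counting convention, so the helper and numbers below are illustrative, not the author's calculation.

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate dense-Transformer training compute via the 6*N*D rule."""
    return 6 * params * tokens

# A ~70B-parameter model trained on 2T tokens:
base = training_flops(70e9, 2e12)
print(f"~70B params x 2T tokens: {base:.1e} FLOPs")

# Growing EITHER the model OR the dataset raises training FLOPs linearly;
# two orders of magnitude more compute than the base run:
print(f"100x the base run: {100 * base:.1e} FLOPs")
```

By this rule the base run comes out near 1e24 FLOPs, and scaling either factor by 10x scales total compute by 10x, which is why both dataset size and model size matter for a compute-based threshold.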
-
Current AI limitations require fundamental innovations for AGI
Perhaps there is a waterfall, but we are not on that river right now.
The river we are on leads to a pond with no navigable exit.
A dead end.
To get to animal-level AI, we need to invent the helicopter.
We can't describe a safe helicopter because we can't describe a helicopter.