20+ @cruise driverless in Austin, TX. [Thanks to Tandy Trower & @garymarcus] This system is not ready for deployment at scale, and it is not a good business model to be subjecting people going about their lives to disruptive experiments that piss them off.
REGULATION
-
Cruise’s Driverless Cars in Austin Not Ready for Scale Deployment
-
Concerns about artificial general intelligence development and risks
This is the whole artificial general intelligence situation but hey I'm just overreacting right?
-
Congress addresses exploitative labor practices in AI industry
Wow, I never thought I’d see the day. Thrilled to see Congress taking up the issue of exploitative labor practices in the AI industry, and incredible to see the work of so many people I admire cited along with my own. https://jayapal.house.gov/2023/09/13/rep-jayapal-sen-markey-lead-colleagues-in-demanding-answers-from-ai-companies-on-use-of-underpaid-overworked-data-workers/
-
TESCREAL bundle influences catastrophic risk and politics discourse
Yep, the #TESCREAL bundle is running the show. H/t @xriskology: “Leigh will borrow the estimate of Oxford philosopher Toby Ord in a speech on Friday exploring the intersection between catastrophic risk and extreme politics” https://theguardian.com/australia-news/2023/sep/22/one-in-six-chance-of-a-species-ending-event-in-next-century-labor-mp-andrew-leigh-warns
-
Eliezer Yudkowsky denies wanting to control ASI
Who told you the lie that I wanted to centralize ASI or control it? I want it to not exist. I cannot control it. It kills everyone regardless of who thinks they control it.
-
AI industry downplays existential risks despite doom rhetoric
I cannot think of any case in history where "our product will kill everyone you love" was a good marketing tactic, and when Nvidia, the major winner so far at a $1T market cap, testified before the US Senate, they pooh-poohed talk of AGI (not just doom). You have been sold quite the bill of goods.
-
Nuclear Weapons and AI: Understanding Different Threat Categories
They are of course vastly different. As Marvin Minsky observed, nuclear weapons are not really dangerous because they are not self-replicating. To this I would add that artificial pathogens are not really dangerous because they're not smart.
-
Decentralized AGI Development Better Than Singular Control
Assigning control over AGI development to one singular organization and barring everyone else from developing it is quite a misguided idea.
-
TESCREALism and AI: Utopia, Apocalypse, and Eugenics Concerns
Yes. I should ally with the Elmos and Altmans and Anthropics and other TESCREALists like Tegmark who cycle between selling utopia and apocalypse. Not to mention the eugenics roots of all this. https://youtube.com/watch?v=P7XT4TWLzJw&t=6s
-
Tech Founders’ Influence on Policy and Wealth Concentration
And they're all the co-founders, chief X officers, and whatnot of all these companies, getting gazillions and setting policy.