The most advanced automated surveillance and other weapons are first tested on oppressed people. Entire books have been written about this, specifically about Palestinians, like the book below. And yet we have privileged people pontificating about "existential risks."
REGULATION
-
Influential Figures Push Governments to Ban Open Source AI
By
–
Because a bunch of influential people think AI is too dangerous to make available to everyone and are actively lobbying governments to make open source AI illegal.
-
AIPI Distances Itself from EA Despite Repeated Scandals
By
–
Colson, the leader of AIPI, has recently tried to distance himself from EA, which faced a storm of controversy last year after one of its biggest backers, the cryptocurrency magnate Sam Bankman-Fried, was arrested on fraud charges. Nothing seems to bring them down, really.
-
Billionaire-Backed Think Tanks Dominate AI Policy in Washington
By
–
Meanwhile, on this dumpster fire, "Several well-monied think tanks focusing on artificial intelligence policy have sprung up in Washington, D.C. in recent months, with most linked to the billionaire-backed effective altruism (EA) movement…"
-
USA Imposes New Restrictions on Semiconductor Chip Manufacturers
By
–
USA: Government Unveils New Restrictions Imposed on Chip Manufacturers https://actuia.com/actualite/usa-le-gouvernement-devoile-les-nouvelles-restrictions-imposees-aux-fabricants-de-puces/ … #AI #ArtificialIntelligence
-
Black-box Society: Why AI Demands Transparency Over Secrecy
By
–
Nearly a decade ago, @FrankPasquale coined the phrase "black-box society" to refer to the way tech platforms were growing ever more opaque as they increased their influence. Now we accept this secrecy when, in the age of AI, we need to resist it the most. theatlantic.com/technology/a…
→ View original post on X — @strongreporter, 2023-10-20 14:24 UTC
-
Responsible AI Development Versus Surface Level Alarmism
By
–
This is true. But I see a big difference between responsible AI development that builds a more resilient humanity, and surface-level alarmism.
-
AI Alarmists Push Extreme Measures: Regulation Debate Intensifies
By
–
AI alarmists are attempting to shift the Overton Window by proposing extreme measures like rolling back GPT-4, halting AI development, and even suggesting bombings of non-compliant data centers. Their intent is clear: push the boundaries in hopes of securing some form of
-
AI Regulation Risk: Development Delays Threaten Humanity
By
–
We must prevent AI risk alarmists from capturing the regulatory discussion. Artificially delayed or stopped AI development is an existential risk to humanity!
-
ChatGPT Security Risk: Dream Reveals Training Data Vulnerability
By
–
Last night, I had a chilling dream: I used ChatGPT to analyze our secret game design documents. To my utter disbelief, within that dream, these documents became the core training data for the next ChatGPT iteration. Suddenly, anyone could pry into our most guarded secrets just by