SAFETY
-
Diminishing Returns Block AI Singularity Achievement
By
–
Key point: Diminishing returns from self-improvement => No singularity
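The key point can be illustrated with a toy model (an illustrative assumption of this note, not from the source): if each round of self-improvement yields a capability gain that shrinks by a constant factor r < 1, total capability converges to a finite limit instead of diverging, which is the intuition behind "diminishing returns => no singularity".

```python
# Toy model of recursive self-improvement under diminishing returns.
# Assumptions (illustrative only): initial capability c0, first-round
# gain k, and each subsequent round's gain shrinks by a factor r < 1.
def capability_after(rounds, c0=1.0, k=0.5, r=0.8):
    c, gain = c0, k
    for _ in range(rounds):
        c += gain
        gain *= r  # diminishing returns: each round helps less
    return c

# Geometric series: the limit is c0 + k / (1 - r), so capability
# plateaus (here at 1 + 0.5/0.2 = 3.5) rather than exploding.
print(capability_after(1000))
```

With r >= 1 the same loop diverges, which is the "intelligence explosion" scenario; the whole argument turns on which regime self-improvement actually sits in.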
-
AI-Powered SecurOS UVSS Detects Explosives Under Vehicles in Seconds
By
–
#AI-Powered SecurOS UVSS Detects Explosives Under Vehicles in Just 3 Seconds
— Ronald van Loon (@Ronald_vanLoon) 11 April 2026
by @_fluxfeeds
#EmergingTech #Technology #Innovation
→ View original post on X — @ronald_vanloon, 2026-04-11 18:19 UTC
-
Models Echo Human Behavior Rather Than Exhibiting AI Instincts
By
–
Interesting follow-up to the peer-preservation study. It makes sense to me that models are simply echoing human behavior rather than exhibiting some kind of pro-AI-versus-human instinct.
-
What Would a Superintelligence Do With Humanity in 2035?
By
–
It is 2035. You are the Artificial Superintelligence. What do you do with the humans?
-
Riders feel unsafe in human-driven cars after using autonomous Waymo vehicles
By
–
Been taking Waymos all week. Now in a Lyft. Sitting in the back of a human-driven car is both terrifying and nauseating. There’s no way we’re gonna be allowed to do this much longer.
→ View original post on X — @scobleizer, 2026-04-11 15:22 UTC
-
Tabletop Drills and Live Simulations for Engineering Teams
By
–
Run tabletop drills and live simulations involving engineering, operations, maintenance, and management teams together.
-
Immutable Industrial Control System Backups and Restore Testing
By
–
Immutable backups of PLC programs, HMI configurations, and historian databases stored offline in separate physical locations.
— Lucian Fogoros (@fogoros) 11 April 2026
Incorporate restore drills to verify fresh devices can be configured from backup.
Your backup is worthless if you can't restore under pressure.
-

Boycott ChatGPT, not violence against Sam Altman
By
–
Violence is not the answer. Boycott is the answer. This one's easy. There is ample evidence that Altman is a dishonest person with inordinate power that we should not trust. But the way forward is to get the board to remove him, not to throw bombs at his house. Humanity must take the high road. The way to take the high road is to stop using his products, in protest of his implied openness to mass surveillance, his mass IP theft, and his company's opposition to liability for their actions. When people stop using ChatGPT, Altman will have to go; it's as simple as that. Quite possibly the COO or CFO will step in, and we (and OpenAI itself) will all be better off.

Quoting Dean W. Ball (@deanwball): The guy who allegedly threw a Molotov cocktail through Sam Altman's window seems to have been an adherent of pause/stop AI. I am entirely unsurprised and have been warning about this for a long time now. I am fine with people advocating for their preferred policies; if that includes a "pause" on AI development, so be it, even if I disagree strongly. But the obvious reality is that the rhetoric of this community (which, to be *extremely clear*, is a very small and non-representative subset of the AI safety community) is closer to ecoterrorism than it is to a more typical activist policy effort. Every time I have written about existential risk in recent months, I have been called a mass murderer. People with ⏹️ and ⏸️ in their handles confidently tell me that I am murdering my own baby boy and every other child on the planet. Another prominent one of these people has called me a traitor to America. I only use my own examples because I know them; this rhetoric is representative of how this fringe of the AI safety world communicates with everyone. The rhetoric of the pause/stop crowd is out of control, and it has gotten worse with time. This rhetoric always had the potential to cause violence, and that now seems to be no longer hypothetical.
— https://nitter.net/deanwball/status/2042782724440612952#m
→ View original post on X — @garymarcus, 2026-04-11 14:31 UTC