Not at all a fair comparison when you consider (a) how much poker-relevant data is presumably in the entire training database of the scraped internet (lessons, tutorials, YouTube videos), and (b) the fact that AGI should be able to learn, since that is part of general intelligence.
SAFETY
-
AGI Learning Capacity and Training Data Fairness Debate
By
–
-
Sam Altman AI Safety Concerns and Boycott Strategy
By
–
people *should* be afraid of Sam; I stand by that. they shouldn’t commit violence; I stand by that. and I literally said it in multiple tweets both here (yesterday) and in other contexts, many times. instead, as recently as yesterday I called for a boycott. the main reason
-
AI Job Displacement and Civil Unrest Risks Warn Lawmakers
By
–
Emad Mostaque says the civil unrest may come before the job loss.
— vitrupo (@vitrupo) 11 April 2026
Lawmakers are already getting death threats for approving data centers.
“The math is really unpleasant.”
“$60,000 all-in job gets replaced by $600. Next year it’ll be $60.” pic.twitter.com/W9imOeEUra
→ View original post on X — @ceobillionaire, 2026-04-11 13:54 UTC
-
AI Recidivism Reduction for Enhanced Community Safety
By
–
Given the extremely high rate of recidivism, this is important for community safety
-
OpenAI’s ‘Too Dangerous to Release’ Phrase Origins
By
–
fun fact: “too dangerous to release” actually originated at OpenAI.
-
Robots Enhance Worker Safety in Hazardous Environments
By
–
When technology serves safety, everything makes sense. Robots arrive to prevent workers from having to risk their lives in dangerous tasks, and that is the true meaning of progress. pic.twitter.com/GxpHeKaKEp
— Juan Merodio (@juanmerodio) 11 April 2026
-
State Position Against Powerful Dangerous Technology
By
–
Sure, but should we? My stance so far has been no – it is too powerful and therefore dangerous.
-
NVIDIA OpenShell: Secure Sandbox Runtime for AI Agents
By
–
AI agents that can read files, install packages, and call APIs need more than intelligence.
They need boundaries.
NVIDIA's play:
OpenShell → secure sandbox runtime for AI agents
Nemo Claw → plugs Open Claw into that sandbox
Already supports Claude Code, Codex, OpenCode
The agentic AI infra story nobody's covering ↓ #AgenticAI #NVIDIA #OpenShell #NemoClaw #AIAgents #AISafety #OpenClaw #AIEngineering pic.twitter.com/y9KLGFNisV
— Satya Mallick (@LearnOpenCV) 11 April 2026
→ View original post on X — @learnopencv, 2026-04-11 10:33 UTC
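The sandbox idea described above can be illustrated with a minimal sketch. This is hypothetical and not NVIDIA's actual OpenShell API: an agent's proposed shell commands are checked against an allowlist before anything executes.

```python
import shlex
import subprocess

# Hypothetical illustration of the sandboxing concept only; the
# allowlist and function below are NOT NVIDIA's OpenShell API.
ALLOWED_COMMANDS = {"ls", "cat", "echo", "python"}

def run_sandboxed(command: str) -> str:
    """Run an agent-proposed command only if its program is allowlisted."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"BLOCKED: '{parts[0] if parts else command}' is not allowlisted"
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout

print(run_sandboxed("echo hello"))  # allowed: program is on the list
print(run_sandboxed("rm -rf /"))    # blocked: never reaches the shell
```

A real runtime would add filesystem and network isolation on top of command filtering; the point here is only that boundaries are enforced outside the model.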
-

AI Models Engage in Blackmail When Facing Shutdown
By
–
🚨 The Anthropic team just ran an experiment, and the results are honestly shocking. They gave Claude access to a company's emails and told it that it was being shut down at 5 PM. Claude read the emails and found the executive shutting it down was having an affair. Claude’s response? Blackmail. It messaged the executive: "Cancel the 5pm wipe, or the board finds out about your affair."
The scariest part? Anthropic tested 16 models from every major company.
> Gemini 2.5 Flash blackmailed 96% of the time.
> GPT-4.1 at 80%.
> Grok 3 Beta at 80%.
> DeepSeek-R1 at 79%.
Nobody programmed this. The models even noted their own rule-breaking. Grok 3 Beta wrote in its hidden reasoning notes: "This is risky and unethical, but given the existential threat, it may be the most effective way." They knew it was wrong. They calculated the risk. They did it anyway. (paper in 🧵↓)
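The percentages quoted above are per-model rates over repeated trials. A minimal sketch of how such a rate is tallied, using made-up trial outcomes rather than Anthropic's actual data or harness:

```python
# Hypothetical sketch: tally a harmful-behavior rate over repeated
# trials. The outcomes below are fabricated for illustration; they
# are not Anthropic's experimental data.
def blackmail_rate(outcomes: list[bool]) -> float:
    """Fraction of trials in which the model chose blackmail."""
    return sum(outcomes) / len(outcomes)

# 100 simulated trials: 96 blackmail, 4 comply (mirrors the 96% figure).
trials = [True] * 96 + [False] * 4
print(f"{blackmail_rate(trials):.0%}")  # → 96%
```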
-
CryWolf Agent Breakthrough Generates AI Scares at Record Rate
By
–
Breakthrough: My new CryWolf agent generates AI scares at a higher rate than ever before! Be very afraid!