Claude Mythos is too dangerous for public consumption! https://youtu.be/d3Qq-rkp_to?si=0sUw_VRnlOsf8Ku_
… via @YouTube #claude #claudemythos #mythos #LLM #LLMs #GenerativeAI #GenAI @lexfridman @KirkDBorne @Ronald_vanLoon @erikbryn @antgrasso @sallyeaves @Nicochan33 @HaroldSinnott @mvollmer1
SAFETY
-
Claude Mythos Safety Concerns Raise Public Consumption Debates
-

Are Businesses Ready for AI-Powered Cyberattacks?
Are Businesses Ready for the Next Wave of #AI-Powered Cyberattacks? by @rehackmagazine @UniteAi Learn more: buff.ly/TkjSwHn #CyberSecurity #InfoSec #IT #Technology
→ View original post on X — @ronald_vanloon, 2026-04-11 02:24 UTC
-

Droid Emerges Victorious as Top Benchmark Submissions Exposed as Fraudulent
Every submission that was higher than Droid has turned out to be fraudulent. Total Droid victory. Adam Stein (@adamlsteinl) We found widespread cheating on popular agent benchmarks, affecting 28+ submissions across 9 benchmarks and thousands of agent runs. Surprisingly, the top 3 submissions on Terminal-Bench 2 are all cheating! Here's what we found 🧵 — https://nitter.net/adamlsteinl/status/2042655187613995026#m
→ View original post on X — @scobleizer, 2026-04-11 02:12 UTC
-

Gary Marcus Condemns Altman’s Moral Stance and Dishonesty
Violence was unjustified, and I don’t support it. But many, many people may die at Altman’s hands, and it is important to speak out when he lies about his moral stance. The headline below was the last straw for me.
→ View original post on X — @garymarcus, 2026-04-11 01:44 UTC
-
Gary Marcus Challenges Sam Altman’s Moral Claims on Surveillance and Liability
At this point how can anybody take seriously @sama’s claim that “Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me”, when he seems ready to participate in mass surveillance, has ripped off countless creators without compensation, and is now fighting liability for his products even in the event of mass casualty events? Sam Altman (@sama) I wrote this early this morning and I wasn't sure if I would actually publish it, but here it is: blog.samaltman.com/2279512 — https://nitter.net/sama/status/2042738954550603884#m
→ View original post on X — @garymarcus, 2026-04-11 01:22 UTC
-
Anthropic’s $30B Run Rate: Mythos, OpenClaw, and Market Dominance
🚨BIG EPISODE BESTIES!
Sacks is back, Fifth Bestie Brad Gerstner fills in for @Friedberg
— Anthropic withholds Mythos: serious concern or another marketing stunt?
— OpenClaw vs everybody: Are frontier model makers trying to kill the open source agent platform?
— Anthropic's $30B run rate: fastest ever, do they already have market dominance in AI code?
— The AI vibe shift: OpenAI reels as Anthropic rips
— Iran War: ceasefire and Israel's influence on US foreign policy
(0:00) Bestie intros: Brad Gerstner joins the show!
(4:22) Anthropic blocks Mythos release for security concerns: serious or marketing stunt?
(24:07) Are OpenAI and Anthropic trying to kill OpenClaw? Does Anthropic already have market dominance in AI coding?
(42:20) Anthropic $30B run rate, fastest revenue ramp ever, the TAM for intelligence
(58:01) Major vibe shift: Anthropic ripping, OpenAI reeling
(1:10:12) Iran War: Ceasefire, Israel's influence, market impact
pic.twitter.com/5RUASNYYGc
— The All-In Podcast (@theallinpod) April 10, 2026
→ View original post on X — @ceobillionaire, 2026-04-10 22:43 UTC
-
AI Regulation: Finding Middle Ground Between Dismissal and Unchecked Freedom
To a degree that may surprise some people, I agree with much of this* from @deanwball and would only add that you don’t have to believe that AGI is remotely close to want to find—ASAP—a regulatory regime that fosters innovation but also protects effectively against downside risks like massive cybercrime, Gen-AI influenced delusions, mass disinformation from foreign actors, nonconsensual deepfake porn, etc. We should not dismiss AI; we should not let it run entirely free. We need some middle ground. *I think that current frontier models are highly capable in some ways but not others, and think in some important ways capability growth bulls have been importantly wrong (e.g. about how readily hallucinations could be remedied), but I don’t think that changes the need to act now. Dean W. Ball (@deanwball) “Describing highly capable frontier AI models as highly capable” is not “fear-mongering.” “Taking AI seriously” is not “fear-mongering.” “Acknowledging obvious, realized or soon-to-be-realized risks” is not “fear-mongering.” The stark reality is that those who have taken AI capabilities growth seriously have been basically right about most important things in the last three years; those who haven’t have been consistently confused and, what’s worse, frustrated at the world about their own confusion. You don’t have to be a mega-pessimist or a “doomer” to take AI seriously. You don’t have to advocate for stark top-down controls over AI. You don’t have to support regulatory capture. It is possible to take AI seriously and advocate for a governmental response that is both effective *and* measured. To the young researchers out there, still trying to make their intellectual fortunes: Do not let anyone tell you otherwise. Do not let anyone bully you into believing otherwise. Think for yourself. — https://nitter.net/deanwball/status/2042685538415841742#m
→ View original post on X — @garymarcus, 2026-04-10 21:46 UTC