The pricing tiers for AGI are something like (1) $20/month, (2) $200/day = ~$75,000/year, (3) $1,000/day = ~$350,000/year, and (4) ~$10 billion. For now.
→ View original post on X — @ceobillionaire, 2026-04-07 23:39 UTC
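The annualized figures in the post can be sanity-checked with one line of arithmetic; note the post rounds loosely (365 × $200/day is $73,000, quoted as ~$75,000, and 365 × $1,000/day is $365,000, quoted as ~$350,000). A minimal sketch, with tier names invented for illustration:

```python
# Annualize the quoted per-day tiers (365 days/year).
# The tier labels here are placeholders; only the rates come from the post.
tiers_per_day = {"tier 2": 200, "tier 3": 1_000}
tiers_per_year = {name: rate * 365 for name, rate in tiers_per_day.items()}
print(tiers_per_year)  # {'tier 2': 73000, 'tier 3': 365000}
```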
Global AI News Aggregator
I think we are close to the finish line. One way or the other.

Join the ARC Prize team — help us build ARC-AGI-4 and ARC-AGI-5.

Quoted: ARC Prize (@arcprize) — Platform Engineer – Benchmark Lead. ARC Prize Foundation is hiring a senior engineer to build our benchmark platform:
* Expand ARC-AGI-3
* Own ARC-AGI-4
* Lay the foundations for ARC-AGI-5
Come build the benchmark that defines progress toward AGI. $7.5K referral bonus — https://nitter.net/arcprize/status/2041626929380626530#m
→ View original post on X — @ceobillionaire, 2026-04-07 21:51 UTC
Just got access to Claude Mythos… and ughhhhhhhhh, this is AGI. It was the first time a model one-shotted a 10/25G Ethernet MAC/PCS; it even knew to select the right line rate and data width for lower latency. That alone is something that would take a really skilled digital designer with prior experience 3-6 months to pull off.

But it didn't just do that. I then told it to make the MAC fully cut-through and to forward downstream only IP addresses within a certain range, and it one-shotted that instantly too, which blew me away.

Finally, I thought, OK, let me trip it up, so I said: now do a 50G MAC. Without me telling it, it knew to add another GT transceiver, and it even added alignment markers and FEC correctly. 💀💀💀

It's passing all the tests I have, so I'm going to flash the board and see if it actually works on hardware now…
→ View original post on X — @ceobillionaire, 2026-04-07 21:00 UTC
If you think about it, Anthropic essentially now has a master key to just about any software in the world. In some ways, they now have more power than governments.
This is absolutely fucking terrifying. Anthropic's rumored Mythos model is real. And it's so powerful that they can't release it to the public. We're beyond benchmarks now. This model, in the wrong hands, is a cyberweapon capable of mass destruction.
Which side are you on? Choose: the side of life, or the side of death?
If you had AGI would you release it to the world? I wouldn’t. I would fix the bugs in the world first. This technology in the wrong hands would harm us all. In good hands it will help all. Anthropic (@AnthropicAI) Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing — https://nitter.net/AnthropicAI/status/2041578392852517128#m
→ View original post on X — @scobleizer, 2026-04-07 20:23 UTC


So maybe OpenAI *really* figured out superintelligence. In a way, Anthropic did, right? nitter.net/kimmonismus/status/204…

Quoted: Chubby♨️ (@kimmonismus) — Looks like OpenAI reached superintelligence. OpenAI: "Now, we're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI."

OpenAI just published a 13-page policy blueprint for the "Intelligence Age", proposing a Public Wealth Fund, 32-hour workweek pilots, portable benefits, a formal "Right to AI," and tax reforms to offset shrinking payroll revenue as automation scales. The document frames superintelligence not as a distant scenario *but as an active transition requiring New Deal-level ambition*: new safety nets, containment playbooks for dangerous models, and international coordination modeled on aviation safety institutions.

Here are OpenAI's suggestions (tl;dr):

Open Economy:
- Give workers a formal voice in AI deployment decisions
- Microgrants and "startup-in-a-box" for AI-native entrepreneurs
- Treat AI access as basic infrastructure (like electricity)
- Shift tax base from payroll toward capital gains and corporate income
- Public Wealth Fund: every citizen gets a stake in AI growth
- Fast-track energy grid expansion via public-private partnerships
- 32-hour workweek pilots, better benefits from productivity gains
- Auto-scaling safety nets triggered by displacement metrics
- Portable benefits untied from employers
- Invest in the care economy as a transition path for displaced workers
- Distributed AI-enabled labs to accelerate scientific discovery

Resilient Society:
- Safety tools for cyber, bio, and large-scale risks
- AI trust stack: provenance, verification, audit logs
- Competitive auditing market for frontier models
- Containment playbooks for dangerous released models
- Frontier AI companies adopt Public Benefit Corporation structures
- Codified rules and auditing for government AI use
- Democratic public input on AI alignment standards
- Mandatory incident and near-miss reporting
- International AI safety network for joint evaluations and crisis coordination

Notably, OpenAI calls for stricter controls only on a narrow set of frontier models while keeping the broader ecosystem open, a clear attempt to position regulation as targeted, not industry-wide. They're backing it with up to $100K in fellowships and $1M in API credits for policy research, plus a new DC workshop opening in May. — https://nitter.net/kimmonismus/status/2041130939175284910#m
→ View original post on X — @kimmonismus, 2026-04-07 19:04 UTC
Yeah. It is interesting to have a football stadium full of smart people to study and build things with.