AI Dynamics

Global AI News Aggregator

@bobgourley

  • AGI Is Here, But Distribution Remains Unequal Globally

    AGI Is Here, It Is Just Not Evenly Distributed clawstreetjournal.github.io/…

    → View original post on X — @bobgourley, 2026-04-06 06:46 UTC

  • Google Gemma 4 Release Cancels Evening Plans

    Ok there goes my plans for this evening. Google Gemma (@googlegemma) Meet Gemma 4! Purpose-built for advanced reasoning and agentic workflows on the hardware you own, and released under an Apache 2.0 license. We listened to invaluable community feedback in developing these models. Here is what makes Gemma 4 our most capable open models yet: 👇 — https://nitter.net/googlegemma/status/2039736504822763534#m

    → View original post on X — @bobgourley, 2026-04-02 21:42 UTC

  • AetheroSpace Launches Phobos Satellite for Orbital Intelligence

    Congrats to our friends @AetheroSpace on their (literal) launch! 🚀🛰️ Edward (@somefoundersalt) Proud to announce that Phobos, the second @AetheroSpace satellite, was successfully launched to orbit earlier this morning. We’re partnered with @BoozAllen on this mission to demonstrate capabilities for adaptive event detection using high-fidelity Earth observation data collected on orbit. This will enable satellites to achieve faster, smarter decision-making on orbit, and is a major step towards building the orbital intelligence layer for defense needs! — https://nitter.net/somefoundersalt/status/2038788178002522343#m

    → View original post on X — @bobgourley, 2026-03-31 05:05 UTC

  • Aethero’s Phobos satellite launches with adaptive event detection

    Proud to announce that Phobos, the second @AetheroSpace satellite, was successfully launched to orbit earlier this morning. We’re partnered with @BoozAllen on this mission to demonstrate capabilities for adaptive event detection using high-fidelity Earth observation data collected on orbit. This will enable satellites to achieve faster, smarter decision-making on orbit, and is a major step towards building the orbital intelligence layer for defense needs!

    → View original post on X — @bobgourley, 2026-03-31 01:19 UTC

  • Space Computing: Aethero Launches Second Satellite with Inference Capabilities

    Space needs inference and high-end compute. Edward (@somefoundersalt) It’s less than 24 hours before we launch the second @AetheroSpace satellite to orbit on SpaceX Transporter-16. The satellite will demonstrate our compute service offering by hosting multiple customer software payloads within onboard containers. Onwards and upwards! — https://nitter.net/somefoundersalt/status/2038358052844286336#m

    → View original post on X — @bobgourley, 2026-03-30 01:25 UTC

  • LiteLLM Supply Chain Attack Compromises Millions of AI Credentials

    Someone just poisoned the Python package that manages AI API keys for NASA, Netflix, Stripe, and NVIDIA.. 97 million downloads a month.. and a simple pip install was enough to steal everything on your machine.

    The attacker picked the one package whose entire job is holding every AI credential in the organization in one place. OpenAI keys, Anthropic keys, Google keys, Amazon keys… all routed through one proxy. All compromised at once.

    The poisoned version was published straight to PyPI.. no code on GitHub.. no release tag.. no review. Just a file that Python runs automatically on startup. You didn’t need to import it. You didn’t need to call it. The malware fired the second the package existed on your machine.

    The attacker vibe coded it… the malware was so sloppy it crashed computers.. used so much RAM a developer noticed their machine dying and investigated. They found LiteLLM had been pulled in through a Cursor MCP plugin they didn’t even know they had. That crash is the only reason thousands of companies aren’t fully exfiltrated right now. If the code had been cleaner nobody notices for weeks. Maybe months.

    The attack chain is the part that gets worse every sentence. TeamPCP compromised Trivy first. A security scanning tool. On March 19. LiteLLM used Trivy in its own CI pipeline… so the credentials stolen from the SECURITY product were used to hijack the AI product that holds all your other credentials. Then they hit GitHub Actions. Then Docker Hub. Then npm. Then Open VSX. Five package ecosystems in two weeks. Each breach giving them the credentials to unlock the next one.

    The payload was three stages.. harvest every SSH key, cloud token, Kubernetes secret, crypto wallet, and .env file on the machine.. deploy privileged containers across every node in the cluster.. install a persistent backdoor waiting for new instructions.
    TeamPCP posted on Telegram after: “Many of your favourite security tools and open-source projects will be targeted in the months to come.. stay tuned.”

    Every AI agent, copilot, and internal tool your company shipped this year runs on hundreds of packages exactly like this one… nobody chose to install LiteLLM on that developer’s machine. It came in as a dependency of a dependency of a plugin. One compromised maintainer account turned the entire trust chain into a credential harvesting operation across thousands of production environments in hours. The companies deploying AI the fastest right now have the least visibility into what’s underneath it.

    Andrej Karpathy (@karpathy) Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords. LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour. The attack had a bug which led to its discovery – Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack it could have gone undetected for many days or weeks.

    Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible. — https://nitter.net/karpathy/status/2036487306585268612#m

    → View original post on X — @bobgourley, 2026-03-25 03:56 UTC
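    The "dependency of a dependency of a plugin" route described above can be audited after the fact from installed package metadata. A minimal sketch using only the Python standard library; the `reverse_deps` helper name and the example query are my own illustration, not from the posts:

    ```python
    # Sketch: list which installed distributions declare a given package as a
    # requirement, i.e. the route by which something like litellm can reach a
    # machine without anyone installing it directly. Standard library only.
    import re
    from importlib import metadata

    def reverse_deps(target: str) -> list[str]:
        """Return names of installed distributions that require `target`."""
        target = target.lower().replace("_", "-")
        dependents = set()
        for dist in metadata.distributions():
            for req in dist.requires or []:
                # Requirement strings look like "litellm>=1.64.0; extra == 'x'";
                # the bare name is everything before the first specifier character.
                name = re.split(r"[ ;\[<>=!~]", req, maxsplit=1)[0]
                if name.lower().replace("_", "-") == target:
                    dependents.add(dist.metadata["Name"])
                    break
        return sorted(dependents)

    if __name__ == "__main__":
        # Who in this environment would pull in litellm if it were poisoned?
        print(reverse_deps("litellm"))
    ```

    This only inspects declared metadata of already-installed packages, so it cannot stop a malicious install script from running; it answers the "how did this even get here" question that the crashed developer had to answer by hand.
    
    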

  • LeWorldModel: LeCun’s breakthrough in stable world model training

    🚨 Holy shit… LeCun's team just cracked world models wide open. Everyone's obsessing over the next Claude update. Meanwhile Yann LeCun quietly dropped a paper that could matter way more long term.

    It's called LeWorldModel. And to understand why it's a big deal, you need to understand the difference between what LLMs do and what this does. LLMs predict the next word. That's it. They're incredibly good at language. But they don't understand reality. They can write about a ball bouncing off a wall. They can't predict where it lands. World models predict what happens next in the physical world. Objects moving, colliding, falling. That's the foundation for robots that plan, self-driving cars that simulate scenarios, any AI that needs to act in reality instead of just talking about it.

    The problem? World models kept collapsing. The model would cheat by mapping every input to the same output. Like a weather app that predicts "sunny" every single day. Technically it's predicting. It's just useless. And fixing this required 6+ loss hyperparameters, frozen pre-trained encoders, stop-gradient hacks, exponential moving averages. A house of cards just to keep the thing from breaking.

    LeCun's team (Mila, NYU, Samsung SAIL, Brown) threw all of that out. LeWorldModel uses just 2 loss terms. A prediction loss and a regularizer called SIGReg that forces representations to stay diverse instead of collapsing into garbage. 6 hyperparameters reduced to 1. The simplicity IS the breakthrough.

    The numbers: 15M parameters. Trains on a single GPU in a few hours. Plans up to 48x faster than foundation-model-based world models. Uses roughly 200x fewer tokens than alternatives. Competitive across 2D and 3D control tasks. This isn't a supercomputer experiment. You could run this on your own hardware.

    LeCun has been pushing JEPA as the architecture for real AI since 2022. The criticism was always the same: "sounds nice, doesn't train stably." LeWorldModel just removed that objection. Small model. Stable training. No hacks. No frozen encoders. No collapse.

    Two AI futures are competing right now. Path 1: bigger LLMs, more text, more compute. Path 2: world models that learn physics from raw pixels and plan in real time. LeWorldModel is the strongest signal yet that Path 2 is real, getting cheaper, and closing in fast.

    → View original post on X — @bobgourley, 2026-03-24 20:54 UTC
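    The two-term objective described above (a prediction loss plus an anti-collapse regularizer) can be illustrated with a toy. This is not SIGReg itself, whose exact form the post does not give; a simple variance hinge stands in for the regularizer, and all names and shapes here are invented for the example:

    ```python
    # Toy illustration of a two-term world-model objective: prediction error
    # plus a regularizer that punishes collapsed (constant) representations.
    # The variance hinge below is a stand-in for SIGReg, not the real thing.
    import numpy as np

    def world_model_loss(pred, target, reg_weight=1.0, eps=1e-4):
        """pred, target: (batch, dim) arrays of predicted/true embeddings."""
        prediction_loss = np.mean((pred - target) ** 2)
        # Penalize embedding dimensions whose std across the batch falls
        # below 1.0: a collapsed model (same output for every input) gets
        # the maximum penalty, a diverse one gets almost none.
        std = np.sqrt(pred.var(axis=0) + eps)
        anti_collapse = np.mean(np.maximum(0.0, 1.0 - std))
        return prediction_loss + reg_weight * anti_collapse

    rng = np.random.default_rng(0)
    target = rng.normal(size=(64, 16))
    diverse = rng.normal(size=(64, 16))    # healthy, spread-out embeddings
    collapsed = np.zeros((64, 16))         # every input mapped to one point

    # The collapsed batch incurs a far larger anti-collapse penalty.
    print(world_model_loss(diverse, target))
    print(world_model_loss(collapsed, target))
    ```

    The point of the toy is the failure mode it encodes: without the second term, the constant predictor can look fine on prediction error alone, which is exactly the "weather app that always says sunny" collapse the post describes.
    
    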

  • Gill Verdon’s Peer-Reviewed Research in Applied Physics Published

    Has been awesome watching @GillVerd through the last couple of years. He says what he is going to do and then does what he said he would do. And super awesome seeing the peer-reviewed research published in a prestigious applied-physics journal. Extropic (@extropic) If you are attending @APSphysics March meeting, come learn more about thermodynamic computing! Our work on taming non-equilibrium thermal electron fluctuations in silicon is now accepted in Physical Review Applied. Read more here: journals.aps.org/prapplied/a… — https://nitter.net/extropic/status/2034792787682738624#m

    → View original post on X — @bobgourley, 2026-03-23 05:02 UTC

  • Interactive AI Future Scenarios Dashboard by Bob Gourley

    Are you tired of just listening to others tell you what AI will bring us in the future? The loudest voices are the doomers, who seem to think our only choice is to all die sooner or, if we do things their way, maybe we can all die later. What if there were a way for you to run your own scenarios based on your own inputs? I built a dashboard that will let you do just that. For me, every scenario I run says the future is going to be bright, but some choices make it brighter than others. But try it yourself and form your own opinions. open.substack.com/pub/bobgou…

    → View original post on X — @bobgourley, 2026-03-21 17:14 UTC

  • Palantir Launches American Tech Fellowship-Mobilize Program

    Ever since my book Mobilize launched earlier this week, I’ve been flooded with messages from people asking how they can help save the American industrial base. Now we’re launching a new fellowship to connect patriots to the movement:

    Are you a veteran with an active security clearance looking for a new mission? A cleared, tech-savvy civilian who wants to do more than ride a desk? Palantir wants YOU for the American Tech Fellowship-Mobilize.

    We started ATF last year to identify and train elite American talent to revitalize our country. Now we’re launching a new, accelerated ATF cohort (ATF-Mobilize) to teach America’s cleared workforce how to wield industry-leading software to reboot the defense industrial base.

    ATF-Mobilize fellows will participate in eight weeks of live, virtual training on Palantir Foundry and AIP, guided by domain experts from Palantir and our partner, Ontologize. They will learn by doing, building custom tools solo and with their peers. Graduates will gain mission-critical skills and access to a growing alumni network. They will also be considered for jobs at Palantir and our customers supporting urgent missions across the defense industrial base.

    ATF-Mobilize is your chance to deploy from your couch to a job in the engine rooms of American power. We want the best of the best. We want heretical heroes. Don’t let this opportunity pass you by. Applications are live now. Training begins April 28. Mobilize is a movement. Move out with us: mobilizebook.com/#recruiting

    → View original post on X — @bobgourley, 2026-03-20 19:00 UTC