AI Dynamics

Global AI News Aggregator

SOFTWARE

  • Anthropic’s Claude Subscription Terms: Confusing and Unclear

    I don't know what the fuss is about. Anthropic's rules on using subscriptions are very simple:

    Claude Code = OK
    Claude's online platform = OK
    Agent SDK running in personal software = OK… ish?
    Agent SDK running in commercial software = NOT OK
    Claude Code running in CI = ??

    Oh, maybe it's not so simple…

    Agent SDK running in CI = ??
    claude -p running in CI = ??
    claude -p running in personal software = OK
    claude -p running on open source software, but run on my personal computer = ??
    claude -p running on distributed sandboxes, kicked off by me = ??
    Distributing open source software which relies on claude -p, and documenting how to use your subscription with it = ??
    A thousand other edge cases = ??

    Let me be clear: I have never before experienced, from any developer tool, such a frustrating lack of clarity over the basic terms of usage. I personally asked, 3 weeks ago, and have received nothing but delays. The recent @bcherny announcement did absolutely nothing to clarify things. I say this as someone who just released a Claude Code course – my incentives all align with supporting Anthropic.

    Boris Cherny (@bcherny): Yep, working on improving clarity here to make it more explicit — https://nitter.net/bcherny/status/2040207998807908432#m

    → View original post on X — @jeremyphoward, 2026-04-04 21:06 UTC

  • MCP Can Work Well if Built Light and Purpose-Driven

    I changed my mind. MCP can be wonderful. It just needs to be light and purpose-built and engineered, instead of a shitty shim over your existing REST API.

    braai engineer (@BraaiEngineer): MCP was a very costly mistake, but useful as an adoption vector (for now). Should have repurposed gRPC. We could have saved 6-24 months. nitter.net/braaiengineer/status/1… — https://nitter.net/BraaiEngineer/status/2040514796655632857#m
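
    To make "light and purpose-built" concrete: MCP is JSON-RPC under the hood, and the spirit of the post is to expose a handful of task-shaped tools rather than mirror every REST endpoint. A toy dispatcher sketch (hypothetical tool names; a real server would use an MCP SDK and the full MCP handshake):

```python
import json

# Toy illustration of "purpose-built" tool design: a few tools shaped
# around user intents, not a 1:1 shim over a REST API. Tool names and
# payloads here are invented for illustration.
TOOLS = {
    "search_orders": lambda args: {"matches": [f"order for {args['customer']}"]},
    "refund_order": lambda args: {"status": "refunded", "id": args["order_id"]},
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request to a registered tool."""
    req = json.loads(request_json)
    tool = TOOLS.get(req["method"])
    if tool is None:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown tool"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": tool(req.get("params", {}))})

print(handle('{"jsonrpc": "2.0", "id": 1, "method": "refund_order", '
             '"params": {"order_id": "A7"}}'))
```

    The design choice being argued for: each tool returns exactly what the agent needs for one task, so the model never has to stitch together raw REST responses.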

    → View original post on X — @jiquanngiam, 2026-04-04 19:46 UTC

  • Nvidia Eyes Model Serving at 10,000-20,000 Tokens Per Second

    Nvidia's Chief Scientist Bill Dally says there's a path to serving relatively large models at 10,000 to 20,000 tokens per user per second. For context, Opus 4.6 is ~43 and Grok 4.2 Beta is ~251 tokens/user/s 🤯
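
    Back-of-envelope arithmetic puts those rates in perspective: the time to stream a 1,000-token response at each quoted per-user decode rate (figures taken from the post):

```python
# Seconds to stream a 1,000-token response at the per-user
# decode rates quoted above.
rates_tok_per_s = {
    "Opus 4.6": 43,
    "Grok 4.2 Beta": 251,
    "Dally target (low)": 10_000,
    "Dally target (high)": 20_000,
}

response_tokens = 1_000
for name, rate in rates_tok_per_s.items():
    print(f"{name}: {response_tokens / rate:.2f} s")
# A ~23 s wait drops to a few hundredths of a second at the target rates.
```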

    → View original post on X — @alexjc, 2026-04-04 17:43 UTC

  • Complete Roadmap for Learning Agentic AI and Full-Stack Intelligence

    Roadmap to learn Agentic AI 🚀

    AI fundamentals
    Python + frameworks
    LLMs
    Agents architecture
    Memory + RAG
    Planning & decision-making
    RL & self-improvement
    Deployment
    Real-world automation

    Agentic AI = full-stack intelligence. Credit: Tiksly #AgenticAI #LLM #RAG #A

    → View original post on X — @ingliguori, 2026-04-04 17:25 UTC

  • Prefill, Decode, and KV Cache in Large Language Models

    From Prompt to Prediction: Understanding Prefill, Decode, and the KV Cache in LLMs machinelearningmastery.com/f…
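
    The core idea behind that prefill/decode split can be sketched in a few lines: attention at each decode step needs the keys and values of all previous tokens, and the KV cache means those are computed once instead of recomputed every step. A toy counter (not a real transformer):

```python
# Toy illustration of why the KV cache matters. fake_kv stands in for
# the per-token key/value projections a real model would compute.

def fake_kv(token):
    return (hash(token) % 100, hash(token) % 97)

def generate(prompt_tokens, n_new, use_cache):
    """Count K/V computations for prefill plus n_new decode steps."""
    kv_computations = 0
    cache = []
    # Prefill: process the whole prompt once, filling the cache.
    for tok in prompt_tokens:
        cache.append(fake_kv(tok)); kv_computations += 1
    generated = []
    for step in range(n_new):
        if not use_cache:
            # No cache: recompute K/V for every token seen so far.
            cache = [fake_kv(t) for t in prompt_tokens + generated]
            kv_computations += len(cache)
        new_tok = f"tok{step}"
        cache.append(fake_kv(new_tok)); kv_computations += 1
        generated.append(new_tok)
    return kv_computations

prompt = ["the"] * 100
print("with cache:   ", generate(prompt, 50, use_cache=True))
print("without cache:", generate(prompt, 50, use_cache=False))
```

    With the cache, cost stays linear in the number of tokens; without it, decode cost grows quadratically with sequence length.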

    → View original post on X — @craigbrownphd, 2026-04-04 15:50 UTC

  • Gemma 4 directs SAM 3 and RF-DETR for local video analysis

    Gemma 4 watches raw video. Understands the scene. Then prompts SAM 3 to segment and RF-DETR to track. One AI directing two others. Fighter jets. Crowds. Aerial defense footage. All three models running locally on a MacBook. No cloud. What scene should I point this at next?

    → View original post on X — @huggingface, 2026-04-04 14:44 UTC

  • Fine-tuning vs Retrieval: Fixing Hallucinations About Company Docs

    If your model is hallucinating about your company docs, fine-tuning is usually not the fix. That’s the trap. A lot of teams see wrong answers about internal files and assume they need to retrain the model. But fine-tuning changes behavior, not factual recall of constantly changing company knowledge. It can help with tone, structure, or broad domain patterns. It is not the best tool for making a model reliably remember your latest return policy, pricing sheet, or product catalog. For that, you usually want retrieval. In other words: fine-tuning teaches patterns, retrieval supplies facts. So if the issue is accuracy on specific documents, give the model better access to the right context instead of trying to bake those facts into its parameters. It is cheaper, easier to update, and much more controllable. Mixing those two up is one of the fastest ways to waste time and budget in AI. Have you seen teams make this mistake already?
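
    The "retrieval supplies facts" half of that argument can be sketched in a few lines: instead of baking facts into weights, look up the most relevant snippet and place it in the prompt. A toy term-overlap retriever over a hypothetical policy corpus (a real system would use embeddings and a vector index):

```python
import re

# Hypothetical internal docs: the kind of constantly changing facts
# the post says retrieval, not fine-tuning, should supply.
docs = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "pricing": "The Pro plan costs $20 per seat per month.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}

def terms(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question):
    """Return the doc sharing the most terms with the question."""
    q = terms(question)
    return max(docs.values(), key=lambda d: len(q & terms(d)))

def build_prompt(question):
    # Ground the model in retrieved context rather than its weights.
    return (f"Answer using only this context:\n{retrieve(question)}\n\n"
            f"Q: {question}")

print(build_prompt("How much does the Pro plan cost per month?"))
```

    Updating the return policy here means editing one string, not retraining a model, which is the cost and controllability argument the post is making.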

    → View original post on X — @whats_ai, 2026-04-04 12:01 UTC

  • Local AI Models: A Comprehensive Testing Guide

    Here's a decent report about all the local models that people should be trying out on various machines.

    → View original post on X — @scobleizer

  • Google Agent Skills: Engineering Best Practices for AI Coding Agents

    Charly Wargnier (@DataChaz): 🚨 You need to see this. @addyosmani from Google just dropped his new Agent Skills and it's incredible. It brings 19 engineering skills + 7 commands to AI coding agents, all inspired by Google best practices 🤯

    AI coding agents are powerful, but left alone, they take shortcuts. They skip specs, tests, and security reviews, optimizing for "done" over "correct." Addy built this to fix that. Each skill encodes the workflows and quality gates that senior engineers actually use: spec before code, test before merge, measure before optimize.

    The full lifecycle is covered:
    → Define – refine ideas, write specs before a single line of code
    → Plan – decompose into small, verifiable tasks
    → Build – incremental implementation, context engineering, clean API design
    → Verify – TDD, browser testing with DevTools, systematic debugging
    → Review – code quality, security hardening, performance optimization
    → Ship – git workflow, CI/CD, ADRs, pre-launch checklists

    Features 7 slash commands (/spec, /plan, /build, /test, /review, /code-simplify, /ship) that map to this lifecycle. It works with: ✦ Claude Code ✦ Cursor ✦ Antigravity ✦ … and any agent accepting Markdown. Baking Google-tier engineering culture (Shift Left, Chesterton's Fence, Hyrum's Law) directly into your agent's step-by-step workflow!

    `npx skills add addyosmani/agent-skills`

    Free and open-source. Repo link in 🧵↓ — https://nitter.net/DataChaz/status/2040357775830814798#m

    → View original post on X — @datachaz, 2026-04-04 09:16 UTC
