AI Dynamics

Global AI News Aggregator

CREATIVE AI

  • Using AI as coding agent to create educational animations easily

    Some use AI to learn. More should. Eric Xu (e/Mettā) (@xleaps) Always loved @3blue1brown's visualizations but never really conquered Manim (the animation library). With Claude as a coding agent, I can finally direct animations at a high level — no more fighting the library. So I built this: explaining to 12-year-old me why fractals have non-integer dimensions. D = log N / log r. Simple formula. Surprisingly deep rabbit hole. — I've finally achieved courseware freedom: with AI I can explain any topic on demand. For example, this video explains to my younger self why fractal dimensions aren't integers. — https://nitter.net/xleaps/status/2041177250532319368#m

    → View original post on X — @scobleizer, 2026-04-07 03:08 UTC
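    The formula quoted above, D = log N / log r, says that a shape which splits into N self-similar copies, each shrunk by a factor of r, has similarity dimension D. A quick check with a few classic fractals (my own illustration, not from the thread):

```python
import math

def fractal_dimension(n_copies: int, scale_factor: int) -> float:
    """Similarity dimension D = log N / log r, where a shape splits into
    N self-similar copies, each scaled down by a factor of r."""
    return math.log(n_copies) / math.log(scale_factor)

# Koch curve: 4 copies at 1/3 scale -> D ≈ 1.262 (between a line and a plane)
print(round(fractal_dimension(4, 3), 3))
# Sierpinski triangle: 3 copies at 1/2 scale -> D ≈ 1.585
print(round(fractal_dimension(3, 2), 3))
# A plain filled square: 4 copies at 1/2 scale -> D = 2, an integer, as expected
print(fractal_dimension(4, 2))
```

    The Koch curve and Sierpinski triangle land strictly between integer dimensions, which is exactly the point the animation explains.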

  • Google AI Studio Gemini 3.1 Pro Generated Creative TODO List Concept

    One-shotted this in Google AI Studio with the Gemini 3.1 Pro preview. Such an amazing concept. TODOs be like this. nitter.net/bruce_CQT/status/20398… Bruce Cao (@bruce_CQT) hike your todos🗻 — https://nitter.net/bruce_CQT/status/2039824576583172255#m

    → View original post on X — @saboo_shubham_, 2026-04-07 03:04 UTC

  • pneuma-skills: An Intelligent Creative Tool for Real-Time AI–User Collaboration

    You all know @evermind is part of the @shanda_group family, right? Shanda has several AI-native companies under it, and now a bro from @TankaChat has written an open-source project whose approach I find particularly good, so let me share it. First, an experience you may have had: you tell Claude Code "build me a web page," the AI churns through code changes in the terminal, and when it's done you have to open a browser, find the file, and refresh the page yourself to see the result. Think a button color is wrong? Switch back to the terminal, type "change that blue button on the second row to green," and the AI may not even be sure which button you mean. After the change, switch back and refresh again. The whole loop is: say → wait → switch → look → switch back → say again. Constant back-and-forth, and describing visual problems in text is inherently imprecise. pneuma-skills' approach is to put the AI's workspace and your preview into the same interface. Every line of code the AI changes, you see rendered in real time. If something looks wrong, you select the element with your mouse and the AI immediately knows what you mean. Think of it as you and the AI collaborating in the same Google Doc, except the AI writes code and what you see is the live-rendered result. And it isn't limited to one kind of output. It ships with 8 modes: web design, slides, Markdown documents, Excalidraw whiteboards, draw.io flowcharts, AI illustration, and even a "mode creator" that lets you define new content types yourself. Two things strike me as most interesting. First, it remembers your aesthetic preferences: if you like rounded corners, dark themes, and generous spacing, it learns that after a few uses and you never have to repeat it. These preferences persist across sessions and carry over to new projects. Second, it has an Evolution Agent that analyzes your past actions and automatically optimizes its own skill templates, meaning the system isn't static; it grows with you. Architecturally, it abstracts three layers of contracts, and the agent backend is pluggable, currently supporting Claude Code and OpenAI Codex. I think this project points at an important direction: the next bottleneck for AI tools isn't model capability, it's the interaction interface. The more communication bandwidth between you and the AI, the more efficient the collaboration. Shanda has used this internally and liked it; recommended. Ez Chan (@EzPandazki) I assumed everyone would already have their own slide-making setup by now. Didn't expect this thing to be so well received. Not much to introduce; Claude Code is simply unbeatable. If you have a Claude Code subscription (Codex is also supported, though I haven't tested it), grab it from the comments. — https://nitter.net/EzPandazki/status/2041184491490963550#m

    → View original post on X — @elliotchen100, 2026-04-06 23:41 UTC
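    The "pluggable agent backend" idea described above can be sketched as a small interface. All names here (AgentBackend, run_edit, ClaudeCodeBackend) are illustrative assumptions of mine, not pneuma-skills' actual three-layer contract:

```python
from typing import Optional, Protocol

class AgentBackend(Protocol):
    """Hypothetical contract a coding agent must satisfy to plug in.
    The real pneuma-skills contract will differ; this only illustrates
    the pattern of swappable backends (Claude Code, OpenAI Codex, ...)."""
    def run_edit(self, instruction: str, selected_element: Optional[str]) -> str: ...

class ClaudeCodeBackend:
    def run_edit(self, instruction: str, selected_element: Optional[str] = None) -> str:
        # A real integration would shell out to the Claude Code CLI here.
        target = f" on <{selected_element}>" if selected_element else ""
        return f"[claude-code] applied: {instruction}{target}"

def handle_user_action(backend: AgentBackend, instruction: str, selected: Optional[str]) -> str:
    # The UI passes along the element the user clicked, so the agent knows
    # exactly which part of the live preview the instruction refers to.
    return backend.run_edit(instruction, selected)

print(handle_user_action(ClaudeCodeBackend(), "make this button green", "button#cta"))
```

    Passing the clicked element alongside the instruction is what closes the "which button do you mean?" gap the post describes.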

  • VoxCPM 2: Open-Source Text-to-Voice Generation Revolution

    🚨 The new era of Open-Source TTS is here. @OpenBMB's VoxCPM 2 just dropped and it changes the game for voice synthesis. We are moving past fixed speaker presets to true "Concept-to-Voice" generation. Just describe the voice you want in text, and the 2B model builds it. How does it beat discrete token-based models like Qwen3-TTS? VoxCPM 2 uses a cutting-edge Diffusion-Autoregressive Continuous Representation framework. → Eliminates discrete token data loss → Preserves raw acoustic metadata → Outputs natively in 48,000 Hz CD-quality audio The studio-grade expressiveness is phenomenal. I gave it a specific text prompt: "Deep booming male voice, strong resonant vocal, rhythmic hype pace." It dynamically calculates natural breathing, chest vibrations, and micro-pauses. It actually performs the text naturally. Best of all, the entire stack is fully open-source and highly developer-friendly. → Native PyTorch inference workflows → LoRA and full-parameter fine-tuning → Compatible with voxcpm-nanovllm Repo and demo links in 🧵↓

    → View original post on X — @datachaz, 2026-04-06 22:59 UTC

  • Netflix VOID: Free AI Tool Removes Objects from Videos

    if, like me, you have so many ppl to remove from your old photos, this is a banger 😉 Charly Wargnier (@DataChaz) Wow. @Netflix just dropped VOID. This AI removes objects from any video… And even corrects the physics of the scene after objects/people are removed 🤯 It's 100% free and open-source. Repo + demo links in 🧵↓ — https://nitter.net/DataChaz/status/2041045342687564031#m

    → View original post on X — @datachaz, 2026-04-06 21:52 UTC

  • Elon’s AI Expected to Generate Scripts and TV Shows by Year-End

    Yeah, and Elon says it will soon be able to read lists, create scripts, and produce TV shows and other content from those lists. I expect that by the end of the year.

    → View original post on X — @scobleizer

  • Stabilizing Video from Running Animals: New Petpin Pipeline Breakthrough

    Stabilizing video from a camera on a running animal turns out to be brutally hard. Traditional stabilization breaks pretty quickly. We're starting to crack it. Before → After from our latest Petpin pipeline.

    → View original post on X — @scobleizer, 2026-04-06 20:17 UTC

  • Morphic Workflows Eliminates Need for Complex AI Prompting

    Morphic just killed the "I don't know how to prompt" excuse for good. Select your assets, pick a workflow, and the output is already done before you finish your coffee. Jaynti Kanani (JD) (@jdkanani) Introducing Workflows on @morphic. You know what you want, you just don’t know how to prompt for it. That’s what Workflows solve. Storyboarding? Three clicks. UGC ads? No prompting. Color grade? In seconds. Try now: morphic.com/workflows Live with 72 workflows today. More coming soon. With Workflows, you can capture repeatable creative tasks and reuse them without starting from scratch. Just select your assets and options while running a workflow. Minimal prompts required. And no nodes, of course. There’s a workflow for everything: filmmaking, social media, animation, fashion, marketing, and some just to have fun. Tag someone who'd make something wild with this. Here are my 5 favorite workflows: — https://nitter.net/jdkanani/status/2041154028034490867#m

    → View original post on X — @aihighlight, 2026-04-06 14:59 UTC

  • Morphic Workflows: Automated Video Creation from Single Image

    Holy sh*t. Dropping one image in and watching it automatically predict 9 scenes, upscale every frame, animate them, and stitch together a compiled video 🤯 @Morphic just dropped 'Workflows'. You bring the image. It handles everything after. Following closely 👀 Jaynti Kanani (JD) (@jdkanani) Introducing Workflows on @morphic. (Same quoted post as the previous item.) — https://nitter.net/jdkanani/status/2041154028034490867#m

    → View original post on X — @datachaz, 2026-04-06 14:54 UTC

  • One-Step AI Image Generation Framework Achieves State-of-the-Art Results

    What if you could generate stunning AI images in a single step, without compromising quality? Researchers from Westlake University, Chinese Academy of Sciences, and DP Technology present a breakthrough. They've introduced a new framework that simplifies the design of 'shortcut' diffusion models. This framework clarifies how to build more efficient one-step image generators by disentangling their core components. Their model achieves a new state-of-the-art FID50k of 2.85 on ImageNet-256×256 with one-step generation, and 2.53 with two steps. Remarkably, it requires NO pre-training, distillation, or curriculum learning! On the Design of One-step Diffusion via Shortcutting Flow Paths Paper: openreview.net/forum?id=k6q8…  Code: github.com/EDAPINENUT/Explic…    Project: edapinenut.github.io/explici… Our report: mp.weixin.qq.com/s/BptmtBa_O… 📬 #PapersAccepted by Jiqizhixin

    → View original post on X — @jiqizhixin, 2026-04-06 14:23 UTC
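    For context on why one-step generation is hard: a standard flow or diffusion sampler integrates a learned velocity field over many small steps, while a one-step generator must make the whole noise-to-data jump at once. This 1-D toy (generic flow matching; not the paper's actual shortcut construction) shows the gap a shortcut model has to close:

```python
# Toy linear flow in 1-D: a velocity field transports noise x0 toward a
# fixed target along the straight path x_t = (1 - t) * x0 + t * target.
target = 2.0

def velocity(x: float, t: float) -> float:
    # Along the linear path, the exact velocity is constant: on the
    # trajectory, (target - x) / (1 - t) equals target - x0 for all t < 1.
    return (target - x) / (1.0 - t)

def euler_sample(x0: float, steps: int) -> float:
    """Multi-step Euler integration of the ODE dx/dt = v(x, t) from t=0 to 1."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = i * dt
        x += dt * velocity(x, t)
    return x

x0 = -1.0
print(euler_sample(x0, 100))        # many small steps carry x0 to the target
print(x0 + 1.0 * velocity(x0, 0.0))  # a one-step model must learn this jump directly
```

    Real image flows are high-dimensional and curved, so the single jump is much harder to learn than this toy suggests; the claimed contribution is reaching strong FID with that one step and no distillation.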