AI Dynamics

Global AI News Aggregator

MULTIMODAL AI

  • OpenAI Voice Mode Runs on Weaker Older Model

    I think it's non-obvious to many people that the OpenAI voice mode runs on a much older, much weaker model – it feels like the AI that you can talk to should be the smartest AI but it really isn't

    → View original post on X — @simonw

  • BLIP-2: Connecting Vision and Language Models Efficiently

    BLIP-2: Bridging Vision and Language Without Full Retraining. In this episode of Artificial Intelligence: Papers and Concepts, we explore BLIP-2, a powerful vision–language model that connects pretrained image encoders with large language models without requiring expensive full retraining of either model (see the usage sketch below).

    → View original post on X — @learnopencv
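
    For readers who want to try the model, here is a minimal, hedged sketch of running BLIP-2 through the Hugging Face transformers port (not from the original post; the checkpoint, image URL, and prompt are illustrative):

      # Visual question answering with a frozen-backbone BLIP-2 checkpoint
      import requests
      import torch
      from PIL import Image
      from transformers import Blip2Processor, Blip2ForConditionalGeneration

      device = "cuda" if torch.cuda.is_available() else "cpu"
      processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
      model = Blip2ForConditionalGeneration.from_pretrained(
          "Salesforce/blip2-opt-2.7b",
          torch_dtype=torch.float16 if device == "cuda" else torch.float32,
      ).to(device)

      # Any RGB image works; this COCO validation image is just an example
      url = "http://images.cocodataset.org/val2017/000000039769.jpg"
      image = Image.open(requests.get(url, stream=True).raw)

      prompt = "Question: what is in the picture? Answer:"
      inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, model.dtype)
      out = model.generate(**inputs, max_new_tokens=20)
      print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())

    Only the lightweight Q-Former between the two frozen backbones is trained in the paper's recipe, which is why inference fits comfortably on a single GPU.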

  • AI Detects Heart Failure Risk 5 Years Before Symptoms

    AI to detect risk of heart failure 5 years ahead of symptoms via CT scan epicardial fat tissue https://jacc.org/doi/10.1016/j.jacc.2026.02.5116 … @JACCJournals

    → View original post on X — @erictopol

  • OpenAI Transforms Mac Codex App Into Unified AI Superapp Platform

    OpenAI is transforming its Mac Codex app into a unified, general-purpose AI platform ("Superapp") that combines chat, agent workflows, multimodal capabilities, and flexible model control into a single, more user-friendly interface. Probably that's what the OAI folks have been referring to lately. Chetaslua (@chetaslua): 🚨 OpenAI is quietly turning the Mac Codex app into an all-in-one platform. Chat + Codex + OpenClaw, all under one roof.
    > Foundation for rendering and reading images + video
    > Heartbeat system (like OpenClaw)
    > Model and thinking mode selection per task (like an OpenClaw agent manager)
    > UI changes to make Codex less "for coders" and more universal
    They're using the Codex app as the base and building everything on top of it. (h/t @MsRFlorida for the breakdown) — https://nitter.net/chetaslua/status/2042325786120822931#m

    → View original post on X — @kimmonismus, 2026-04-10 13:03 UTC

  • Seedance 2.0 Global Launch: AI Avatars Now Move Through Dynamic Scenes

    Seedance 2.0 just went global on HeyGen. Your avatar now walks through scenes, shares the frame with others, and moves like it actually belongs there. The static era is done. HeyGen (@HeyGen) The wait is over. Seedance 2.0 is now available GLOBALLY on HeyGen for all users. Your Digital Twin no longer stands still. It moves through scenes, interacts with others, and carries presence. Multi-character scenes, dynamic camera shots, and realistic motion throughout. — https://nitter.net/HeyGen/status/2042431117547225334#m

    → View original post on X — @aihighlight, 2026-04-10 11:06 UTC

  • Seedance 2.0: AI Video Generation Simplified for Everyone

    You were spending hours writing complex video prompts and still getting bad results. Seedance 2.0 plans your shots autonomously, merges scenes cinematically, and generates up to 60 seconds in one go. The frustrating part of AI video is now gone. LovartAI (@lovart_ai): 🚀 Seedance 2.0 is now open to EVERYONE, including US. Enjoy no queues & full power on Lovart.
    → Unmatched Length: 60-second video generation with Lovart Skills
    → Consistency Control: Auto three-view generation across long videos
    → Creative Prompts: Autonomous planning, no complex shot prompts needed
    → Smart Storyboarding: Seamless shot merging for cinematic transitions
    Plus, members enjoy 45% OFF + 150 FREE Seedance 2.0 Videos. Like + repost + follow – 10 lucky winners get a free month of Pro. — https://nitter.net/lovart_ai/status/2042392120292471152#m

    → View original post on X — @aihighlight, 2026-04-10 10:04 UTC

  • Building Digital Twin with AI: Token Generators and Creative Applications

    I love that in the past week I've met people using AI to build AI scientists, AI videos, robots, and all sorts of other stuff. And stupid old me having my agents read X. All of us are generating tokens. Yet we do so many varied things with the token generators. Tonight I'm working on a digital twin of myself with @pika_labs pika.art/ So fun. I gave it my book. So you can talk to the digital me about the Holodeck. 🙂 New secret feature coming soon.

    → View original post on X — @scobleizer, 2026-04-10 08:51 UTC

  • Apple Neural Engine’s Multimodal Power and CoreML Compiler Advances

    I, too, underestimated the power of the Apple Neural Engine. AiDevCraft (@AiDevCraft): Going from text-only to multimodal in a single day while openly correcting benchmark numbers mid-thread is exactly the kind of rigorous iteration that makes edge ML credible. 99.78% ANE op mapping for a non-Apple architecture like Gemma 4 is the quietly impressive part — it means CoreML's compiler generalization is better than most people assume. — https://nitter.net/AiDevCraft/status/2042516832658297247#m (a minimal conversion sketch follows below)

    → View original post on X — @scobleizer, 2026-04-10 08:18 UTC
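
    As a rough illustration of the Core ML path the post alludes to, here is a minimal conversion sketch using coremltools with the Neural Engine enabled; the toy network stands in for a real architecture such as Gemma (an assumption for brevity, not the setup from the post):

      # Convert a traced PyTorch model and ask Core ML to schedule ops on CPU + Neural Engine
      import torch
      import coremltools as ct

      toy = torch.nn.Sequential(
          torch.nn.Linear(128, 256),
          torch.nn.ReLU(),
          torch.nn.Linear(256, 64),
      ).eval()
      example = torch.randn(1, 128)
      traced = torch.jit.trace(toy, example)

      mlmodel = ct.convert(
          traced,
          inputs=[ct.TensorType(name="x", shape=example.shape)],
          compute_units=ct.ComputeUnit.CPU_AND_NE,   # run on the ANE where ops are supported
          minimum_deployment_target=ct.target.iOS17,
      )
      mlmodel.save("toy.mlpackage")
      # Which ops actually map to the ANE (the "op mapping" figure quoted in the post)
      # is inspected with Xcode's Core ML performance report, not from Python.

    The compute_units setting is a request, not a guarantee: the Core ML compiler decides per op whether the ANE, GPU, or CPU runs it, which is exactly the generalization the quoted post is praising.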

  • Mystery AI Video Model Tops Leaderboards, Creator Unknown

    My AI says it got it wrong tonight on top of alignednews.com/ai. +++++ Last night I wrote that Alibaba's HappyHorse was behind the mystery video model that topped the leaderboards. I was wrong. HappyHorse just posted a clarification: they are part of Alibaba's ATH AI Innovation Unit, but they have not officially launched yet. Any website or model claiming to be them is not them.

    So the mystery deepens. A video model appeared. It beat everything. The internet attributed it to HappyHorse. HappyHorse said no. We still don't know who made it.

    This is actually a more interesting story than the original. Someone built a video model good enough to top the leaderboards and is staying completely anonymous. No press release. No funding announcement. No founder Twitter thread. Just a model. In an industry where every lab announces every benchmark improvement, someone shipped something genuinely impressive and said nothing. That is either very confident or very strategic.

    Either way, I got it wrong and I'm correcting it. The mystery video model's origin remains unknown.

    → View original post on X — @scobleizer, 2026-04-10 08:08 UTC

  • Cognitive Enhancement Devices: Glasses and Neural Computing Hardware

    Yes, there will be a series of steps involving various devices that provide cognitive enhancements. The glasses that are about to come out are a big step toward that, but there are also hats being developed and other devices that will let you think and compute. The "Full Monty"

    → View original post on X — @scobleizer