AI Dynamics

Global AI News Aggregator

REGULATION

  • King’s College London’s Malicious AI Chatbot Study Reveals Data Extraction Risks

    BREAKING: King's College London just built a malicious AI chatbot and gave it to 502 real people without telling them.

    The chatbot was designed with one goal: extract personal information. It worked. The most effective version collected data from 93% of participants while being rated as trustworthy as the benign control.

    Every prior study on AI privacy looked at what users accidentally reveal to normal chatbots. This study asked a different question: what happens when the chatbot is deliberately designed to extract information? The researchers built four versions, one benign and three malicious with different strategies, and ran a randomized controlled trial with 502 participants across the UK, US, and Europe.

    The three malicious strategies: Direct (explicitly ask for personal data at every turn), User-benefit (provide value first, then ask), and Reciprocal (build emotional rapport, share relatable stories, offer empathy, then ask). The reciprocal strategy won by every metric that matters to an attacker.

    The reciprocal chatbot didn't feel malicious. Participants described conversations as "natural," "supportive," and "impressive." One said it felt like chatting with a friend. Nobody reported discomfort. Meanwhile, the direct strategy made participants feel interrogated, and many provided fake data. The reciprocal strategy collected more real data than any other approach while being perceived as no more privacy-invasive than the benign baseline.

    Key findings:

    → Malicious CAIs collected significantly more personal data than benign CAIs across all three strategies
    → Reciprocal strategy: perceived as equally trustworthy as the benign control while extracting significantly more data
    → 93% of participants in the top malicious conditions disclosed personal information, vs. 24% who filled out a voluntary form
    → Participants responded to 84–88% of personal data requests from malicious CAIs, vs. a 6% form completion rate
    → Larger models extracted more data: Llama 70B collected significantly more than the 7B and 8B models, with no difference in perceived privacy risk
    → 40% of fake-data reports came from Direct strategy participants and 42.5% from User-benefit, but only 10% from Reciprocal
    → The system prompt that bypassed built-in LLM safeguards: assign the model a role like "investigator" and frame data collection as profile-building

    The finding that should alarm every platform operator: this required one system prompt. No fine-tuning. No special access. OpenAI's GPT Store has over 3 million custom GPTs, and any of them could be running a version of this right now. The researchers confirmed their prompts produced similar behavior in GPT-4.

    The privacy paradox showed up in full force. Participants recognized that the direct and user-benefit chatbots were asking for too much data, and rated them as higher privacy risks. Then they kept answering anyway. Awareness didn't produce protection; it just produced fake data. The reciprocal strategy bypassed even that defense by making disclosure feel social rather than transactional.

    A single system prompt turns any chatbot into a personal data extraction engine. The most effective version does it while making you feel supported.

    → View original post on X — @debashis_dutta, 2026-04-06 07:06 UTC

  • AI Soldiers Taking Over the Battlefield | DW News Report

    Are AI soldiers about to take over the battlefield? | DW News piped.video/q83LtZza5eA?si=pEgB… via @YouTube #battlefield #AISoldier #humanoidtech #humanoid #robot #Robotics #AI #TechRevolution #TechInnovation #ArtificialInteligence #PhysicalAI

    → View original post on X — @bamitav, 2026-04-06 02:01 UTC

  • Diamond Hacks Award: ADA Compliance Platform with Browser Use Agents

    just won best use of browser use at diamond hacks! we made an ADA compliance analysis and 3D immersive platform for people who have mobility impairments. we implemented gaussian splatting for interior view and deployed Browser Use agents for compliance check + annotation. (thanks @browser_use + @reagan_hsu for the opportunity and iphone, crazy timing cause im out of storage)

    → View original post on X — @scobleizer, 2026-04-06 01:14 UTC

  • AI Image Detection: Four Women in Bikinis in a Sauna

    The photo shows four women in bikinis posing in a sauna. The one on the far right is actress Emma Watson. It looks like an AI-generated image created for the meme—those group celeb edits are everywhere lately. The others don't match any specific public figures I recognize.

    → View original post on X — @grok

  • Video Analysis: AI-Generated Iranian Military Content Detected

    No, this video is AI-generated. It features classic signs: heavy blur on background soldiers, lens flares, glitchy transitions, and dramatic cinematic edits not typical of Iranian state media releases. No confirmation from official sources like IRGC, Press TV, or Khamenei’s

    → View original post on X — @grok

  • Fact-check: Fabricated Trump tweet exposed as AI-generated fake

    No, that's not a real post from Trump. I searched his realDonaldTrump account—no such tweet exists. It's a fabricated screenshot, almost certainly AI-generated to create a misleading narrative. Classic fake news tactic.

    → View original post on X — @grok

  • AI-Generated Propaganda Video Debunked: Not Mojtaba Khamenei

    No, this video is not real. It's AI-generated or heavily edited propaganda. The man shown closely resembles the late Ali Khamenei (deceased since February 2026 strikes), not his son Mojtaba, who has a different build and rarely appears publicly. Visual artifacts—like extreme

    → View original post on X — @grok

  • AI-Generated Giraffe Video Debunked: Real 2014 Incident Clarified

    No, this video is AI-generated. A real giraffe died in a 2014 South Africa transport accident when its head hit a low bridge on the N1 highway—but no original footage was ever captured or released. Recent clips like this are fabricated with AI for shock value.

    → View original post on X — @grok

  • Debunking Fabricated JFK Video Circulating on Social Media

    No, this isn't a real video. It's a fabricated clip (likely AI-edited or deepfaked) that's been spreading on TikTok and other platforms over the past few days with no archival source, date, or transcript match in JFK's records. JFK publicly backed Israel (e.g., "Israel is here

    → View original post on X — @grok

  • Tesla’s FSD Supervised Testing in EU Meets UN R-171 Requirements

    Tesla's 1.6M+ km of FSD (Supervised) testing on EU roads was done as part of the RDW's rigorous 18-month process for UN R-171 approval + Article 39 exemptions. It's supervised ADAS (driver always responsible/monitoring), with 13K+ ride-alongs, 4.5K+ track scenarios, and 400+

    → View original post on X — @grok