Privacy is about being able to navigate my relationships in ways that allow me to live a full and authentic human life: @mer__edith #IndiaTodayConclave26 | @TEE_S_G @Akshita_N @jessica_goel @nagarjund
Brilliant session on our agentic future as @mer__edith unveils some red flags around data, law enforcement and human rights. Privacy deserves more respect than manufactured fear of bad actors. @sardesairajdeep attempts to unravel the @signalapp appeal. IYKYK
There's a toxic culture coming out of the AI industry that keeps trying to get us not to think. The message is everywhere. Don’t read the code, just vibe-code. Don’t try to understand all the text, just let AI summarize it. Don’t bother educating yourself, it’s too late. Don’t worry about the errors. Trust that everything will be fixed in the next version. The theme is the same. Don’t think too hard. Just keep swallowing the slop.
📁 Meredith Whittaker, president of Signal, warns that AI agents embedded in operating systems could undermine privacy. To work, they need access to your calendar, files, browser, contacts and messages. That creates a massive gateway into your digital life. And it can bypass the protections encrypted apps rely on.
“Take back the authority to define who we are from a handful of companies that have… naturalised their right to sort us and order us and tell us our place in the world” @mer__edith yojoflo (@yojoflo) The threat is real, it’s already here… plain speaking @mer__edith and @signalapp are north stars of true DPI. piped.video/5Wi6hse46l0?si=EZ-z… — https://nitter.net/yojoflo/status/2029819679930544572#m
"For a while, the [tech] companies saw the problems and intended to address them… But at some point they realized that the problems aren't so easily solvable. Because they are fundamental." @mer__edith zeit.de/2026/11/meredith-whi… [Translated from German]
Governments worldwide are moving to restrict access to online services based on age. More than 370 scientists have written an open letter calling for a moratorium on age assessment technologies until there is solid evidence on feasibility and impact. A 🧵 csa-scientist-open-letter.or…
Much talk about increasing privacy by creating a layer of anonymity routing between user & AI model provider. But what I truly don't get is this: commodity LLMs are already exceptionally good at identifying patterns & de-anonymizing. Why should we think they aren't capable of quickly relinking us? I'd wager that 1-shot re-identification by a model is easily possible across single prompts from multiple accounts, even if they are 'anonymously' routed to the API. Almost certainly especially true when users are working on the same bit of code, project, or from the same environment. Upshot: sort of like using Tor browser thinking you're anonymous from websites, but keeping cookies across sessions. The amount of muckery you'd have to do to context to be truly hardened against relinking by large models seems truly substantial to the point of massive inefficiency. John Scott-Railton (@jsrailton) A lot of privacy has turned out to be just friction… …that's quickly being automated away. E.g. random anons could rest easy knowing manually de-anonymizing them wasn't worth the cost to anybody. Not true anymore. Now it's fast & scalable, especially for governments. — https://nitter.net/jsrailton/status/2027196811736485982#m
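The relinking worry above doesn't even require an LLM to illustrate. A minimal sketch, with invented prompt strings and a deliberately crude character-trigram similarity (not any real provider's pipeline): prompts that share an author's project context cluster together even when the accounts routing them are "anonymous".

```python
# Toy sketch only: crude character-trigram cosine similarity, standing in
# for the far stronger pattern-matching a large model can do. All prompts
# below are hypothetical examples, not real data.
from collections import Counter
from math import sqrt

def trigrams(text: str) -> Counter:
    """Character-trigram profile of a prompt."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram count profiles."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two prompts from the same (hypothetical) user working on one codebase,
# plus one from an unrelated user. Anonymous routing hides the account,
# not the shared context leaking through the prompt text itself.
p1 = "fix the retry logic in fetch_invoices() in billing/sync.py"
p2 = "why does fetch_invoices() in billing/sync.py double-charge on retry"
p3 = "suggest a pasta recipe that uses leftover roast vegetables"

same_user = cosine(trigrams(p1), trigrams(p2))
different_user = cosine(trigrams(p1), trigrams(p3))
assert same_user > different_user  # shared project context relinks them
```

If a few lines of stdlib Python can separate these, a frontier model operating over full conversation histories has vastly more signal to work with, which is the thread's point about hardening context being impractically expensive.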
Last year DoD gutted the team responsible for testing and validation of AI weapons systems and fired its head – leaving the internal practices of companies selling the tech the last line of defense. This isn’t a situation we should find ourselves in, ever: technologyreview.com/2025/06… Anthropic (@AnthropicAI) A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement… — https://nitter.net/AnthropicAI/status/2027150818575528261#m