AI Dynamics

Global AI News Aggregator

Security Risks of Local LLM Agents vs Web-Based Interfaces

I should clarify that the risk is highest if you're running local LLM agents (e.g. Cursor, Claude Code, etc.). If you're just talking to an LLM on a website (e.g. ChatGPT), the risk is much lower *unless* you start turning on Connectors. For example I just saw ChatGPT is adding

→ View original post on X — @karpathy,

Commentaires

Leave a Reply

Your email address will not be published. Required fields are marked *