AI Dynamics

Global AI News Aggregator

LLM Security: Untrusted Text and Prompt Injection Risks

I'd add another circle to that diagram for the permissions of anyone else who authored text that made it into the context – which often means untrusted external attackers since it's so easy to sneak in malicious instructions for a lot of LLM systems

→ View original post on X — @simonw,

Commentaires

Leave a Reply

Your email address will not be published. Required fields are marked *