AI Dynamics

Global AI News Aggregator

LLM Security: Mixing Developer Instructions with Untrusted Input

Pretty much inherent to how we build software on top of LLMs – mixing developer instructions with untrusted user input is an anti-pattern that's baked into how we build this stuff

→ View original post on X — @simonw,

Commentaires

Leave a Reply

Your email address will not be published. Required fields are marked *