Pretty much inherent to how we build software on top of LLMs – mixing developer instructions with untrusted user input is an anti-pattern that's baked into how we build this stuff
LLM Security: Mixing Developer Instructions with Untrusted Input
By
–
Global AI News Aggregator
By
–
Pretty much inherent to how we build software on top of LLMs – mixing developer instructions with untrusted user input is an anti-pattern that's baked into how we build this stuff
Leave a Reply