I'd add another circle to that diagram for the permissions of anyone else who authored text that made it into the context – which often means untrusted external attackers since it's so easy to sneak in malicious instructions for a lot of LLM systems
LLM Security: Untrusted Text and Prompt Injection Risks
By
–
Leave a Reply