System Prompts and Instruction Delimitation in AI Models

Models have worked like that for over a year now – OpenAI have a similar mechanism; it's part of how their system prompts work. The problem is that it's not infallible: even with special reserved tokens to delimit instructions versus data, it's still possible for models to lose track.
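A minimal sketch of what that delimiting looks like in practice: chat templates wrap each message in special reserved tokens so the model can, in principle, tell instructions apart from data. The ChatML-style `<|im_start|>`/`<|im_end|>` markers below are illustrative; the exact tokens and template vary by model, and the `render_chatml` helper is hypothetical.

```python
def render_chatml(messages):
    """Serialize chat messages using ChatML-style reserved tokens.

    Each message is wrapped as:
        <|im_start|>ROLE\nCONTENT<|im_end|>
    so the model sees an unambiguous boundary between the system
    prompt, user input, and any untrusted data pasted into a message.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    return "\n".join(parts)


prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this document: ..."},
])
print(prompt)
```

The weakness the paragraph above describes is that these boundaries are only as strong as the model's training: nothing stops text *inside* a user message from imitating instruction-like language, and models can still be steered by it despite the delimiters.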