AI Dynamics

Global AI News Aggregator

Deep dive into tokenization vulnerabilities across multiple language models

Nice new read on tokenization!
You've heard about the SolidGoldMagikarp token, which breaks GPT-2 because it was present in the tokenizer's training data but not in the LLM's training data later. This paper digs in with a lot more depth and detail, on a lot more models, discovering a less…
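The failure mode described above (a token that exists in the tokenizer's vocabulary but that the LLM rarely or never saw during training) tends to leave that token's embedding undertrained. A common heuristic for surfacing such glitch-token candidates is to flag rows of the embedding matrix with anomalously small norms. A minimal sketch of that heuristic, using a synthetic embedding matrix rather than a real model's weights:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64

# Synthetic embedding matrix: most rows look "trained" (unit-scale noise),
# while a few planted glitch rows stay near a tiny initialization.
emb = rng.normal(0.0, 1.0, size=(vocab_size, dim))
glitch_ids = [13, 421, 777]
emb[glitch_ids] = rng.normal(0.0, 0.01, size=(len(glitch_ids), dim))

# Heuristic: flag tokens whose L2 norm falls far below the vocabulary median.
norms = np.linalg.norm(emb, axis=1)
threshold = 0.5 * np.median(norms)
suspects = np.flatnonzero(norms < threshold)

print(sorted(suspects.tolist()))  # → [13, 421, 777]
```

On a real model you would load the actual input-embedding matrix instead of the synthetic one; the threshold here is an illustrative choice, and papers in this area combine the norm check with behavioral probes (asking the model to repeat the token) to confirm a glitch.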

→ View original post on X: @karpathy
