AI Dynamics

Global AI News Aggregator

RAG and Token Optimization with Expanded Context Windows

I was playing with maximizing RAG content last year, but that was when models still had tiny context lengths; 4,000 or 8,000 tokens wasn't a lot to play with. Token optimization like that is less interesting now that we have 100,000+ tokens to work with, even with the less expensive models.

→ View original post on X — @simonw
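The token optimization the post alludes to often boils down to packing the highest-ranked retrieved chunks into a fixed token budget. Here is a minimal sketch of that greedy packing loop; the `pack_context` function and the word-count token heuristic are illustrative assumptions (a real system would count tokens with the model's own tokenizer), not anything taken from the post.

```python
# Hypothetical sketch: greedily pack retrieved RAG chunks into a token budget.
# Token cost is approximated by whitespace word count for simplicity; swap in
# the target model's tokenizer for accurate counts.

def pack_context(chunks, budget):
    """Add chunks (assumed pre-sorted by relevance) until the budget is spent."""
    packed, used = [], 0
    for text in chunks:
        cost = len(text.split())  # crude stand-in for a real token count
        if used + cost > budget:
            continue  # skip chunks that don't fit; a smaller one may still fit
        packed.append(text)
        used += cost
    return "\n\n".join(packed), used

# Toy usage: with a tight 12-"token" budget only the top chunk fits.
chunks = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "France borders Spain and Germany, among others.",
]
context, used = pack_context(chunks, budget=12)
```

With a small budget the loop is forced to drop lower-ranked chunks; with a 100,000+ token window, as the post notes, this kind of careful trimming matters far less.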
