AI Dynamics

Global AI News Aggregator

Evolution of LLM Context Windows and RAG Technology

In the early days of LLMs, context windows (the amount of text we can send to the model at once) were small, often capped at just 4,000 tokens (roughly 3,000 words), making it impossible to load all relevant context. This limitation gave rise to approaches like Retrieval-Augmented Generation (RAG).
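The core idea behind RAG can be sketched in a few lines: rather than stuffing every document into a limited context window, rank stored text chunks against the query and keep only the most relevant ones that fit the token budget. The sketch below is illustrative only; it uses a naive word-overlap score in place of the embedding-based retrieval real systems use, and all names (`retrieve`, `build_prompt`, etc.) are hypothetical.

```python
def tokenize(text):
    # Crude whitespace tokenizer; real systems use model-specific tokenizers.
    return text.lower().split()

def overlap_score(query, chunk):
    # Fraction of query words that appear in the chunk (stand-in for
    # cosine similarity over embeddings).
    q, c = set(tokenize(query)), set(tokenize(chunk))
    return len(q & c) / (len(q) or 1)

def retrieve(query, chunks, max_tokens=4000, top_k=3):
    """Rank chunks by relevance, then keep the top-k that still fit
    inside the model's context-window budget (e.g. 4,000 tokens)."""
    ranked = sorted(chunks, key=lambda ch: overlap_score(query, ch), reverse=True)
    selected, used = [], 0
    for chunk in ranked[:top_k]:
        n = len(tokenize(chunk))
        if used + n > max_tokens:
            break
        selected.append(chunk)
        used += n
    return selected

def build_prompt(query, chunks, **kw):
    # Concatenate only the retrieved context, then append the question.
    context = "\n\n".join(retrieve(query, chunks, **kw))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

For example, given a store containing both a chunk about transformers and one about fruit, `retrieve("how do transformer models use attention", store, top_k=1)` would surface the transformer chunk and leave the irrelevant one out of the prompt.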

→ View original post on X — @whats_ai
