AI Dynamics

Global AI News Aggregator

Open-source LLMs extend context length with minimal quality degradation

What a pleasure to see the open-source and academic community at full speed pushing smart ways to do long context with pretrained LLMs. Check this thread and amazing work: pushing LLaMA up to 8k context and more with negligible degradation in quality.

→ View original post on X by @thom_wolf
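The thread does not spell out a single method, but the context extensions it celebrates are typically achieved by rescaling rotary position embeddings (RoPE) so that an 8k-token window maps onto the position range the model saw during pretraining. The sketch below illustrates linear position interpolation under that assumption; the function names and the default lengths (`trained_len=2048`, `target_len=8192`) are illustrative, not code from the original work.

```python
import torch

def rope_frequencies(dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies for a head dimension `dim`."""
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

def interpolated_rope_angles(
    seq_len: int,
    dim: int,
    trained_len: int = 2048,   # context length the base model was pretrained with (assumed)
    target_len: int = 8192,    # extended context we want to support (assumed)
) -> torch.Tensor:
    """Rotation angles with positions linearly squeezed into the trained range.

    A scale < 1 compresses positions 0..target_len-1 into 0..trained_len-1,
    so the model never sees rotation angles larger than those from pretraining.
    """
    scale = trained_len / target_len          # e.g. 2048 / 8192 = 0.25
    positions = torch.arange(seq_len, dtype=torch.float32) * scale
    inv_freq = rope_frequencies(dim)
    # Outer product gives a (seq_len, dim/2) table of angles,
    # later turned into the cos/sin caches used by attention.
    return torch.outer(positions, inv_freq)

# The cos/sin caches built from these angles can replace the stock RoPE
# tables in a LLaMA-style attention block.
angles = interpolated_rope_angles(seq_len=8192, dim=128)
cos, sin = angles.cos(), angles.sin()
```

In practice such interpolation is usually followed by a brief fine-tuning pass on long sequences, which is consistent with the "negligible degradation in quality" reported in the thread.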
