What a pleasure to see the open-source and academic community moving at full speed on smart ways to do long context with pretrained LLMs. Check this thread and the amazing work: pushing LLaMA up to 8k context and beyond with negligible degradation in quality.
Open-source LLMs extended context length with minimal quality degradation
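Much of this line of work relies on position interpolation for RoPE: instead of extrapolating rotary position embeddings past the trained context window, positions are rescaled so the extended range maps back into the range seen during training. The sketch below is a minimal illustration of that idea with NumPy; the function name, the head dimension, and the 2048-token trained context are assumptions for the example, not taken from the thread.

```python
import numpy as np

def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
    """RoPE rotation angles for the given positions.

    `scale` < 1 implements position interpolation: positions are
    compressed so an extended context maps into the trained range.
    (Illustrative sketch; dim/base are typical LLaMA-style values.)
    """
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(np.asarray(positions) * scale, inv_freq)

# Assume a model trained on 2048 tokens, extended to 8192.
trained = rope_angles(np.arange(2048))
extended = rope_angles(np.arange(8192), scale=2048 / 8192)

# With interpolation, every extended position lands inside the
# position range the model was trained on (0 <= p*scale < 2048).
assert extended.max() < trained.max() + 1.0
```

Without the `scale` factor, positions 2048–8191 would produce rotation angles the model never saw in training, which is what causes the sharp quality drop when naively running past the trained window.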