AI Dynamics

Global AI News Aggregator

Less Data, Better Results: Finetuning LLMs with Trimmed Datasets

Alongside LIMA, this is another interesting paper showing that more data is not always better when finetuning LLMs: https://arxiv.org/abs/2307.08701. Trimming the original 52k Alpaca dataset down to 9k examples can improve performance when finetuning 7B and 13B parameter LLMs.
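The core idea of the paper is to score each (instruction, response) pair with an LLM judge and keep only the high-quality subset. A minimal sketch of that score-and-filter loop is below; the `judge_quality` heuristic is a placeholder of my own (the paper prompts ChatGPT to rate each example), so treat the scoring logic as an assumption to be swapped for a real judge.

```python
def judge_quality(example: dict) -> float:
    """Placeholder quality score on a 0-5 scale.

    Stands in for the paper's LLM judge: a real pipeline would prompt a
    model to rate the (instruction, response) pair. Here, trivially
    short responses are scored low as a crude proxy.
    """
    if len(example["output"].split()) < 3:
        return 1.0
    return 4.5


def trim_dataset(dataset: list[dict], threshold: float = 4.5) -> list[dict]:
    """Keep only examples whose judged quality meets the threshold."""
    return [ex for ex in dataset if judge_quality(ex) >= threshold]


if __name__ == "__main__":
    # Toy Alpaca-style records; the real dataset has 52k of these.
    alpaca_like = [
        {"instruction": "Explain gravity.",
         "output": "Gravity is the attraction between masses."},
        {"instruction": "Say hi.", "output": "Hi."},
    ]
    trimmed = trim_dataset(alpaca_like)
    print(len(trimmed))  # the low-quality example is filtered out
```

With a real LLM judge in place of the heuristic, running this filter over the full 52k Alpaca set is what yields the ~9k high-quality subset the paper finetunes on.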

→ View original post on X by @rasbt
