Before GPT-3, we fine-tuned a separate task-specific neural network (e.g., BERT) for each task we wanted to solve.
With GPT-3, there was a paradigm shift: a single large language model can handle many tasks via few-shot prompting, where a handful of input-output examples are placed directly in the prompt with no gradient updates.
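The mechanics of few-shot prompting are simple: labeled examples are concatenated into a single text prompt, followed by the new input, and the model completes the pattern. A minimal sketch, using a hypothetical sentiment-classification task (the `build_few_shot_prompt` helper and the example reviews are illustrative, not from any specific library):

```python
def build_few_shot_prompt(examples, query):
    """Format (input, label) pairs plus a new query into one prompt string.

    The model is expected to continue the pattern and emit the label
    for the final, unlabeled query.
    """
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block ends at "Sentiment:" so the model fills in the answer.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("I loved this movie!", "positive"),
    ("A complete waste of time.", "negative"),
]
prompt = build_few_shot_prompt(examples, "An instant classic.")
print(prompt)
```

The resulting string would be sent as-is to the language model; no weights are updated, which is exactly what distinguishes this from the per-task fine-tuning it replaced.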
[Figure: the GPT-3 paradigm shift from task-specific models to few-shot prompting]