For sure. And indeed CV pre-training was a key inspiration for ULMFiT. AFAIK there were no previous examples of using language modeling on a general-purpose corpus as a self-supervised task, then fine-tuning it in two more steps for downstream tasks (i.e. like today's LLMs).
ULMFiT Language Modeling Self-Supervised Pre-training Innovation