I would ditch Word2Vec; the embeddings learned by LLMs are much better. For sentiment classification, you can start with DistilBERT as a base model and fine-tune a few layers (see https://magazine.sebastianraschka.com/p/finetuning-large-language-models) 1/2
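A minimal sketch of that "tune a few layers" setup, assuming the Hugging Face `transformers` and `torch` libraries are installed. To keep the sketch self-contained and offline it builds the model from a config (random weights); in practice you would load pretrained weights with `DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)`. The parameter-name prefixes (`distilbert.transformer.layer.5` is the last of DistilBERT's six blocks, `pre_classifier`/`classifier` the head) are the library's names for this architecture.

```python
from transformers import DistilBertConfig, DistilBertForSequenceClassification

# Random-init stand-in for from_pretrained(...), so the sketch runs offline.
config = DistilBertConfig(num_labels=2)
model = DistilBertForSequenceClassification(config)

# Freeze everything, then unfreeze only the last transformer block and the
# classification head -- the "few layers" actually being fine-tuned.
for param in model.parameters():
    param.requires_grad = False
for name, param in model.named_parameters():
    if name.startswith(
        ("distilbert.transformer.layer.5", "pre_classifier", "classifier")
    ):
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```

From here you would train as usual (e.g. with `Trainer` or a plain PyTorch loop); the optimizer only updates the unfrozen parameters, which keeps fine-tuning cheap compared to updating the full network.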