Fine-tuning and Serving LLMs: Essential Tools and Frameworks

Here’s my list:
1. Transformers + TRL – for fine-tuning/RL on smol LLMs
2. vLLM / MLX / llama.cpp for serving them locally – fun for local projects
3. Take it a step further in the browser with transformers.js and its WebGPU backend
4. For remote LLMs w/ agents – the OpenAI Agents SDK is pretty dope
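As a rough sketch of item 2, each of those serving stacks boils down to a one-liner once installed. The model names and GGUF file below are placeholders for whatever smol LLM you actually use, not recommendations:

```shell
# vLLM: spins up an OpenAI-compatible HTTP server for a Hub model
vllm serve HuggingFaceTB/SmolLM2-1.7B-Instruct

# llama.cpp: llama-server exposes an HTTP API over a local GGUF file
llama-server -m smollm2-1.7b-instruct-q4_k_m.gguf --port 8080

# MLX (Apple Silicon): mlx-lm ships a similar local server
mlx_lm.server --model mlx-community/SmolLM2-1.7B-Instruct-4bit
```

All three speak (roughly) the OpenAI chat-completions API shape, so client code can usually point at any of them by swapping the base URL.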