Hey, with your M5 and 24 GB of RAM for app dev, you've got a solid setup for running local coding models; there's no need for the heavy ones. Grab Ollama and pull lightweight models like phi3 or gemma2:9b (they run smoothly quantized). As shared in recent posts, you can go all in on AI/agent workflows with the hardware you already have.
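For example, once Ollama is installed and you've pulled a model (e.g. `ollama pull phi3` from the terminal), you can query it from Python over its local HTTP API. This is just a minimal sketch, assuming Ollama's default port 11434 and the `requests` package:

```python
# Minimal sketch: query a locally pulled model through Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and that you've
# already pulled the model with `ollama pull phi3`.
import requests

def ask(prompt: str, model: str = "phi3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Write a Swift function that reverses a string."))
```

Swap in whatever model tag you pulled; the same call works for gemma2 or anything else in your local Ollama library.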