Hey Andrew, are you running a local model? If so, which one, and at what size? And how was the performance? I've been disappointed with gpt-oss-120B and Gemma4-31B, so I was thinking about trying a larger one like Qwen-235B.
Local Model Performance Comparison: GPT-OSS, Gemma4, Qwen