> i want @UnslothAI and @danielhanchen to do a full session on dynamic quant and a separate one on fine-tuning
> i want @DSPyOSS and @lateinteraction to make the case for DSPy's adoption
@theahmadosman
-
Dynamic Quantization and Fine-tuning Sessions with Unsloth and DSPy
-
NVIDIA GPU Access, Server Building, and OpenAI Programming Workshops
> i want @nvidia to show up, and actually sell GPUs at MSRP to attendees
> i want to walk people through building a proper 8x gpu server
> i want openai-compatible programming workshops
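On the OpenAI-compatible workshop wish: the core pattern such a workshop would teach is just a chat-completions POST against a local server, since llama.cpp, LM Studio, and vLLM all expose the same request shape. A minimal sketch — the base URL and model name below are hypothetical placeholders, not a real deployment:

```python
import json

# Any OpenAI-compatible server (llama.cpp, LM Studio, vLLM) accepts this
# same request shape; only the base URL and model name change.
# BASE_URL and "local-model" are illustrative placeholders.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat-completions payload for a local server."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("Explain dynamic quantization in one sentence.")
body = json.dumps(payload)  # this JSON string is what gets POSTed to BASE_URL
```

Because the request shape is standardized, the same workshop code runs against a local box or a hosted endpoint by changing only the URL.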
-
Open-Source AI Labs and Hardware Showcase Wanted
> i want @tenstorrent to show off what they've been building
> i want @lmstudio and @yagilb to run an in-depth workshop + office hours
> i want open-source research labs to demo their latest models (the American DeepSeek project is still alive, right, @natolambert?)
-
Local AI Development: GPU Hardware and Community Building
a few serious and IMPORTANT words
> "Buy a GPU" wasn't meant to be serious
> but the problems were
> having been deep in the local AI trenches for a few years
> from lurking, to building, to modding r/LocalLLaMA
> to the whole "Buy a GPU" thing, to launching x/LocalLLaMA
-
Organized Strategy Needed for Local LLM Adoption
I really think this is the way. We need very focused, organized guidance to get people to take Local LLMs seriously. DMs are open if you have any ideas, potential collaborations, or anything related.
-
AMA Session Announcement for AI and Technology Community
AMA's mainly for the folks stuck waiting on me — you know who you are 😀 line up your questions for Friday
-
GPU Rig Configuration: Don't Mix and Match GPUs, and Watch for Motherboard Bottlenecks
> don't mix and match GPUs
> separate 3090s rig
> the mobo you already got would cause a bottleneck for 2x GPUs in all cases
> you might wanna return mobo and upgrade to an epyc/xeon/tr build, or just use it as it is for a single gpu
> processor/ram/(and partially mobos) are
-
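The motherboard-bottleneck advice above comes down to PCIe lane math: consumer CPUs often expose only ~20-24 usable lanes, so a second GPU ends up at x8 or on an x4 chipset slot, while EPYC/Xeon/Threadripper platforms can feed every card a full x16. A back-of-the-envelope sketch — the per-lane figures are approximate post-encoding throughput, not exact:

```python
# Approximate one-directional PCIe throughput in GB/s per lane, after
# 128b/130b encoding overhead (Gen 3 / Gen 4 / Gen 5).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Rough one-directional bandwidth of a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

# A full Gen4 x16 slot versus the x4 chipset slot a consumer board
# often forces the second GPU into:
full = link_bandwidth_gbps(4, 16)   # ~31.5 GB/s
split = link_bandwidth_gbps(4, 4)   # ~7.9 GB/s
print(f"x16: {full:.1f} GB/s, x4: {split:.1f} GB/s")
```

For inference with the whole model resident in VRAM this matters less, but for multi-GPU loading, tensor-parallel traffic, or training, the x4 link becomes the choke point — hence the suggestion to move to a high-lane-count platform or run a single GPU.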
Rust Cannot Match Python Performance Numbers
You can never make those numbers with Rust. Python is all you need
-
Running DeepSeek Locally: VRAM Requirements and Privacy
is there any site like this to download VRAM? i wanna run DeepSeek locally (for privacy reasons – don't wanna give my data to china)
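You can't download VRAM, but the real question behind the joke — how much memory running DeepSeek locally actually takes — has a simple first-order answer: weight memory ≈ parameter count × bits per weight / 8, plus some overhead for KV cache and activations. A rough rule-of-thumb sketch, not a definitive calculator; `vram_estimate_gb` and the 15% overhead factor are illustrative assumptions:

```python
def vram_estimate_gb(params_billion: float, bits: float, overhead: float = 1.15) -> float:
    """Rough GB of memory needed to load a model at a given quantization.

    overhead=1.15 is an assumed ~15% allowance for KV cache and activations.
    """
    bytes_for_weights = params_billion * 1e9 * bits / 8
    return bytes_for_weights * overhead / 1e9

# DeepSeek-V3/R1 is ~671B total parameters (MoE), per its public model card,
# so even 4-bit quantization needs hundreds of GB -- far beyond one GPU:
print(f"671B @ 4-bit: ~{vram_estimate_gb(671, 4):.0f} GB")
# A small 8B model at 4-bit fits comfortably on a single 24 GB card:
print(f"  8B @ 4-bit: ~{vram_estimate_gb(8, 4):.0f} GB")
```

This is why full DeepSeek at home usually means CPU RAM offload or a distilled/smaller model rather than a single-GPU setup.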