not really. on a local network (1 gigabit, not even a 10 gigabit one), any latency is negligible in the context of llms. related:
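A back-of-envelope sketch of why LAN latency washes out: compare the time to move one streamed token over a 1 Gb/s link against the time a local model takes to generate it. The token size and generation speed below are illustrative assumptions, not benchmarks.

```python
# Illustrative numbers (assumptions): ~512 bytes per streamed token
# including JSON overhead, ~30 tokens/s local generation speed.
link_bps = 1_000_000_000        # 1 gigabit/s local link
token_bytes = 512               # assumed size of one streamed token
tokens_per_s = 30               # assumed local generation speed

transfer_s = token_bytes * 8 / link_bps   # wire time for one token
generation_s = 1 / tokens_per_s           # GPU time for one token

print(f"transfer:   {transfer_s * 1e6:.1f} us/token")
print(f"generation: {generation_s * 1e3:.1f} ms/token")
print(f"network share of total: {transfer_s / generation_s:.4%}")
```

Even with generous overhead assumptions, the wire time is microseconds against tens of milliseconds of generation, so the network contributes a vanishing fraction of end-to-end latency.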
@theahmadosman
-
Local LLMs 101: Understanding GPU Processing and System Architecture
– local llms 101 – tired of guides that just tell you to run a script and call it a day?
– want to actually know what your GPU is doing, not just trust a black box?
– here's what really happens when you run a local LLM
– what gets loaded, why, and how it all fits together
– no -
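One concrete piece of "what gets loaded, why": the bulk of what lands in VRAM is the weights, and their size follows directly from parameter count and precision. A minimal sketch, with the 7B parameter count and bit-widths chosen as illustrative assumptions (KV cache and activations come on top and are ignored here):

```python
# Rough weight-memory estimate: params * bits / 8 bytes, reported in GiB.
# Ignores KV cache, activations, and runtime overhead.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a given precision."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits, name in [(16, "fp16"), (8, "int8"), (4, "4-bit quant")]:
    print(f"7B @ {name}: ~{weight_gb(7, bits):.1f} GiB")
```

This is why quantization is the first lever for fitting a model on a consumer GPU: halving the bits halves the weight footprint.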
Local LLMs Tutorial: Runtime Fundamentals Explained
new tutorial just dropped covering how local llms work and all the runtime fundamentals
-
Local AI and GPU Acquisition Strategy Before AGI
hmm, i am talking about this specifically. he's right about local ai; that's why, despite it being half a joke, i say "before AGI arrives, acquire GPUs" in my tweets
-
Learning LLM Attention Mechanisms: From Zero to Expert
– you are
– a random CS grad with 0 clue how LLMs work
– get tired of people gatekeeping with big words and tiny GPUs
– decide to go full monk mode
– 2 years later you can explain attention mechanisms at parties and ruin them
– here’s the forbidden knowledge map
– top to bottom, -
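For the "attention mechanisms" the thread above promises to demystify, the core operation fits in a few lines. A minimal single-head scaled dot-product attention sketch in plain NumPy; the shapes and random inputs are just for illustration:

```python
import numpy as np

def attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V for a single head, no masking."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)           # each query scored against each key
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ v                       # weighted mix of value vectors

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # one mixed value vector per query position
```

Everything else in a transformer layer (multiple heads, masking, projections) is bookkeeping around this one matmul-softmax-matmul sandwich.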
Corporate Self-Interest Misalignment With User Welfare
a self-serving corporation doesn't have your best interests at heart except by coincidence
-
Nvidia Market Cap Surpasses Entire Pharmaceutical Sector
nvidia market cap > all pharma combined