AI Dynamics

Global AI News Aggregator

Running Llama 3 70B locally with llamafile and LLM

That starts a llamafile serving Llama 3 70B on localhost port 8080. You can then talk to it from LLM by installing the llm-llamafile plugin and selecting the llamafile model:

llm install llm-llamafile
llm -m llamafile "3 neat characteristics of a pelican"
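Since llamafile exposes an OpenAI-compatible API on localhost port 8080, you can also talk to the model without the LLM tool at all. A minimal sketch using only the Python standard library; it assumes the llamafile server from above is still running, and that the /v1/chat/completions endpoint follows the OpenAI convention llamafile implements:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llamafile."""
    payload = {
        # llamafile serves a single model, so the name here is nominal
        "model": "llamafile",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("3 neat characteristics of a pelican")
# To actually send it (requires the llamafile server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

This is just the request construction; the LLM plugin above wraps the same HTTP API for you.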

→ View original post on X — @simonw
