Tools & Experiments

Ollama

local-llm open-source self-hosted

Run open-source LLMs locally with one command. No GPU required.

Ollama makes running local LLMs dead simple. One command, ollama run llama3, and you've got a chatbot running on your own hardware. No API keys, no cloud, no data leaving your machine.
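A minimal session might look like this (assuming Ollama is already installed and its daemon is running; model names are from Ollama's public library):

```shell
# Download the model on first use and drop into an interactive chat
ollama run llama3

# Or do a one-shot generation straight from the command line
ollama run llama3 "Explain quantisation in one sentence."

# See which models are downloaded locally
ollama list
```

The first run downloads the model weights; after that, everything is served from local disk.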

The model library is solid: Llama 3, Mistral, Gemma, Code Llama, and Phi, each downloadable and runnable with a single command. Ollama handles the quantisation and memory management so you don't have to.
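Beyond the interactive CLI, Ollama serves a local REST API (by default on port 11434), which is how you'd script against it. A standard-library-only Python sketch; the request body shape here is an assumption based on Ollama's documented /api/generate endpoint:

```python
import json

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.
    The field names are taken from Ollama's API docs; "stream": False
    asks for a single JSON response instead of a token stream."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

# Sending it requires the Ollama daemon running on the default port:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=build_generate_request("llama3", "Why is the sky blue?").encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because the API is plain HTTP on localhost, the same sketch works from any language with an HTTP client.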

Performance depends on your hardware. On a decent CPU you’ll get usable speeds with smaller models. A GPU obviously helps but isn’t required. 7B models on modest hardware are surprisingly capable for local work.

Why we use it: Quick experiments without burning API credits. We test new open-source models the day they drop.

Verdict: The Docker of LLMs. Should be on every AI developer’s machine.
