Install guides
Step-by-step guides for getting AI running on your own machine.
Pick the guide that matches how you like to work. If you have never installed AI software before, start with Ollama — it is the smoothest path. If you want a graphical chat interface, start with LM Studio. The llama.cpp guide is for people who like the command line and want maximum control over builds, quantization, and serving.
- Beginner
Install Ollama and run your first local model
From zero to a working local LLM in about ten minutes, with the commands that actually matter and the gotchas nobody warns you about.
Updated May 2026 · 10 min read · macOS, Linux, Windows

- Intermediate
Build and run llama.cpp from source
How to compile llama.cpp with the right backend for your hardware, pick a GGUF quantization that fits your RAM, and serve an OpenAI-compatible endpoint.
Updated May 2026 · 20 min read · macOS, Linux, Windows

- Beginner
LM Studio setup and side-by-side model evaluation
A practical walkthrough for using LM Studio to download, compare, and serve local models, plus when LM Studio is the wrong tool.
Updated May 2026 · 12 min read · macOS, Linux, Windows
