# Ollama Day-1 Guide

After installing Ollama, you can start running local AI models on Linux or macOS with a few lean terminal commands.
## Installing and First Use

- Install via shell:

  ```shell
  curl https://ollama.ai/install.sh | sh
  ```

  This sets up Ollama on Linux and macOS.

- Run your first model:

  ```shell
  ollama run llama3.2:3b
  ```

  This downloads the selected model (if not already present) and opens an interactive chat.
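You don't have to chat interactively: `ollama run` also accepts a prompt as an extra argument, prints a single reply, and exits. A minimal sketch, reusing the model name from above (the availability guard is only there so the snippet degrades gracefully when Ollama isn't installed):

```shell
# One-shot prompt: `ollama run MODEL "PROMPT"` answers once and exits
# instead of opening an interactive chat session.
MODEL="llama3.2:3b"                        # model name from the guide
PROMPT="Explain Unix pipes in one sentence."
if command -v ollama >/dev/null 2>&1; then
  ollama run "$MODEL" "$PROMPT"
else
  echo "ollama not installed; would run: ollama run $MODEL \"$PROMPT\""
fi
```

This form is handy inside scripts and pipelines, where an interactive chat would block.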
## Quick Reference Table

| Command | Function |
|---|---|
| `ollama run model_name` | Run model in chat |
| `ollama list` | See downloaded models |
| `ollama pull model_name` | Download a new model |
| `ollama rm model_name` | Delete a model |
| `ollama show model_name` | Model details |
| `ollama cp src dest` | Copy model to new name |
| `/bye` (during run) | Exit chat session |
| `/set system "prompt"` (during run) | Set one-off system prompt |
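These commands compose well in scripts. As a sketch, the first column of `ollama list` output can be extracted to drive loops (the sample output below is illustrative, not captured from a real session; in real output, model names are likewise the first column under a header row):

```shell
# Illustrative stand-in for `ollama list` output (header + two models).
LIST_OUTPUT='NAME ID SIZE MODIFIED
llama3.2:3b abc123 2.0GB 2days
bash-expert:latest def456 4.7GB 1day'

# Skip the header row and keep only the first column (the model names).
NAMES=$(printf '%s\n' "$LIST_OUTPUT" | awk 'NR > 1 { print $1 }')
printf '%s\n' "$NAMES"

# In a real session:  ollama list | awk 'NR > 1 { print $1 }'
```

The same pattern feeds naturally into a loop, e.g. piping the names into `xargs -n1 ollama show` to inspect every downloaded model.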
## Customizing an Ollama Model

To customize an Ollama model (edit it, set a system prompt, and save it as a new model), use the following lean workflow.

1. Export the current Modelfile. Use the original model as a template:

   ```shell
   ollama show llama3 --modelfile > MyModelfile
   ```

   This creates a file named `MyModelfile` containing the current model's configuration.

2. Edit the Modelfile and set a system prompt. Open `MyModelfile` in any text editor (e.g., nano, vim, code):

   ```shell
   nano MyModelfile
   ```

   In the file, adjust parameters and add or change the system prompt. For example:

   ```
   FROM llama3
   PARAMETER temperature 0.7
   SYSTEM """
   You are a Linux and Bash expert. Only provide concise,
   correct code with clear explanations.
   """
   ```

3. Build your custom model. Create a new model from the customized Modelfile:

   ```shell
   ollama create bash-expert -f MyModelfile
   ```

   This saves the tailored model under the name `bash-expert`.

4. Run the customized model:

   ```shell
   ollama run bash-expert
   ```

   The model will now always respond using your provided system prompt.
## In-Session Quick System Prompt (One-off)

During an interactive chat:

```
/set system "You are a SQL expert. Explain queries before showing code."
```

This applies the system prompt for just that session.
This process lets you efficiently create, guide, and save your own specialized Ollama models for targeted tasks.
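The customization workflow above can also be scripted end to end. A minimal sketch, reusing the guide's example names (`MyModelfile`, `bash-expert`); the `ollama create` step is guarded so the script still completes on a machine without Ollama:

```shell
#!/bin/sh
# Write the Modelfile from the guide's example via a quoted heredoc,
# so the triple quotes and "$"-free text pass through literally.
cat > MyModelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
SYSTEM """
You are a Linux and Bash expert. Only provide concise,
correct code with clear explanations.
"""
EOF

# Build the custom model only when the Ollama CLI is present.
if command -v ollama >/dev/null 2>&1; then
  ollama create bash-expert -f MyModelfile
fi
```

Keeping the Modelfile in version control alongside a script like this makes the custom model reproducible on any machine with Ollama installed.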