Run DeepSeek locally: Ollama's hidden tricks revealed

Master Ollama and make your AI model run more efficiently!
Core content:
1. Introduction to the Ollama open source project and its advantages
2. LLMs supported by Ollama and usage scenarios
3. Practical guide: Detailed steps to customize the Ollama installation path
Recently, the Chinese AI model DeepSeek has exploded in popularity, and more and more people are trying to deploy large language models (LLMs) locally, which both protects privacy and saves money. Many newcomers to Ollama, however, are put off when the default installation eats into the C drive, or when they can't load locally downloaded model files. This article shows you how to customize the installation path and load local model files in seconds, so AI can truly become your personal computing assistant!
What is Ollama?
Ollama is an open-source project for running and managing large language models on your own machine.
Main features:
Local operation: Run LLMs on a personal computer or server, keeping your data private and secure.
Model support: Compatible with many open-source LLMs, such as LLaMA and GPT-J.
Ease of use: A simple interface lets users deploy and query models quickly (see the quick example after the list below).
Cross-platform: Supports macOS, Linux, and Windows.
Resource optimization: Techniques such as quantization keep models performant on local hardware.
Use cases:
Privacy protection: Process sensitive data locally to avoid leaks.
Offline use: Run models without an internet connection.
Custom development: Developers can build tailored applications on top of Ollama.
Study and research: Students and researchers can use it for experiments and project development.
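As a taste of that ease of use: once Ollama is installed, a single command pulls a model (on first run) and opens an interactive chat. The model name here is purely illustrative:
> ollama run llama3.2
Type /bye to leave the chat session.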
How to customize the Ollama installation path?
The default installation path of Ollama is on the C drive, and the .exe installer offers no way to choose a different location. The following 3 steps let you install Ollama wherever you like: a lifesaver for anyone whose C drive is running out of space.
1️⃣ Step 1: Open the folder containing the installation package, type cmd in the Explorer address bar, and press Enter to open a command prompt in that folder
2️⃣ Step 2: Create the folder that Ollama will be installed into
For example, create the folder H:\Ollama
3️⃣ Step 3: Run the installation command to complete the installation
OllamaSetup.exe /DIR=H:\Ollama
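When the installer finishes, you can verify the install from any command prompt (this assumes the installer added Ollama to your PATH, which it normally does):
> ollama -v
This prints the installed Ollama version if everything went well.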
Customize the storage path of the large language models
Create a system environment variable OLLAMA_MODELS and set its value to the desired model storage location, e.g. H:\Ollama\models.
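If you prefer the command line to the System Properties dialog, setx can create the variable from an elevated (administrator) command prompt; the path is just the example location from above:
> setx OLLAMA_MODELS "H:\Ollama\models" /M
The /M switch makes it a system-wide variable rather than a per-user one. Quit Ollama from the system tray and relaunch it so the new location takes effect.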
How to migrate previously downloaded models?
By default, models are stored in the .ollama folder under your user home directory; mine, for example, is C:\Users\TAOjay\.ollama\models. Copy the contents of that folder to the folder you set in OLLAMA_MODELS.
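Copying in Explorer works, or robocopy can do it from the command line (the paths match the examples above; substitute your own):
> robocopy "C:\Users\TAOjay\.ollama\models" "H:\Ollama\models" /E
The /E flag copies all subdirectories, including empty ones. Once ollama list shows the models at the new location, the old folder can be deleted to reclaim C-drive space.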
Run the following command to check whether the migration succeeded:
> ollama list
NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    0a8c26691023    4.7 GB    13 hours ago
deepseek-r1:1.5b  a42b25d8c10a    1.1 GB    38 hours ago
Load the locally downloaded GGUF model
For example, suppose you have downloaded the model file DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf into H:/LLM_MODELS.
Create a file named Modelfile in that directory with the following content:
FROM DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
ollama create deepseek-r1:7b -f H:/LLM_MODELS/Modelfile
ollama run deepseek-r1:7b
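The one-line Modelfile above is all that's required, but the Modelfile format also supports directives for baking in defaults. A sketch (the parameter value and system prompt are illustrative, not part of the original setup):
FROM DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
PARAMETER temperature 0.6
SYSTEM You are a concise, helpful assistant.
Re-running ollama create with the updated Modelfile rebuilds the model with these defaults baked in.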
To view the Modelfile for a given model, use the ollama show --modelfile command.
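For example, for the model created above:
> ollama show deepseek-r1:7b --modelfile
This prints the Modelfile Ollama generated for the model, which also makes a convenient starting point when customizing other models.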