Don’t be fooled by “one-click deployment”! A guide to avoiding pitfalls when deploying DeepSeek locally using Ollama

"One-click deployment" is not as easy as it sounds. Here is how to build DeepSeek efficiently with Ollama.
Core content:
1. Introduction to the Ollama tool and its role in DeepSeek deployment
2. Comparison table of video memory requirements for different DeepSeek model versions
3. Detailed steps: Ollama installation, configuration and solutions to common problems
DeepSeek is the hottest topic around right now. The internet is full of articles about deploying AI models, and they love to promise "one-click deployment, easy to get started" or "build a locally runnable large model in 30 minutes". The reality is that actually standing up a DeepSeek model with Ollama can feel like a real adventure! Today, let's talk through those pitfalls and help you build your own DeepSeek successfully.
What is Ollama
Ollama is our right-hand tool for building models. It is like a skilled construction worker that helps us deploy all kinds of large language models quickly and easily. With Ollama, there is no complex technical knowledge to master: a few simple commands are enough to run a model locally. It also provides a simple, easy-to-use interface, so interacting with the model feels as natural as chatting with a friend.
Ollama download address: www.ollama.com/download
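For reference, these are the everyday Ollama commands you will use most often once it is installed (the model tag here is just an example):
# Download a model without starting a chat session
ollama pull deepseek-r1:14b
# List the models already downloaded locally
ollama list
# Start an interactive chat with a model
ollama run deepseek-r1:14b
# Delete a model you no longer need
ollama rm deepseek-r1:14b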
Requirements for different model versions
The following is a rough reference table of video memory requirements. The download sizes are those of the default 4-bit quantized builds that Ollama ships; the suggested video memory figures are ballpark estimates of mine, so treat them accordingly:
Model version | Approx. download size | Suggested video memory
deepseek-r1:1.5b | ~1.1 GB | 4 GB or more
deepseek-r1:7b | ~4.7 GB | 8 GB or more
deepseek-r1:8b | ~4.9 GB | 8 GB or more
deepseek-r1:14b | ~9 GB | 12-16 GB
deepseek-r1:32b | ~20 GB | 24 GB or more
deepseek-r1:70b | ~43 GB | 48 GB or more (usually multi-GPU)
This table can help you quickly select the DeepSeek version that suits your hardware.
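As a rough rule of thumb (my own estimate, not an official figure): pick the largest tag whose quantized weights fit comfortably in your video memory, for example:
# Around 8 GB of video memory: the 7b/8b distills are a safe pick
ollama run deepseek-r1:8b
# Around 16 GB (like the P100 used in this guide): 14b fits comfortably
ollama run deepseek-r1:14b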
Ollama installation and configuration detailed steps
Pre-installation preparation
Before installing Ollama, make sure your system meets the basic requirements. Ollama supports the mainstream operating systems: Linux, macOS, and Windows. Plan for at least 32GB of memory and enough storage space for the model weights. I use Ubuntu Server with 32 cores, 64GB of memory, and a P100 compute card.
Also make sure the necessary dependencies are installed on the system. On Linux, for example, you may need some basic development toolkits.
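A quick way to sanity-check the machine before installing (standard Linux commands; nvidia-smi assumes the NVIDIA driver is already installed):
# Total and available memory
free -h
# Free disk space for the model weights
df -h
# GPU model and video memory
nvidia-smi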
Step 1: Install Ollama
Installing Ollama on a Linux system is relatively simple. Open a terminal and enter the following command:
curl -fsSL https://ollama.com/install.sh | sh
This command will download the installation script from Ollama's official server and execute it.
The first pitfall encountered
Normally, the steps above are all it takes. In practice, though, the network can become a stumbling block: the Ollama download crawls along, and you may even see the error "Error: pull model manifest", which makes an already slow installation even more painful. But don't panic, the solution is here:
1. Download the install script and save it locally:
curl -fsSL https://ollama.com/install.sh -o ollama_install.sh
2. Rewrite the GitHub download address in the script to go through an acceleration mirror (gh.llkk.cc is one public mirror; note that this also pins the download to release v0.5.7; a quick check follows after these steps):
sed -i 's|https://ollama.com/download/ollama-linux|https://gh.llkk.cc/https://github.com/ollama/ollama/releases/download/v0.5.7/ollama-linux|g' ollama_install.sh
3. Make the script executable:
chmod +x ollama_install.sh
4. Run the script to download and install:
sh ollama_install.sh
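If the install still tries to hit github.com directly, check whether the address replacement in step 2 actually took effect:
# Should print the lines containing the mirror address; no output means the sed pattern did not match
grep -n 'gh.llkk.cc' ollama_install.sh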
After the installation is complete, you can verify whether the installation was successful by running the following command:
ollama --version
If the Ollama version information is displayed correctly, it means the installation is successful.
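On Linux, the install script also registers Ollama as a systemd service, so you can additionally confirm it is running in the background:
# The service should report "active (running)"
systemctl status ollama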
Step 2: Start the model
Enter the following command to start the model:
ollama run deepseek-r1:14b
This step generally goes smoothly. The only annoyance is that the first download can be as slow as an old ox pulling a cart. Once the download succeeds, though, the rest is plain sailing; if it fails, don't be discouraged, just retry a few times until one attempt gets through.
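One trick that helps: ollama pull downloads the weights without opening a chat session, and in my experience re-running it after a failure keeps the layers that already finished instead of starting from zero:
# Pre-fetch the model; on failure, retry and completed layers are kept
ollama pull deepseek-r1:14b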
When the terminal starts printing output, the model has started successfully! At this point you can begin chatting with DeepSeek.
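Besides the interactive terminal, Ollama also exposes a local HTTP API on port 11434, which is what the UI clients in the next step talk to. A minimal test with curl:
# Send a single prompt through the REST API (stream disabled so curl prints one JSON response)
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'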
Step 3: Install Chatbox
The black cmd window is frustrating to use. Is there a UI that makes this more pleasant? If you want a full web UI on top of Ollama, a popular choice is Open WebUI (docs.openwebui.com). If you are familiar with Docker deployment or comfortable with a little code, you can follow its tutorial, install it yourself, and enjoy the hands-on fun; a sketch follows below.
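If you go the Open WebUI route, the Docker-based install looks roughly like this (taken from its documentation at the time of writing; check docs.openwebui.com for the current command, since the image tag and ports may change):
# Run Open WebUI in Docker and let it reach the Ollama service on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
Then open http://localhost:3000 in your browser.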
If you just want to keep things simple, download a client, and be up and running in a few clicks, then Chatbox is the right choice. Installing it is no different from any other software: double-click the installer and you are done. Download it from chatboxai.app/zh and give it a try!
The second pitfall encountered
After successfully installing Ollama and launching DeepSeek on the server, the second problem appeared: the API could be called normally from the local command line, but every attempt to reach the server's DeepSeek endpoint from an external computer failed.
The official documentation shows that, to allow external access, Ollama's configuration must be changed so that it listens on 0.0.0.0 (and on a custom port if needed). The specific steps are as follows:
Step 1: Modify the file
vim /etc/systemd/system/ollama.service
Step 2: Add configuration
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Step 3: Reload and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl status ollama
After the configuration is complete, visit http://serverIP:11434/ in a browser; if it returns "Ollama is running", the configuration succeeded. Next, set the Ollama address in Chatbox.
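If the browser still cannot reach the server after the restart, test from an external machine and make sure the port is not blocked by a firewall (ufw is shown as an example; your distribution may use firewalld instead):
# From an external machine: should return "Ollama is running"
curl http://serverIP:11434/
# On the server: open the port if a ufw firewall is active
sudo ufw allow 11434/tcp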
After completing the above steps, we can use Ollama to deploy the DeepSeek large model locally.
In closing
Although the build was full of pitfalls, when I finally deployed the DeepSeek model successfully and saw it running normally and giving accurate answers, the sense of accomplishment was indescribable, like reaching the mountaintop after a hard climb and taking in the view.
"The ...
I have organized everything I know about microservice architecture into my new book, "Learn Spring Cloud Microservice Architecture from Scratch", now available in major online and offline bookstores. I look forward to your purchase, and to joining hands in this wonderful feast of learning microservice architecture!