Deploy DeepSeek large models using Ollama

Written by Jasper Cole
Updated on: July 12, 2025

Master the whole process of deploying DeepSeek large models with Ollama, from environment setup to running the model, all in one article.

Core content:
1. Prerequisites and environment settings for deploying DeepSeek models
2. Installation and startup commands for Ollama and DeepSeek models
3. Ollama configuration on Windows and an installation guide for Open WebUI



Prerequisites 

If you have an NVIDIA graphics card, download and install the CUDA driver from:
https://developer.nvidia.com/cuda-downloads
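After the driver is installed, a quick way to confirm that the GPU is visible is the nvidia-smi tool that ships with the driver (the exact output depends on your card and driver version):

# Verify that the GPU and CUDA driver are detected
nvidia-smi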

Ollama 

Ollama official website: https://ollama.com/

My graphics card is in a Windows machine, so I used the Windows installer.
If your GPU machine runs Linux, you can install Ollama with the following command:

curl -fsSL https://ollama.com/install.sh | sh
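Whichever platform you install on, you can then confirm that the Ollama CLI is available (the reported version depends on the release you installed):

# Check that Ollama is installed and print its version
ollama --version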

Of course, Ollama can run not only the DeepSeek models but also many other models; the full list is available at:
https://ollama.com/search

# Model installation commands

# 1.5B DeepSeek R1 (Qwen distill)
# Requires about 1.1 GB of disk space
ollama run deepseek-r1:1.5b

# 7B DeepSeek R1 (Qwen distill)
# Requires about 4.7 GB of disk space
ollama run deepseek-r1:7b

# 8B DeepSeek R1 (Llama distill)
# Requires about 4.9 GB of disk space
ollama run deepseek-r1:8b

# 14B DeepSeek R1 (Qwen distill)
# Requires about 9 GB of disk space
ollama run deepseek-r1:14b

# 32B DeepSeek R1 (Qwen distill)
# Requires about 20 GB of disk space
ollama run deepseek-r1:32b

# 70B DeepSeek R1 (Llama distill)
# Requires about 43 GB of disk space
ollama run deepseek-r1:70b

# 671B DeepSeek R1 (full model)
# Requires about 404 GB of disk space
ollama run deepseek-r1:671b
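Once a model has been pulled, a couple of other Ollama commands are handy for managing what is stored locally; the 7B tag below is just an example:

# List the models installed locally, with their sizes
ollama list

# Remove a model you no longer need to free disk space
ollama rm deepseek-r1:7b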

# On Windows, set the environment variable OLLAMA_HOST to 0.0.0.0
# so that the service listens on all network interfaces, not just localhost

# Start the Ollama service
ollama serve
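With the service running (Ollama listens on port 11434 by default), you can do a quick sanity check from another terminal. This is a minimal sketch that assumes the 1.5B model has already been pulled with one of the commands above:

# Ask the local Ollama API for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Hello, who are you?",
  "stream": false
}'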

Open WebUI 

Official installation documentation: https://docs.openwebui.com/

The following is translated from the official Open WebUI documentation:

Notice:

When installing Open WebUI using Docker, make sure to include the following in the Docker command:

-v open-webui:/app/backend/data

This step is crucial: it ensures that your database is stored on a persistent volume, preventing data loss when the container is recreated.
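If you want to confirm where Docker keeps that named volume on the host, you can inspect it; the reported mountpoint path varies by system:

# Show the host location backing the open-webui volume
docker volume inspect open-webui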

Installation with the default configuration

1. If you have Ollama installed on your computer, you can use the following command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

2. If Ollama is on another server, use the following command:

When connecting to Ollama on another server, change OLLAMA_BASE_URL to the URL of your server:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

To run Open WebUI with Nvidia GPU support, use the following command:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

3. Installation for OpenAI API use only

If you are only using the OpenAI API, use the following command:

docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main 

4. Open WebUI installation with bundled Ollama support

This installation method uses a single container image that bundles Open WebUI with Ollama, enabling simplified setup with a single command. Choose the appropriate command based on your hardware setup:

With GPU support: take advantage of GPU resources by running the following command:

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama 

CPU only: If you are not using a GPU, use the following command instead:

docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama 

Both commands come with built-in, easy installation support for Open WebUI and Ollama, ensuring you can get everything up and running quickly.
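When a new image is published, one common way to upgrade is to remove the old container, pull the latest image, and re-run the same docker run command you used before; because all data lives in the open-webui volume, it survives the recreation. A minimal sketch using the container name from the commands above:

# Stop and remove the old container (the data volume is preserved)
docker stop open-webui
docker rm open-webui
# Pull the latest image, then re-run your original docker run command
docker pull ghcr.io/open-webui/open-webui:main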

# The command I used
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.1.100:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Visit: http://192.168.1.120:3000
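If the page does not come up, checking the container logs is usually the quickest way to see what went wrong:

# Follow the Open WebUI container logs
docker logs -f open-webui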