Local DeepSeek deployment: security risk self-inspection and hardening solution

Written by
Silas Grey
Updated on: July 15, 2025
Recommendation

An in-depth look at the security risks of locally deployed DeepSeek and the hardening measures that keep AI deployments safe.

Core content:
1. Security risk analysis of local deployment of DeepSeek
2. Security issues caused by Ollama service configuration
3. Specific methods for implementing security self-inspection and reinforcement

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)



Brief Introduction

    With the rapid development of artificial intelligence technology, demand for deploying large AI models such as DeepSeek locally is growing. However, while enjoying the convenience that AI technology brings, we cannot ignore the potential security risks behind it.

    Recently, reports of security risks in local DeepSeek deployments have drawn attention. The Ollama service, a widely used open-source framework, has come under scrutiny for security risks caused by its configuration. This article analyzes the issue and provides a hardening solution.


Cause of the security risk

What is Ollama?

    Ollama is an open-source framework for quickly deploying and running large language models (LLMs). It lets users easily deploy and run AI models locally, in public clouds, or in private clouds. However, this convenience also brings potential security risks.

The core security issue

    Some Ollama deployments bind the service to the 0.0.0.0 address and listen on TCP port 11434. This means that anyone who can reach that IP address can connect to your models over the network.

Attackers can discover Ollama service ports exposed to the Internet with methods such as:

cyberspace mapping engines, vulnerability scanners, and port scanning tools.

    Security risk: Ollama service port exposed to the Internet
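Before reaching for a full port scanner, a quick local probe can tell you whether anything on the machine is listening on Ollama's default port. This is only a sketch using bash's `/dev/tcp` pseudo-device and a helper function of my own (`port_open`); tools such as nmap or `ss -tlnp` give far more detail:

```shell
#!/usr/bin/env bash
# Sketch: probe whether a TCP port is reachable via bash's /dev/tcp.
# 11434 is Ollama's default port; the host shown here is an example.
port_open() {
  local host=$1 port=$2
  # Open (and implicitly close) a TCP connection on file descriptor 3;
  # the subshell's exit status tells us whether the connect succeeded.
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

if port_open 127.0.0.1 11434; then
  echo "TCP 11434 is reachable - verify it is not exposed beyond localhost"
else
  echo "TCP 11434 is not reachable locally"
fi
```

Run the same probe against the machine's public IP address from another host to confirm whether the port is visible externally.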


Security risk self-inspection

Method 1

Check the command executed when Ollama starts, for example (Windows):

set OLLAMA_HOST=0.0.0.0

Method 2

Check the Ollama configuration file for the following entry:

"host":"0.0.0.0"

Method 3

Check the system environment variables:

OLLAMA_HOST=0.0.0.0

    The above three methods are provided for security self-inspection. If host=0.0.0.0 is found by any of them, there is a security risk and hardening is required. Refer to the following:
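The environment-variable check (Method 3) can be scripted. The sketch below is my own; the function name `check_ollama_host` is not part of Ollama, and it assumes Ollama's documented default of binding to 127.0.0.1 when OLLAMA_HOST is unset:

```shell
#!/usr/bin/env bash
# Sketch of the Method 3 self-check: flag a risk when OLLAMA_HOST
# is the all-interfaces address 0.0.0.0 (with or without a port).
check_ollama_host() {
  case "$1" in
    0.0.0.0|0.0.0.0:*) echo "RISK: bound to all interfaces" ;;
    "")                echo "OK: OLLAMA_HOST unset (defaults to 127.0.0.1)" ;;
    *)                 echo "OK: bound to $1" ;;
  esac
}

# Check the current environment
check_ollama_host "${OLLAMA_HOST:-}"
```

For Methods 1 and 2, extend the script to grep your startup command or configuration file for the same `0.0.0.0` value.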

Security hardening
Method 1
Modify the IP address
Change the Ollama host address to the loopback or an intranet IP address, for example 127.0.0.1 or 192.168.x.x.
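On Linux installs that run Ollama under systemd, one way to apply this is a drop-in override that sets OLLAMA_HOST. This is a sketch: the unit name `ollama.service` matches the official install script, but verify what your system uses:

```shell
# Create a systemd drop-in that binds Ollama to loopback only
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=127.0.0.1"
EOF
# Reload unit files and restart the service so the change takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
```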
Method 2
Windows operating system
Run wf.msc

Windows Defender Firewall

Right-click "Inbound Rules" → "New Rule"

Select "Port" → TCP → Specific local ports: 11434

Select "Block the connection" → apply the rule to all profiles (Domain, Private, Public)

Method 3

Debian Ubuntu Kali operating system

Using iptables

sudo iptables -A INPUT -p tcp --dport 11434 -j DROP

Method 4

RedHat CentOS operating system

Using firewall-cmd

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="11434" protocol="tcp" reject'

sudo firewall-cmd --reload

    The above Windows and Linux methods for closing service port 11434 are provided for reference only.
    If you need to open port 11434 to a specific IP address, write the relevant commands accordingly.
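As a reference sketch only: with iptables, allowing a single trusted client while blocking everyone else can look like the following, where 192.168.1.10 is a placeholder address. Rule order matters, since iptables evaluates rules top to bottom:

```shell
# Accept the trusted client first, then drop all other traffic to 11434
sudo iptables -A INPUT -p tcp --dport 11434 -s 192.168.1.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 11434 -j DROP
```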

Summary

    By implementing the above hardening measures, you can significantly reduce the security risks of locally deployed DeepSeek services and build a more secure and reliable AI model operating environment. Here are my personal suggestions:

1. Follow the principle of minimization: open only necessary network ports and limit the access scope.

2. Inspect and update regularly: keep an eye on new Ollama releases and apply updates promptly.

3. Multi-layer protection strategy: Combine firewalls, IP whitelists and other means to build a multi-level security protection system.