RAGFlow v0.16 deployment practice

Written by Audrey Miles
Updated on: July 15, 2025
Recommendation

RAGFlow v0.16 deployment guide, covering Docker and Kubernetes deployment techniques.

Core content:
1. Hardware requirements and steps for deploying RAGFlow with Docker
2. Kubernetes deployment documentation and large-scale deployment advantages
3. Ways to integrate RAGFlow with LLMs and related configuration techniques

Yang Fangxian, Founder of 53AI / Tencent Cloud Most Valuable Expert (TVP)

RAGFlow deployment methods


  • Deployment based on Docker:
    • Prerequisites: CPU ≥ 4 cores, RAM ≥ 16 GB, disk ≥ 50 GB, with Docker ≥ 24.0.0 and Docker Compose ≥ v2.26.1 installed.
    • Operation steps: Clone the RAGFlow repository, enter its docker folder, and start the server with the pre-built Docker image. Once startup succeeds, open the specified address in a browser to register and log in.
  • Deployment based on Kubernetes: RAGFlow officially provides documentation for Kubernetes deployment. This method suits large-scale distributed environments, making it easier to manage resources and scale services.
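Before the Docker deployment steps, the prerequisites above are worth verifying. A minimal shell sketch, assuming GNU `sort -V` is available; `ver_ge` is our own helper and the version strings passed to it are examples, not detected values:

```shell
#!/bin/sh
# ver_ge A B — succeeds if version A >= version B (relies on GNU sort -V).
ver_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

DOCKER_MIN="24.0.0"
COMPOSE_MIN="2.26.1"

# Example: an installed Docker 24.0.5 satisfies the minimum.
if ver_ge "24.0.5" "$DOCKER_MIN"; then
    echo "Docker 24.0.5 meets the minimum ($DOCKER_MIN)"
fi

# Live checks, skipped quietly when the tools or files are absent.
command -v docker >/dev/null 2>&1 && docker --version
[ -r /proc/sys/vm/max_map_count ] && cat /proc/sys/vm/max_map_count
echo "prerequisite check done"
```

The same `ver_ge` helper can be reused to compare the installed Docker Compose version against `$COMPOSE_MIN`.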


RAGFlow and LLM integration

  • Integration via Ollama:
    • Install Ollama: Ollama is an open-source tool for running large language models locally; it can create, run, and manage multiple models. After installation, use the ollama pull command to download the required model.
    • Add the LLM in RAGFlow: Find the model provider in the RAGFlow interface, select Ollama, and add the model. If RAGFlow itself runs in Docker, configure an Ollama address that is reachable from inside the container rather than localhost.
  • Integration via API:
    • Configure the LLM's API: In the RAGFlow interface, click the logo in the upper-right corner to open the Model providers page, click the required LLM, and update its API key.
    • Select the default models: Click System Model Settings to choose the default chat model, embedding model, etc.
  • Integration through locally deployed LLMs: Besides Ollama, RAGFlow also supports locally deployed LLMs via Xinference or LocalAI; the specific integration steps follow each tool's documentation.
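The Ollama address mapping mentioned above matters because `localhost` inside the RAGFlow container refers to the container itself, not the host. A sketch of choosing the base URL; the model name in the comment and the Linux `extra_hosts` remark are assumptions based on common Docker practice:

```shell
#!/bin/sh
# After installing Ollama, pull a model first, e.g.:
#   ollama pull llama3      # model name is just an example
# Ollama serves its API on port 11434 by default.

RAGFLOW_IN_DOCKER=true    # set according to your deployment

if [ "$RAGFLOW_IN_DOCKER" = true ]; then
    # host.docker.internal reaches the host on Docker Desktop; on Linux,
    # add `extra_hosts: ["host.docker.internal:host-gateway"]` in compose.
    OLLAMA_BASE_URL="http://host.docker.internal:11434"
else
    OLLAMA_BASE_URL="http://localhost:11434"
fi

echo "Ollama base URL for RAGFlow: $OLLAMA_BASE_URL"
```

Enter the printed base URL when adding the Ollama model in RAGFlow's Model providers page.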


Image mirror acceleration

Pull each image from the Aliyun mirror registry, then re-tag it to its upstream name:

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/docker.elastic.co.elasticsearch.elasticsearch:8.11.3 && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/docker.elastic.co.elasticsearch.elasticsearch:8.11.3 docker.elastic.co/elasticsearch/elasticsearch:8.11.3

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/infiniflow.ragflow:v0.16.0 && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/infiniflow.ragflow:v0.16.0 infiniflow/ragflow:v0.16.0

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/infiniflow.ragflow:v0.16.0-slim && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/infiniflow.ragflow:v0.16.0-slim infiniflow/ragflow:v0.16.0-slim

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/mysql:8.0.39 && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/mysql:8.0.39 mysql:8.0.39

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/valkey.valkey:8 && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/valkey.valkey:8 valkey/valkey:8

docker pull registry.cn-hangzhou.aliyuncs.com/megadotnet/quay.io.minio.minio:RELEASE.2023-12-20T01-00-02Z && docker tag registry.cn-hangzhou.aliyuncs.com/megadotnet/quay.io.minio.minio:RELEASE.2023-12-20T01-00-02Z quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z
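The pull-and-retag pattern above is mechanical, so it can be generated from a list. The sketch below only prints the commands (drop the `echo` to execute them); the mirror prefix and image names are the ones used above:

```shell
#!/bin/sh
MIRROR="registry.cn-hangzhou.aliyuncs.com/megadotnet"

# "mirror-path|upstream-name" pairs, matching the commands above.
IMAGES="
docker.elastic.co.elasticsearch.elasticsearch:8.11.3|docker.elastic.co/elasticsearch/elasticsearch:8.11.3
infiniflow.ragflow:v0.16.0|infiniflow/ragflow:v0.16.0
infiniflow.ragflow:v0.16.0-slim|infiniflow/ragflow:v0.16.0-slim
mysql:8.0.39|mysql:8.0.39
valkey.valkey:8|valkey/valkey:8
quay.io.minio.minio:RELEASE.2023-12-20T01-00-02Z|quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z
"

for pair in $IMAGES; do
    src="$MIRROR/${pair%%|*}"   # mirror path before the |
    dst="${pair#*|}"            # upstream name after the |
    echo "docker pull $src && docker tag $src $dst"
done
```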


The infiniflow/ragflow:v0.16.0 image takes up 9 GB of disk space.

Elasticsearch requires vm.max_map_count to be at least 262144:

$ sudo sysctl -w vm.max_map_count=262144

$ cd ragflow
$ docker compose -f docker/docker-compose.yml up -d

Installation Screenshots

Documentation


https://ragflow.io/docs/dev/

Initial resource usage

Configure API key

Q&A Dataset



The question-and-answer pairs are written into an Excel sheet: the first column holds the question, the second column the answer.
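A quick way to sanity-check the two-column layout is to build the file from the command line. CSV stands in for the Excel sheet here purely for illustration, and the Q&A rows are made up; the import expects the same first-column-question, second-column-answer shape:

```shell
#!/bin/sh
# Column 1: question, column 2: answer (example rows, not real data).
cat > qa_dataset.csv <<'EOF'
"What is RAGFlow?","An open-source RAG engine."
"Which port does Ollama listen on by default?","11434"
EOF

# Sanity check: every row should have exactly two quoted fields.
awk -F'","' 'NF != 2 { bad = 1 } END { print (bad ? "layout broken" : "layout ok") }' qa_dataset.csv
```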


Summary


1. Data security and privacy protection

Data Isolation

Deploying RAGFlow in a private Docker environment physically isolates data, ensuring that enterprise private data is not leaked to external or third parties.

Privacy Protection

For sensitive data or private information, a private Docker deployment provides stronger security and avoids the risks of transmitting and storing data on public networks.

2. Efficient use of resources

Save network bandwidth

A private Docker registry avoids downloading images from an external registry on every pull, saving network bandwidth.

Improve access speed

Since private registries are usually deployed on the company's internal network, RAGFlow images can be pulled faster, improving work efficiency.


3. Flexibility and Scalability

Customized requirements

A private Docker deployment allows enterprises to customize and optimize RAGFlow to meet their specific business requirements.

Scalability

As the business grows, a private Docker deployment makes it easy to scale RAGFlow's capacity and performance to accommodate growing data volumes and user demand.

4. Integration and Collaboration

Integration with other systems

A private Docker deployment makes it easier to integrate RAGFlow with other enterprise systems, enabling data sharing and collaborative work.

Teamwork

With a private Docker deployment, team members can more easily share and collaborate on RAGFlow-related tasks and data.

5. Cost Savings

Reduce long-term costs

While the initial cost of a private Docker deployment may be higher, in the long run it reduces overall costs by avoiding external-registry fees and the potential cost of data breaches.

Improve ROI

A private Docker deployment can improve RAGFlow's return on investment (ROI) through efficient resource utilization and customized configuration.