Connecting Cherry Studio to a local Ollama instance: using the local mistral dialogue model and the bge-m3 embedding model

An in-depth look at how Cherry Studio connects seamlessly to Ollama models, making local AI dialogue more convenient!
Core content:
1. How to configure Cherry Studio to work with Ollama models
2. How local models are used in Cherry Studio and the advantages they bring
3. Friendly guidance and suggestions for technical newcomers
In yesterday's article, we covered how to download and install Ollama from its official website, configure the Windows environment variables, and open the firewall port. We then downloaded and ran the "mistral-small3.1" model (which supports LLM reasoning dialogue, vision, and tool use) and the "bge-m3" model (an embedding model that can be used for a knowledge base). With everything in place, let's see how to configure Cherry Studio, the "AI assistant" or "AI dialog box" that interacts directly with users, to work with these underlying models.
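Before wiring up Cherry Studio, it can help to confirm that the local Ollama service is reachable and that both models are actually available. Below is a minimal Python sketch, assuming Ollama is listening on its default address http://localhost:11434 and that the two models were pulled under the names used in this series; the endpoints are Ollama's standard REST API, but the model names here are just those from yesterday's article.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API address (assumption)
EXPECTED_MODELS = ["mistral-small3.1:latest", "bge-m3:latest"]  # names from yesterday's article

def list_local_models() -> list[str]:
    """Ask Ollama which models are installed locally via GET /api/tags."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

def embed_sample(text: str) -> int:
    """Run a quick embedding call with bge-m3 (the model intended for the knowledge base)."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": "bge-m3:latest", "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json()["embedding"])  # dimension of the returned vector

if __name__ == "__main__":
    models = list_local_models()
    print("Installed models:", models)
    for name in EXPECTED_MODELS:
        print(f"  {name}: {'OK' if name in models else 'MISSING'}")
    print("bge-m3 embedding dimension:", embed_sample("Hello from Cherry Studio"))
```

If both models show up and the embedding call returns a vector, the Ollama side is ready, and any remaining problems will be on the Cherry Studio configuration side.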
Finally, go to Cherry Studio's settings and set the "Default Model" to the "mistral-small3.1:latest" model served by Ollama.
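Under the hood, Cherry Studio talks to Ollama over its HTTP API, so you can reproduce what the dialog box does with a short request of your own. The sketch below is only an illustration under the same assumptions as before (default address, the mistral-small3.1:latest model chosen as the default): it sends one chat message and prints the reply.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API address (assumption)

def chat_once(prompt: str) -> str:
    """Send a single-turn chat request to the local mistral-small3.1 model."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": "mistral-small3.1:latest",   # the default model chosen in Cherry Studio
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,                      # return one complete JSON reply
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat_once("Introduce yourself in one sentence."))
```

If this returns a sensible answer, the same model will respond in Cherry Studio's dialog box.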
Now you can enjoy the convenience of a local model in the Cherry Studio dialog box, with no usage restrictions.