Obsidian AI Best Practices: OpenRouter Integrates 90+ Large Models with Zero Barrier to Entry

Written by
Caleb Hayes
Updated on: June 27, 2025

Obsidian AI and OpenRouter have joined forces to create a new intelligent knowledge-management experience, enabling seamless local calls to 90+ large models.

Core content:
1. The OpenRouter gateway integrates 93+ large models, enabling seamless cross-model workflows
2. Five-minute configuration, visual parameter tuning, and free credits
3. With the chatgpt-md plugin, conversation records are embedded in Markdown notes, doubling efficiency

Yang Fangxian
Founder of 53A/Most Valuable Expert of Tencent Cloud (TVP)

Are you still manually switching between AI platforms to process your notes? The golden combination of Obsidian + OpenRouter lets you seamlessly call 93+ large models such as Claude and GPT-4 right from your local vault. Five minutes of configuration gets you: automatically connected cross-model workflows, visual parameter tuning, free credits, and a way around regional access restrictions. Paired with the chatgpt-md plugin, conversation records are embedded directly into your Markdown notes, doubling the efficiency of knowledge management. This is my personal record of AI-plugin pitfalls, showing you how to build your own intelligent knowledge base.


As an Obsidian Deep Notes user, do you also face these pain points?

  • After organizing your notes, you have to jump to a third-party platform before you can call on AI for summaries and analysis?
  • You want to try top models such as Claude/GPT-4, but are blocked by network access restrictions?
  • When configuring Obsidian AI plugins, you constantly have to consult different API docs just to confirm model names?
  • Switching models requires manually editing the configuration, and core parameters such as temperature/top_p have nowhere to be adjusted?

Today we introduce a new solution. Through the OpenRouter gateway you can:
✓ Integrate 47+ mainstream AI service providers (Google, OpenAI, DeepSeek, Ollama, etc.)
✓ Call 93+ mainstream large models with one click (GPT-4, Claude, DeepSeek, etc.)
✓ Eliminate the hassle of regional restrictions and API key management
✓ Visually adjust all conversation parameters
✓ Achieve truly seamless cross-model workflows




The following article explains in detail how to build your all-round AI knowledge assistant in five minutes.

How to use large models

  1. App or website: many Chinese large-model apps (such as Doubao and Kimi) are completely free, and some overseas products (such as OpenAI and Gemini) offer free trials of their lower-tier models, but you need a network proxy (VPN) to reach them.
  2. API: integrate the large-model interface directly. On the provider's cloud service, just register and create a new API key.
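To make the API route concrete, here is a minimal sketch of what a chat request looks like, assuming an OpenAI-compatible chat-completions endpoint (OpenRouter exposes one at `https://openrouter.ai/api/v1/chat/completions`); the key and model name below are placeholders.

```python
# Build a chat-completions request for an OpenAI-compatible endpoint.
# The endpoint URL follows OpenRouter's documented API; the model name
# and API key are placeholders, not values from this article.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, user_prompt: str,
                       system_prompt: str = "I am a helpful assistant.") -> dict:
    """Return the URL, headers, and JSON body for a single-turn chat request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.3,
    }
    return {"url": API_URL, "headers": headers, "json": body}

# Switching models is just a matter of changing the `model` string:
req = build_chat_request("sk-...", "deepseek/deepseek-chat-v3-0324",
                         "Summarize today's notes in three bullet points.")
# To actually send it you would do, e.g.:
#   import requests
#   resp = requests.post(req["url"], headers=req["headers"], json=req["json"])
print(req["json"]["model"])
```

Because every provider behind the gateway speaks the same interface, this one function covers GPT-4, Claude, DeepSeek, and the rest.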

Benefits of API calls to large models

Apps suit ordinary users; APIs provide more flexibility and functionality and suit users who need deep customization.

  1. APIs can be deeply integrated into workflows and support automated tasks.
  2. APIs provide a larger input context window, overcoming the input limits of apps.
  3. APIs make it easy to switch seamlessly between models, improving flexibility.

What do I use large-model APIs for?

  1. Generate summaries of my diary/notes
  2. When first learning an unfamiliar field, initialize a knowledge graph, then dig into each point myself
  3. Restructure my messy spoken notes using structured methods
  4. Get revision suggestions for drafted content
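The use cases above map naturally onto a small set of prompt templates. The wording of the templates below is my own illustration, not from the article:

```python
# Hypothetical prompt templates for the four note-taking use cases above.
PROMPTS = {
    "summarize": "Summarize the following note in 3-5 bullet points:\n\n{text}",
    "knowledge_graph": ("I am new to {topic}. List the core concepts as a "
                        "bullet-point knowledge graph I can explore further."),
    "restructure": ("Rewrite the following messy spoken notes as a structured "
                    "outline with headings:\n\n{text}"),
    "review": "Suggest concrete improvements to the following draft:\n\n{text}",
}

def make_messages(task: str, **fields) -> list:
    """Turn a task name plus template fields into a chat `messages` list."""
    return [
        {"role": "system", "content": "I am a helpful assistant."},
        {"role": "user", "content": PROMPTS[task].format(**fields)},
    ]

msgs = make_messages("summarize", text="Met Bob. Discussed Q3 roadmap...")
print(msgs[1]["content"][:9])  # -> Summarize
```

Each `messages` list can then be sent to whichever model you have selected.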

Commonly used large model API service providers

  1. OpenRouter
    • Features: a wide variety of models, covering both commercial and open-source options.
    • Note: the free quota is relatively limited; only some models are tagged with the word "free".
  2. SiliconFlow
    • Features: provides free Qwen, Gemma, and SD (Stable Diffusion image-generation) series models.
    • Advantage: accessible directly from mainland China without a proxy.
  3. Google AI Studio (requires a proxy)
    • Features: offers its most advanced models (such as the Gemini series), with 1,500 free requests per day, rate-limited to 1 million tokens and 15 requests per minute.
    • Advantage: individual users usually get a free quota that never expires.
  4. Groq (requires a proxy)
    • Features: supports the Llama and Gemma series; most models offer a free quota of 500k tokens per day.

How to use large-model APIs for free

This article focuses on teaching everyone how to use OpenRouter to fleece the capitalists: all of the service providers above can be connected through it. OpenRouter is a large-model API router designed to aggregate various AI model services into a unified interface. By lowering the barrier to using AI technology, OpenRouter lets more people easily choose the right large model to solve practical problems.

OpenRouter model rankings

Model token rankings: https://openrouter.ai/rankings

You can pick the best models to try based on how many tokens each large model is being called with.

OpenRouter free-usage limits

• Free model versions: limited to model IDs ending in :free, at most 20 requests per minute.
• Balance below $10: daily request limit of 50.
• Balance of $10 or more: daily request limit raised to 1,000.
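The limits above are easy to encode. This is a small sketch (the function names are mine, and I treat exactly $10 as qualifying for the higher tier, which the tier description leaves slightly ambiguous):

```python
def is_free_model(model_id: str) -> bool:
    """Free variants on OpenRouter are identified by an ID ending in ':free'."""
    return model_id.endswith(":free")

def daily_request_limit(balance_usd: float) -> int:
    """Daily request limit for free models, per the tiers described above.

    Assumption: a balance of exactly $10 gets the higher limit.
    """
    return 1000 if balance_usd >= 10 else 50

print(is_free_model("deepseek/deepseek-chat-v3-0324:free"))  # -> True
print(daily_request_limit(5), daily_request_limit(25))       # -> 50 1000
```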

OpenRouter third-party provider integration

Integration address: https://openrouter.ai/settings/integrations

For example, I integrated Google AI Studio, Groq, and Anthropic: these providers offer free quotas, and routing them through OpenRouter avoids the need for a proxy. There are also domestic options such as DeepSeek, which is cheap and good.

In the integration settings, enable "Use this key as a fallback": when a rate limit is reached or a request fails, OpenRouter will fall back to your own API key for that provider.

How to use AI smoothly in Obsidian

Obsidian AI plugin

• Plugin ID: chatgpt-md
• Project address: https://github.com/bramses/chatgpt-md
• Project description: integrates ChatGPT and Obsidian (almost) seamlessly.
• Supports three types of model services; each AI service has its own dedicated URL parameter in settings and note metadata:
  • openaiUrl for the OpenAI API
  • openrouterUrl for OpenRouter.ai
  • ollamaUrl for local Ollama models
• Features:
  • Create notes from prompt templates
  • Use YAML frontmatter parameters as large-model parameters, and switch models from an online list via a command
  • Conversation records are stored in the note as role:user and role:assistant turns
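Because each turn is stored as plain text in the note, the conversation is easy to process programmatically. A rough sketch, assuming each turn starts with a marker line such as `role::user` or `role::assistant` (the plugin's exact delimiter may differ slightly from this):

```python
import re

# Assumption: each turn begins with a line like "role::user" or
# "role::assistant"; adjust TURN_RE if your notes use a different marker.
TURN_RE = re.compile(r"^role::(user|assistant)\s*$", re.MULTILINE)

def parse_turns(note_text: str) -> list:
    """Split a note body into [{'role': ..., 'content': ...}] messages."""
    parts = TURN_RE.split(note_text)
    # parts = [preamble, role1, content1, role2, content2, ...]
    messages = []
    for role, content in zip(parts[1::2], parts[2::2]):
        messages.append({"role": role, "content": content.strip()})
    return messages

note = """role::user
Summarize my meeting notes.
role::assistant
Here are the key points...
"""
print(parse_turns(note))
```

This is handy if you ever want to export a note's chat history back into API-ready `messages` form.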

chatgpt-md usage examples

Press ctrl+p and run the model-switching command, then filter for deepseek in the model list.

You can refer to OpenRouter's model rankings when choosing a model.

chatgpt-md quick start

Get started in just a few easy steps:

1. Install ChatGPT MD: go to Settings > Community Plugins > Browse, search for ChatGPT MD, and click Install.
2. Create an API key on the OpenRouter website.
3. Add the API key in the plugin settings, or install Ollama and local LLMs of your choice.
4. Start chatting: run the ChatGPT MD: Chat command (cmd+p or ctrl+p) from any note.

Adjusting large-model generation parameters

Ordinary users can keep the default parameters.

The YAML parameters in the note header are passed as the large-model parameters; you can switch models from the online list via a command.

---
system_commands:  [ 'I am a helpful assistant.' ]
temperature: 0.3
top_p: 1
max_tokens: 4096
presence_penalty: 0.5
frequency_penalty: 0.5
stream: true
n: 1
model: openrouter@deepseek/deepseek-chat-v3-0324

# Service-specific URLs (optional; global settings are used if not specified)
openaiUrl: https://api.openai.com
# openrouterUrl: https://openrouter.ai
# ollamaUrl: http://localhost:11434
---
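The plugin parses this frontmatter for you, but if you want to read the same parameters in your own scripts, a minimal stdlib-only reader for simple `key: value` frontmatter looks like this (it deliberately skips nested YAML such as the `system_commands` list):

```python
def read_frontmatter(note_text: str) -> dict:
    """Parse simple `key: value` pairs between the leading '---' fences.

    A sketch only: real YAML (lists, quoting, nesting) needs a YAML parser.
    """
    lines = note_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    params = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if line.lstrip().startswith("#") or ":" not in line:
            continue  # skip comments and non key:value lines
        key, _, value = line.partition(":")
        params[key.strip()] = value.strip()
    return params

note = """---
temperature: 0.3
max_tokens: 4096
model: openrouter@deepseek/deepseek-chat-v3-0324
---
Note body...
"""
fm = read_frontmatter(note)
print(fm["model"])  # -> openrouter@deepseek/deepseek-chat-v3-0324
```

Note how the `model` value encodes both the service (`openrouter@`) and the model ID, which is how the plugin knows which URL parameter to route the request through.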