Rig Agents: A high-level LLM orchestration framework

Explore the efficient management and integration strategies of Rig Agents in LLM application development.
Core content:
1. The core concept and value of Rig Agents
2. Analysis of the core structure and components of Agents
3. Usage methods and sample codes in actual application development
In LLM application development, how do you efficiently manage models, contexts, and tools to build powerful agents? Rig provides a high-level LLM orchestration framework that helps developers easily integrate RAG (retrieval-augmented generation), tool calls, and custom configurations. This article analyzes the core concepts, usage, and best practices of Rig Agents to help you build AI applications ranging from basic chatbots to complex RAG knowledge question-and-answer systems.
What are Rig Agents?
Rig Agents are the Rig framework's high-level abstraction over LLMs, providing a modular and extensible way to manage the behavior of AI agents. An Agent combines a model, context, tools, and configuration to adapt to a wide range of AI application scenarios.
The core structure of an Agent
An Agent mainly consists of the following parts (a builder sketch tying them together follows the list):
1. Basic components
Completion Model (such as GPT-4, Claude)
System Prompt
Configuration (temperature, max tokens, etc.)
2. Context Management
Static context: documents that are always available
Dynamic context: relevant information retrieved from the knowledge base via RAG
Vector store integration: enables semantic search
3. Tool Integration
Static tools: capabilities that are always available (e.g. calculator, translator)
Dynamic tools: tools provided on demand based on contextual needs
ToolSet: unified management of an Agent's tools
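To make these parts concrete, here is a minimal builder sketch showing where each component plugs in. It is illustrative only: `client`, `static_doc`, `calculator`, `index`, `tool_index`, and `toolset` are assumed to be defined elsewhere, and the exact builder methods should be checked against your rig-core version.

// Illustrative mapping of the components above onto rig's AgentBuilder.
let agent = client.agent("gpt-4")               // completion model
    .preamble("You are a helpful assistant.")   // system prompt
    .temperature(0.5)                           // configuration
    .max_tokens(1024)                           // configuration
    .context(static_doc)                        // static context (&str document)
    .dynamic_context(3, index)                  // dynamic context via RAG
    .tool(calculator)                           // static tool
    .dynamic_tools(2, tool_index, toolset)      // dynamic tools
    .build();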
How to use Rig Agents
1. Create a basic Agent
The following code shows how to use Rig to create a simple AI assistant:
use rig::providers::openai;
use rig::completion::Prompt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the OpenAI-compatible client at a local Ollama server
    let client = openai::Client::from_url("ollama", "http://localhost:11434/v1");

    let agent = client.agent("qwen2:7b")
        .preamble("You are a helpful assistant.")
        .build();

    let response = agent.prompt("halo").await?;
    println!("{:#?}", response);

    Ok(())
}
2. Create a RAG Agent
RAG (Retrieval-Augmented Generation) lets the AI dynamically retrieve information from a knowledge base, improving the accuracy of its answers.
use rig::{providers::openai, Embed};
use serde::Serialize;
use rig::completion::Prompt;
use rig::embeddings::EmbeddingsBuilder;
use rig::vector_store::in_memory_store::InMemoryVectorStore;

// Embeddings are generated from the field tagged #[embed]
#[derive(Embed, Serialize, Clone, Debug, Eq, PartialEq, Default)]
struct WordDefinition {
    id: String,
    word: String,
    #[embed]
    definitions: Vec<String>,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_url("ollama", "http://localhost:11434/v1");
    let embedding_model = client.embedding_model("qwen2:7b");

    // Embed the documents that will back the dynamic context
    let embeddings = EmbeddingsBuilder::new(embedding_model.clone())
        .documents(vec![
            WordDefinition {
                id: "doc0".to_string(),
                word: "flurbo".to_string(),
                definitions: vec![
                    "1. *flurbo* (noun): A flurbo is a green alien that lives on cold planets.".to_string(),
                    "2. *flurbo* (noun): A fictional digital currency that originated in the animated series Rick and Morty.".to_string()
                ]
            },
            WordDefinition {
                id: "doc1".to_string(),
                word: "glarb-glarb".to_string(),
                definitions: vec![
                    "1. *glarb-glarb* (noun): A glarb-glarb is an ancient tool used by the ancestors of the inhabitants of planet Jiro to farm the land.".to_string(),
                    "2. *glarb-glarb* (noun): A fictional creature found in the distant, swampy marshlands of the planet Glibbo in the Andromeda galaxy.".to_string()
                ]
            },
            WordDefinition {
                id: "doc2".to_string(),
                word: "linglingdong".to_string(),
                definitions: vec![
                    "1. *linglingdong* (noun): A term used by inhabitants of the far side of the moon to describe humans.".to_string(),
                    "2. *linglingdong* (noun): A rare, mystical instrument crafted by the ancient monks of the Nebulon Mountain Ranges on the planet Quarm.".to_string()
                ]
            },
        ])?
        .build()
        .await?;

    // Build an in-memory vector store and an index over it
    let vector_store = InMemoryVectorStore::from_documents(embeddings);
    let index = vector_store.index(embedding_model);

    let rag_agent = client.agent("qwen2:7b")
        .preamble("
            You are a dictionary assistant here to assist the user in understanding the meaning of words.
            You will find additional non-standard word definitions that could be useful below.
        ")
        // Retrieve the 1 most relevant document for each prompt
        .dynamic_context(1, index)
        .build();

    let resp = rag_agent.prompt("what does \"glarb-glarb\" mean?").await?;
    println!("{}", resp);

    Ok(())
}
In this agent, every time the user asks a question, the agent automatically queries the vector store, retrieves the most relevant documents, and sends them to the model together with the question, improving the accuracy and contextual relevance of the answer.
⚠️ Note: You need to enable the derive feature when adding rig-core, otherwise you will get an error saying that the Embed macro cannot be found.
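For reference, the dependency declaration in Cargo.toml might look like the following (version numbers are illustrative; check crates.io for the current release):

[dependencies]
# "derive" is required for #[derive(Embed)]
rig-core = { version = "0.5", features = ["derive"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }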
3. Create an Agent with Tools
In addition to RAG, Rig also supports a Tool-Augmented mode, which enables the Agent to call external APIs or perform calculations.
// Create an Agent that supports tool calls.
// Fragment: `calculator`, `web_search`, `tool_index`, and `toolset`
// are assumed to be defined elsewhere.
let agent = openai.agent("gpt-4")
    .preamble("You are a capable assistant with tools.")
    .tool(calculator)                       // static tool
    .tool(web_search)                       // static tool
    .dynamic_tools(2, tool_index, toolset)  // dynamic tools
    .build();
The role of the tool system:
Static tools: core capabilities that are always available, such as mathematical calculations or weather queries.
Dynamic tools: provided on demand based on semantic search, such as smart contract queries or database queries.
Core Features of Rig Agents
1. Dynamic context parsing
In the RAG scenario, the Agent needs to do the following (a retrieval sketch follows the list):
Analyze user questions
Query the vector store to find the most relevant documents
Combine the search results to generate the final answer
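Step 2 corresponds roughly to a top-n query against the vector index. Here is a minimal sketch, assuming the `index` and `WordDefinition` type from the RAG example above, and that your rig-core version exposes `VectorStoreIndex::top_n` with this shape:

use rig::vector_store::VectorStoreIndex;

// Roughly what dynamic_context(n, index) does before each completion:
// embed the question, fetch the n closest documents, and prepend them
// to the prompt. (Runs inside an async context that returns a Result.)
let results = index
    .top_n::<WordDefinition>("what does \"glarb-glarb\" mean?", 1)
    .await?; // Vec of (score, id, document)
for (score, id, doc) in results {
    println!("{id} (score {score}): {:?}", doc.definitions);
}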
2. Tool Management
The Agent's tool system supports (a sample tool implementation follows the list):
Static & Dynamic Tool Management
Automatic parsing of LLM tool calls
Error handling and exception recovery
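To show what a static tool looks like, here is a sketch of a simple calculator tool, closely following the calculator example in the rig-core repository (it also uses the thiserror crate); exact trait signatures can differ between versions, so treat it as illustrative:

use rig::{completion::ToolDefinition, tool::Tool};
use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Deserialize)]
struct OperationArgs {
    x: i32,
    y: i32,
}

#[derive(Debug, thiserror::Error)]
#[error("Math error")]
struct MathError;

#[derive(Deserialize, Serialize)]
struct Adder;

impl Tool for Adder {
    const NAME: &'static str = "add";
    type Error = MathError;
    type Args = OperationArgs;
    type Output = i32;

    // Describes the tool to the LLM: name, description, and a JSON schema
    // for the arguments the model must produce.
    async fn definition(&self, _prompt: String) -> ToolDefinition {
        ToolDefinition {
            name: Self::NAME.to_string(),
            description: "Add x and y together".to_string(),
            parameters: json!({
                "type": "object",
                "properties": {
                    "x": { "type": "number", "description": "The first number to add" },
                    "y": { "type": "number", "description": "The second number to add" }
                }
            }),
        }
    }

    // Invoked when the LLM emits a matching tool call; a returned Err
    // surfaces to the agent's error handling instead of crashing.
    async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
        Ok(args.x + args.y)
    }
}

Such a tool is then attached with .tool(Adder) when building the agent.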
Best Practices
1. Context Management
Keep static context to a minimum to reduce token consumption.
Use dynamic context (RAG) to handle large-scale knowledge bases and improve answer quality.
2. Tool Integration
Static tools are suitable for core functions such as mathematical calculations and translation.
Dynamic tools are suitable for context-sensitive functions such as API queries and database operations.
Make sure your tools have adequate error handling to prevent runtime crashes.
3. Performance Optimization
Tune the sample count for dynamic context (the first argument of dynamic_context) to avoid unnecessary computation.
Adjust the temperature parameter to the task: roughly 0.3-0.5 for question-answering tasks and 0.7-0.9 for creative writing tasks.
Monitor token usage to avoid exceeding LLM limits.
Common application patterns
1. Conversational AI Assistant
let chat_agent = openai.agent("gpt-4")
    .preamble("You are a conversational assistant.")
    .temperature(0.9)
    .build();
2. RAG Knowledge Base
let kb_agent = openai.agent("gpt-4")
    .preamble("You are a knowledge base assistant.")
    .dynamic_context(5, document_store)
    .temperature(0.3)
    .build();
3. Tool-centric Agent
let tool_agent = openai.agent("gpt-4")
    .preamble("You are a tool-using assistant.")
    .tool(calculator)
    .tool(web_search)
    .dynamic_tools(2, tool_store, toolset)
    .temperature(0.5)
    .build();
Summary
Rig Agents provide a modular and extensible way to manage LLM applications, supporting RAG, tool calls, and flexible configuration, which makes them an ideal choice for building advanced AI agents. Whether you are building a chat assistant, a knowledge-base question-and-answer system, or a tool-augmented agent, Rig provides strong support.