Connecting knowledge points, real LLM Agents are coming soon

Explore cutting-edge techniques from the AI Programming Bootcamp and gain insight into the disruptive potential of LLM agents.
Core content:
1. Diverse subject backgrounds and inspiration from AI Programming Bootcamp N8
2. Explanation of MCP principles and desktop software development
3. Definition, characteristics, and future trends of LLM agents
This weekend, we held the N8 AI Programming Bootcamp. On site there were stock trading tools, knowledge tools, scientific research assistants, scenic spot guides, image editors, personal IP positioning, TikTok hot spot tracking, home libraries, document assistants, question-solving assistants, and more. The mix of participants from different subject backgrounds gave me a lot of inspiration.
In this issue, we focus on the principles and usage of MCP, and also begin to explain how to develop desktop software.
Everyone can build their own desktop software; software can become a "commodity" that anyone can easily produce.
If you are interested in AI programming, you can follow our N9 session, or contact me on WeChat: litnmnm
What is an LLM agent? According to Anthropic's definition, an agent is a system in which the model dynamically directs its own processes and tool usage.
www.anthropic.com/research/building-effective-agents
OpenAI's Deep Research (released in early 2025) and Claude 3.7 Sonnet are considered early examples of true LLM agents.
Alexander Doria recently published his thoughts on AI agents, arguing that "real LLM agents are coming soon."
LLM agents are fundamentally different from the common workflow-based systems currently available.
These new agents are able to plan, remember, and efficiently execute multi-step, long-term tasks.
Unlike workflow systems built on predefined rules and prompts, true LLM agents can dynamically direct their own processes and tool usage. This overcomes the limitations of traditional methods in scalability and long-term effectiveness, and is expected to bring disruptive changes across many fields.
To realize a true LLM agent, it is necessary to adopt a method that combines reinforcement learning (RL) and reasoning.
That's right: we still need to train the model and build dynamic planning capability into the model itself. The agents currently on the market that are built from presets and prompt engineering are only a transitional form.
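To make the distinction concrete, here is a minimal, illustrative sketch (my own, not from the article or from Anthropic) contrasting a predefined workflow with an agent loop in which the model decides the next step. The helpers call_llm and call_tool are hypothetical stubs standing in for a real model API and a tool layer.

```python
# Minimal, illustrative sketch of workflow vs. agent loop; not from the article.
# call_llm and call_tool are hypothetical stubs standing in for a model API and tools.

def call_llm(prompt: str) -> str:
    return "FINAL:stub answer"          # stub so the sketch runs end to end

def call_tool(name: str, arguments: str) -> str:
    return f"(stub result of {name})"   # stub tool execution


def workflow_pipeline(question: str) -> str:
    """Workflow style: the steps and prompts are fixed in advance by the developer."""
    keywords = call_llm(f"Extract search keywords from: {question}")
    documents = call_tool("search", keywords)
    return call_llm(f"Answer '{question}' using these documents:\n{documents}")


def agent_loop(question: str, max_steps: int = 10) -> str:
    """Agent style: at each step the model decides which tool to call, or when to stop."""
    history = [f"Task: {question}"]
    for _ in range(max_steps):
        decision = call_llm(
            "Given the task and history below, reply with either "
            "TOOL:<name>:<arguments> or FINAL:<answer>.\n" + "\n".join(history)
        )
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):]
        _, name, arguments = decision.split(":", 2)
        history.append(f"Tool {name} returned: {call_tool(name, arguments)}")
    return call_llm("Give the best answer so far:\n" + "\n".join(history))


print(agent_loop("What is MCP?"))
```

The key difference is that in the second function the control flow itself is produced by the model at each step rather than being fixed in advance by the developer.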
The same article also mentions market and investment trends, such as the prediction that closed-source AI models may stop offering API services within 2-3 years, which could have a profound impact on the business model of the AI industry.
Connecting "unknown" knowledge points is becoming more important.
In his latest interview, Altman mentioned that in the past we valued the amount of knowledge accumulated in the brain: the more knowledge a person stored, the smarter they appeared and the easier it was to gain respect. But now, connectors are more valuable than knowledge collectors.
It is true: LLMs have mastered a great deal of knowledge that we can call upon at any time, but how to connect different knowledge points varies from person to person. If you don't even know that a certain knowledge point exists, you naturally won't think of connecting it.
In other words, how does knowledge that is unknown (to you as an individual) get connected, and how does it relate to the knowledge you are already familiar with?
Now, with LLMs, we can explore that unknown knowledge. If you can synthesize it and recognize patterns, you have an advantage.
This also ties in closely with my impromptu presentation at the AI Programming Bootcamp this morning:
“
Help me make a knowledge point tool. The requirements are as follows:
Input module:
Text input box: allows users to paste or enter text directly.
Knowledge point display and selection module:
Recommended knowledge points list: displays knowledge points extracted or recommended from the input text in a list or other form.
Knowledge point selection mechanism: allows users to check or select knowledge points of interest.
Knowledge point introduction display module: displays an introduction to the selected knowledge points, including analogies, metaphors, and speculative guidance.
Save module:
"Save" button: triggers the save operation when the user clicks it.
Knowledge base display: provides a chat interface for answering questions from the saved knowledge points.
”
I chose to build the knowledge tool with the "knowledge point" as the core, expanding knowledge points through the LLM. After selection, the tool calls the MCP-Memory Server for storage and later retrieval.
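As a rough sketch of that flow (my own placeholder code, not the author's implementation; the function names are invented for illustration), the tool can be thought of as four steps wired together, with the final step handing data to the MCP-Memory Server:

```python
# Rough sketch of the knowledge point tool flow; function names are my own placeholders.

def extract_knowledge_points(text: str) -> list[str]:
    """Placeholder: ask the LLM to list the knowledge points found in the input text."""
    return ["knowledge point A", "knowledge point B"]   # stubbed result

def introduce_point(point: str) -> str:
    """Placeholder: ask the LLM for an introduction with analogies, metaphors,
    and speculative guidance for one knowledge point."""
    return f"Introduction to {point} ..."

def save_to_memory(point: str, introduction: str) -> None:
    """Placeholder: in the actual tool this step calls the MCP-Memory Server."""
    print(f"saved: {point}")

def run(text: str, selected_indexes: list[int]) -> None:
    points = extract_knowledge_points(text)      # input + recommended-list modules
    for i in selected_indexes:                   # selection module
        intro = introduce_point(points[i])       # introduction display module
        save_to_memory(points[i], intro)         # save module -> MCP-Memory Server

run("pasted text about transformers and attention", selected_indexes=[0])
```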
I also optimized the MCP-Memory Server. It is now packaged as an exe file that supports both SSE and stdio transport modes.
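For reference, here is a minimal sketch of how a client could talk to such a server in each mode using the MCP Python SDK. The exe name, SSE URL, and port below are placeholders I have assumed, since the article doesn't give them.

```python
# Sketch: connecting to a packaged MCP memory server over stdio or SSE.
# The exe name and SSE URL below are placeholders, not the author's actual values.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.sse import sse_client
from mcp.client.stdio import stdio_client


async def via_stdio() -> None:
    # stdio mode: the client launches the exe as a child process and talks over pipes.
    params = StdioServerParameters(command="mcp-memory-server.exe", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print(await session.list_tools())


async def via_sse() -> None:
    # SSE mode: the exe runs as a standalone HTTP server and the client connects to it.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            print(await session.list_tools())


asyncio.run(via_stdio())
```

In general, stdio is convenient when the client launches the exe as a child process (for example, a desktop app), while SSE lets an already-running server be shared over HTTP by multiple clients.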