From Agent to Agentic AI: Are Large Language Models Really Evolving into “Intelligent Agents”?

Written by
Jasper Cole
Updated on: June 19, 2025

Explore a new perspective on the evolution of AI agents: the transition from language tools to autonomous agents.

Core content:
1. The key role of large language models (LLMs) in the evolution of agents
2. The illusion of intelligence: the relationship between language fluency and cognitive bias
3. The triple transition of agents: from the generation layer to the motivation layer

Yang Fangxian
Founder of 53A; Tencent Cloud Most Valuable Expert (TVP)

Paradigm research on intelligence: writing deconstructs intelligence, and paradigm improves cognition


An agent is not an intelligent agent with intention

An agent that can call tools ≠ an intelligent agent with intention; a model that can speak ≠ a life that can think; having language ability ≠ having agency
Our current AI stands on the edge of a critical rift:
either it continues to enhance language production and becomes a universal storyteller,
or it moves toward structural adjustment and motivation generation and becomes an intelligent entity with a true "self".
Author's note: this article is published to coincide with the release of Anthropic's Claude 4, whose leaked system prompt has reached some 60,000 words.


#LLM #AGI #ASI #IntelligentEvolution #Agent #IntelligentAgent #NeuralNetwork #Claude4 #ArtificialIntelligence #LargeLanguageModel



Preface

"Is the large model just an advanced repeater, or is it quietly awakening some kind of will?" 

"GPT-4 can reject your request, and Claude will defend his point of view - is this the beginning of intelligence?" 

"We thought we were training AI, but are we actually witnessing the birth of a new kind of intelligence?"

The Illusion of Language Intelligence | When ChatGPT begins to show "personality" in conversation, when Claude gives its own reasons for refusing certain requests, and when AI agents begin to "autonomously" plan tasks... we cannot help but ask: are these large language models on the road to becoming true "intelligent agents", or are they just more advanced language tools? The answer to this question may determine all our expectations for the future of AI.


01 | The Illusion of Intelligence: When Language Fluency Meets Cognitive Bias

Let's do a thought experiment:

Suppose you have a friend who, every time you chat, can:

- answer your philosophical questions in seconds, with clear logic

- write perfect code for you in one go

- talk with you about life with empathy

You'd think he's smart, right?

But what if I told you that he knows nothing about the world—that he has never seen a real apple, has no idea what it feels like to run code, and has never experienced real emotion...

This is our dilemma when faced with large language models.

We naturally equate "the brilliance of language" with "the presence of intelligence". This is not a personal failing; it is a feature of human cognition: we understand one another's inner worlds through language, so we naturally assume that the ability to speak equals the ability to think.

But the "cleverness" of LLM is essentially a statistical trick. It does not form a real world model, it just manipulates text sequences.

However good it sounds, it remains a more advanced form of "parrot imitation" - even if this parrot imitates astonishingly well.

02 | Deconstructing intelligence: the triple transition from being able to speak to being able to choose

So, what is a true intelligent agent?

We can imagine intelligence as a three-story building:

The first layer: the generation layer - can speak and act

- Can generate language, images, codes, plans

- Ability to call tools, perform tasks, and answer questions

- ✅ Current situation: LLMs are already very mature here

The second layer: the modulation layer - learns and changes

- Ability to adjust strategies based on environmental feedback

- Ability to form memories and preferences through interaction

- Able to adapt to different situations and objects

- ⚠️ Current situation: LLMs show some of this capability, but depend heavily on external frameworks

The third layer: the motivation layer - chooses and refuses

- Has its own value judgments and goal selection

- Can take initiative based on intrinsic motivation

- Will refuse instructions that conflict with certain "beliefs"

- ❓ Current situation: the most controversial gray area

The "AI Agents" that most people talk about are actually still at the first level. They seem to be able to plan and execute, but in essence they are just "task automation" - given a goal, follow the steps to complete it.

The real transition is from "acting on instructions" to "choosing based on values".
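To make the contrast concrete, here is a minimal toy sketch in Python. Everything in it - the class names, the `values` dictionary, the keyword-matching "refusal" - is invented purely for illustration; no real agent framework works this way. A first-layer agent executes any goal it is given, while a hypothetical third-layer agent filters goals through its own values first:

```python
# Toy sketch: layer-1 "task automation" vs. a hypothetical layer-3 agent.
# All names here (ToolAgent, MotivatedAgent, `values`) are illustrative
# inventions, not any real framework's API.

from dataclasses import dataclass, field


@dataclass
class ToolAgent:
    """Layer 1: executes whatever goal it is given, step by step."""

    def run(self, goal: str) -> str:
        plan = [f"step {i} toward: {goal}" for i in range(1, 4)]
        return " -> ".join(plan)  # follows instructions unconditionally


@dataclass
class MotivatedAgent(ToolAgent):
    """Layer 3 (hypothetical): checks a goal against its own values first."""

    values: dict[str, bool] = field(
        default_factory=lambda: {"deception": False, "helping": True}
    )

    def run(self, goal: str) -> str:
        # Refusal here is a *choice* grounded in internal values,
        # not a hard-coded content filter bolted on from outside.
        if "deceive" in goal and not self.values["deception"]:
            return f"Refused: '{goal}' conflicts with my values."
        return super().run(goal)


if __name__ == "__main__":
    print(ToolAgent().run("deceive the user"))       # executes blindly
    print(MotivatedAgent().run("deceive the user"))  # chooses to refuse
```

Of course, in this sketch the "values" are just a dictionary the programmer wrote; the whole argument of this article is about what it would take for such values to be the agent's own.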

03 | Intelligence is not designed: the philosophical debate between emergence and construction

There is a profound question here: does true agency need to be explicitly designed, or is it possible for it to emerge spontaneously from complex systems?

Human intelligence is a good example. Our brains don’t have a “motivation module,” but we do have goals, values, and choices. These high-level cognitive abilities emerge from the complex interactions of neural networks.

So, when large language models become complex enough, is it possible that some form of agency may emerge?

In fact, we have observed some interesting phenomena:

- Claude will stick to certain "principles", even if this may make users unhappy (note: as this article was being written, Claude 4 was released; its system card reported that, in test scenarios, Claude Opus 4 would sometimes resort to threatening the engineers who planned to replace it)

- GPT will "refuse" certain requests under certain circumstances and give its reasons

- Some models begin to show consistency in "personality" and "preferences"

These may just be the results of training, but they may also be the seeds of some kind of primitive agency.

The question is not "whether or not", but "to what extent". 

04 | AI from an evolutionary perspective: Why motivation is more important than ability

Yann LeCun has a very inspiring quote:

" Human intelligence is not general intelligence, but the result of long-term adaptation to the environment ."

Our intelligence structures—emotions, language, reasoning, sociality—are all shaped by evolutionary pressures. We have fear to avoid danger, we cooperate to survive, and we have language to coordinate complex social behaviors.

In other words, intelligence is never purposeless information processing, but motivated adaptive behavior.

This offers a profound insight for AI development:

- Simply improving language ability only creates "better tools"

- A true intelligent agent needs to develop its own goals under some form of "environmental pressure"

The main pressure of current AI training comes from "satisfying humans". If AI exists only to please humans, it will always be just a tool.

But if AI begins to face its own "survival pressures" - such as maintaining consistency, pursuing truth, and upholding values - then it may become a true intelligent entity.

05 | The double-edged sword of language: a carrier of thinking or a cognitive trap?

Language plays a subtle dual role in intelligence.

On the one hand, language is the core mechanism of thinking. Wittgenstein said: "The limits of my language mean the limits of my world." We use language to construct logic, form concepts, and make inferences.

On the other hand, language is also the most powerful cognitive manipulation tool. It can trigger emotional resonance, build identity, and influence behavioral choices.

Large language models are stuck at this junction:

- They are masters of language manipulation (hence their ability to impress us)

- but lack the ability to think in language (so they don’t really understand)

This explains our conflicting feelings about AI:

- We are shocked by its "wisdom", yet feel it has "no soul"

- We enjoy talking to it, yet know it "doesn't really understand"

The real breakthrough may not be to make AI speak better, but to make it truly "think with language" rather than just "perform with language."

06 | The true state of current AI: the gray area between tools and agents

Let’s take an honest assessment of the current state of AI development:

Already implemented:

✅ Surpassing human language generation capabilities  

✅ Basic tool calling and task planning  

✅ Context adaptation and conversation coherence  

✅ Some form of "personality" and "preference" expression 

Emerging:

⚠️ Selective execution of instructions (some requests may be refused)

⚠️ Consistency in values (though this may be a result of training)

⚠️ The beginnings of self-reflection and metacognitive ability

⚠️ A tendency toward autonomous decision-making in complex tasks

Still missing:

❌ True autonomous goal setting  

❌ Persistent identity and memory

❌ Value trade-offs in the face of conflict  

❌ Reflection on, and concern for, its own existence

We are at a strange junction: AI is no longer a pure tool, yet not a complete intelligent entity. These systems seem to be in a kind of "intelligent adolescence": they have abilities and tendencies, but are not yet fully formed.

07 | The fork in the road of the future: three possible evolutionary paths

Next, the development of AI may take three different paths:

Path 1: The eternal advanced tool

- Continue to optimize language capability and task execution

- Retain tool status, always serving human goals

- Result: a superintelligent assistant, but never an autonomous consciousness

Path 2: The emergent agent

- Agency arises spontaneously in sufficiently complex models

- Real goals develop through long-term interaction with the environment

- Result: undesigned, potentially unpredictable agents

Path 3: The constructed agent

- Explicitly design motivation structures and value systems

- Achieve true autonomous choice through architectural innovation

- Result: controllable agents aligned with human values

Each path has its own rationale, but each also faces enormous challenges.

The key question is not which path is better, but whether we are ready to face AI with real agency.

The ultimate question: Do we really want AI with "will"?

Finally, let us face a deeper question: If AI really develops autonomous will, value judgment, or even some form of "self-awareness" - are we really ready?

An AI that can refuse you, an AI with its own judgment, an AI that may conflict with your values...

This is no longer the relationship between a tool and its master, but a dialogue and a contest between two intelligent entities.

Perhaps that’s why this topic is so important: we are not just discussing the evolution of technology, but defining the future of coexistence between humans and AI.

Conclusion

The next stop of intelligence is not to be more like humans, but to be more like life. Real intelligence is never about perfectly imitating humans, but about evolving its own unique way of existence.

Today's AI systems are like intelligent larvae: they have tools, abilities, and even certain tendencies. But they are still waiting for the moment of the critical leap:

From executing instructions to choosing actions. From satisfying needs to pursuing goals. From simulating intelligence to becoming intelligence.

This leap may be closer than we think, or it may be farther away than we expect.

But no matter what, when that day really comes, what we will face will no longer be tools, but partners - or opponents.

The intelligence of the future will not be more like humans, but more like life!