A discussion about AI programming and agents

Written by
Clara Bennett
Updated on: June 16, 2025
Recommendation

In-depth discussion of the technological changes and the evolution of front-end programming in the AI era.

Core content:
1. The technological changes from the rise of the Internet in the 1990s to the blossoming of AI applications
2. The development of the front-end technology stack and a review of personal programming experience
3. Exploration of the combination of front-end frameworks based on functional programming and large models

 
Yang Fangxian
Founder of 53A; Most Valuable Expert of Tencent Cloud (TVP)

TL;DR

Yesterday I saw some front-end discussions in a group chat and was reminded of LAMP, PHP, and jQuery; it felt like being back in 1998... When I was a kid, my uncle wanted to do CAD and my aunt wanted to trade stocks. In 1997 my family had a computer with a 33.6K modem, so I was exposed to the Internet very early... In fact, even earlier (1993), I had started learning LOGO/BASIC programming in elementary school under the slogan "learn computers from an early age", and then began competing in OI in junior high school... By chance I was caught up in that wave of the Internet bubble, and I happened to be a classmate of Qiyuan, the "No. 1 Internet kid in China" at the time, so of course I had the urge to write web pages and become a webmaster...

That era was very similar to the current AI application boom: everyone was trying out all kinds of applications in the face of a new technological shift. I remember very clearly that in 1999 there was a 72-hour Internet survival test [1] to verify how far the Internet could reach into daily life and online shopping. I vaguely remember many web page design competitions in that era, and all sorts of explorations of Internet applications were born. Coincidentally, Alibaba was also founded around that year. It is not hard to understand, then, the judgment and determination that Mr. Ma and Mr. Wu, as witnesses of that era, have shown in the AI era...

Today's MCP looks a lot like CGI back then: it seems to open up a universal execution interface, but it also brings many security risks. As times change, we will gradually discover the ugliness and imperfections of MCP... Over the past 20 years of the Internet, the front-end and back-end technology stacks have gone through many rounds of evolution, so I want to review them and discuss how to avoid some detours in the AI era...

Therefore, functional-programming-based front-end frameworks, such as React+Redux, combined with large models may be a more appropriate way to improve on MCP.

 

1. Some front-end memories

My front-end journey probably started in 1997, when I used FrontPage to write static pages and served them with PWS on Windows 95. Later I gradually moved to Perl CGI for dynamic content. The most common applications at that time were chat rooms and message boards, and of course there were plenty of JavaScript tricks for dressing up web pages. During that period I mainly worked on a financial website, which shut down when the Internet bubble burst... Some of this is recorded in an old article, "30 Years of Internet, Some Memories".

Later, in college, I followed some friends and wrote a few small projects in PHP. After starting work, I used Perl CGI for a long time to build internal performance-testing frameworks and customized network management systems for centralized procurement projects. Then my boss had me build big data platforms for Telemetry analysis: my colleagues worked on the backend, while I handled the frontend and the marketing side, writing MySQL, PHP, and jQuery for two years...

Later, front-end MVC became popular, and we started using Node.js to build a transaction monitoring and delivery order management system for quantitative private equity funds. We then gradually replaced Express.js with Koa2.js and used it for a long time. For the front-end framework, Cisco wanted to standardize on Angular internally, but I personally disliked it, so I used React, with Antd for some web components. Front-end/back-end interaction gradually moved from REST APIs to GraphQL, and we chose Redux to manage front-end interaction state. Functional programming seems to be a good choice for running complex state machines on the front end and simplifying back-end interactions.

Later we needed to provide web services on some embedded router platforms. Given the resource constraints, we rewrote some backend functions in Golang and integrated them with our streaming data processing platform...

That is roughly my experience with front-end projects over the past 20 years, from early CGI to the later React+Redux+GraphQL+Golang architecture. In recent years, though, I haven't paid much attention to front-end matters... It's just that I recently had some ideas about MCP and LLM code-generation tasks: how should large models be adapted to front-end frameworks in the AI era? And, on the other hand, how can we get LLMs to generate more effective code?

2. LLM interaction from the perspective of MVC

Front-end MVC frameworks are already very mature, but the code generated by LLMs still seems to have many deficiencies. In essence, the front end in the AI era is shifting from interaction between people and View components to more direct interaction between people and the Controller. Given the model's own autoregressive token generation, one idea emerges:

Perhaps because of my previous experience with React+Redux, I personally think that a functional-programming-based front-end MVC combined with LLM execution is a good choice.

For large models, an autoregressive model can generate a relatively stable chain of state-machine transitions. For web-side operations, the constraints of functional programming itself rule out side effects, so repeated executions and retries under model control are more deterministic. On the other hand, the pattern-matching style of functional programming maps more naturally onto model output.
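To make this concrete, here is a minimal sketch of the idea in TypeScript. Everything in it is hypothetical and for illustration only: the action types, the state shape, and the `llmProposedActions` array (standing in for actions parsed from a model's JSON output) are not from any real system.

```typescript
// Minimal sketch: an LLM proposes a chain of plain actions and a pure reducer
// replays them deterministically. All names below are hypothetical.

type Action =
  | { type: "ADD_ITEM"; payload: { id: string; name: string } }
  | { type: "REMOVE_ITEM"; payload: { id: string } }
  | { type: "SET_FILTER"; payload: { filter: string } };

interface State {
  items: { id: string; name: string }[];
  filter: string;
}

const initialState: State = { items: [], filter: "" };

// Pure reducer: the same (state, action) pair always yields the same next state.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "ADD_ITEM":
      return { ...state, items: [...state.items, action.payload] };
    case "REMOVE_ITEM":
      return { ...state, items: state.items.filter(i => i.id !== action.payload.id) };
    case "SET_FILTER":
      return { ...state, filter: action.payload.filter };
    default:
      return state;
  }
}

// Suppose the model emitted this action chain.
const llmProposedActions: Action[] = [
  { type: "ADD_ITEM", payload: { id: "1", name: "GPU server" } },
  { type: "SET_FILTER", payload: { filter: "GPU" } },
];

// Replaying the chain has no side effects, so retries under model control stay deterministic.
const finalState = llmProposedActions.reduce(reducer, initialState);
console.log(finalState); // { items: [{ id: "1", name: "GPU server" }], filter: "GPU" }
```

Because the reducer is pure, a proposed chain can be validated, replayed, or rolled back simply by re-running it against a copy of the state, which is exactly the kind of deterministic retry loop an agent needs.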

3. Advantages of combining large models with functional programming

In summary, the main advantages are:

  1. Code predictability: the explicit input/output relationships of pure functions make it easier for large models to understand and generate correct code.
  2. Declarative expression: the declarative nature of functional programming is closer to natural-language expression, making it easier for large models to understand requirements.
  3. Immutable data: simplifies state tracking, allowing large models to reason more accurately about state changes.
  4. Component composition: modularity and composability let large models build and understand complex interfaces.
  5. One-way data flow: provides a clear logical path that helps large models track data flow.
  6. High-level abstraction: lets large models work at different levels of abstraction and offer more flexible solutions.
  7. Test-friendliness: the testability of functional code lets large models generate high-quality, verifiable code.

This combination not only improves development efficiency but also raises code quality and maintainability and reduces the probability of errors, making it a powerful paradigm for software development.

3.1 Code Predictability and Reasoning Ability

Functional programming emphasizes pure functions: the same input always produces the same output, with no side effects. Redux's reducers are a typical application of pure functions. Large models can understand and generate this kind of functional code relatively easily because the input/output relationship is explicit, the logical flow is clear, and there is no hidden state, which makes model reasoning more accurate; the state-transition logic follows a consistent pattern.
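As a tiny illustration of the difference (both functions are hypothetical), contrast a pure function, whose output a model can predict from its arguments alone, with an impure one that depends on hidden external state:

```typescript
// Pure: the result depends only on the arguments; calling it twice with the
// same inputs always gives the same answer, which is easy for a model to reason about.
function applyDiscount(price: number, rate: number): number {
  return price * (1 - rate);
}

// Impure: the result depends on (and mutates) external state, so a model must
// track hidden history to predict what the next call returns.
let runningTotal = 0;
function addToTotal(price: number): number {
  runningTotal += price; // side effect
  return runningTotal;
}

console.log(applyDiscount(100, 0.2)); // 80, always
console.log(addToTotal(100)); // 100 the first time...
console.log(addToTotal(100)); // ...200 the second time: same input, different output
```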

3.2 Compatibility of declarative programming with natural language

Functional programming features: describe "what to do" rather than "how to do it". React's JSX is a typical declarative UI expression. Combined with large models, the advantages are (a small sketch follows this list):

  • Declarative code is closer to natural language description
  • Large models find it easier to understand the UI structure and component hierarchy
  • The conversion from requirement description to code implementation is more direct
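For instance, a requirement like "show a page title and a list of user names" maps almost word for word onto declarative JSX. The component and prop names below are hypothetical, just to show the shape of such code:

```tsx
import React from "react";

interface User {
  id: number;
  name: string;
}

// Declarative: the JSX states what the UI is, not how to build or update the DOM.
function UserList({ title, users }: { title: string; users: User[] }) {
  return (
    <section>
      <h1>{title}</h1>
      <ul>
        {users.map(user => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    </section>
  );
}

export default UserList;
```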

3.3 Immutable Data and State Tracking

Functional programming features:

  • Avoid modifying existing data, create new data instead
  • Redux enforces immutable update patterns

Large models can:

  • Track state changes more accurately
  • Understand data flow and transformations more easily
  • Generate code that complies with the principle of immutability and avoids common side effects
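A minimal sketch of the immutable-update pattern (the state shape and names are hypothetical): instead of mutating the existing state, each update returns a new object, so every intermediate state remains available for the model, or a debugger, to inspect.

```typescript
interface TodoState {
  todos: { id: number; text: string; done: boolean }[];
}

// Immutable update: return a new object instead of mutating the old one,
// so every intermediate state can be inspected, diffed, or replayed.
function toggleTodo(state: TodoState, id: number): TodoState {
  return {
    ...state,
    todos: state.todos.map(todo =>
      todo.id === id ? { ...todo, done: !todo.done } : todo
    ),
  };
}

const before: TodoState = { todos: [{ id: 1, text: "review PR", done: false }] };
const after = toggleTodo(before, 1);
console.log(before.todos[0].done, after.todos[0].done); // false true — the old state is untouched
```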

3.4 Component Composition and Code Generation

Functional programming features:

  • Build complex functions by combining small functions
  • React components are UI functions that can be combined to form complex interfaces

Advantages of combining with large models:

  • Understand the relationships between components and data flow
  • Generate component tree structure from high-level requirements
  • Separate concerns based on component responsibilities
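A small sketch of composition (all component names are hypothetical): small, single-purpose components are combined into a larger one, so a model can generate or modify each piece in isolation and then assemble them.

```tsx
import React from "react";

// Small, single-purpose components...
function Avatar({ url }: { url: string }) {
  return <img src={url} alt="avatar" width={32} />;
}

function UserName({ name }: { name: string }) {
  return <span>{name}</span>;
}

// ...composed into a larger one. Each piece can be generated, tested,
// or replaced independently.
function UserCard({ url, name }: { url: string; name: string }) {
  return (
    <div>
      <Avatar url={url} />
      <UserName name={name} />
    </div>
  );
}

export default UserCard;
```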

3.5 One-way Data Flow and Logical Reasoning

Functional programming features:

Redux implements a strict one-way data flow: Action → Reducer → State → View → Action

Large model advantages:

  • Ability to understand and generate code that conforms to unidirectional data flow
  • Ability to infer the complete path from user interaction to state change
  • Ability to trace logic chains when debugging problems
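The loop is easiest to see in code. The sketch below uses a deliberately tiny hand-rolled store rather than the real Redux API, purely to make the Action → Reducer → State → View cycle visible in a few lines; all names are hypothetical.

```typescript
// A tiny hand-rolled store (not the real Redux API) to make the one-way loop
// visible: Action → Reducer → State → View → next Action.
type CounterAction = { type: "INCREMENT" } | { type: "DECREMENT" };

function counterReducer(state: number, action: CounterAction): number {
  switch (action.type) {
    case "INCREMENT":
      return state + 1;
    case "DECREMENT":
      return state - 1;
    default:
      return state;
  }
}

function createTinyStore<S, A>(reducer: (state: S, action: A) => S, initial: S) {
  let state = initial;
  const listeners: Array<(s: S) => void> = [];
  return {
    getState: () => state,
    dispatch: (action: A) => {
      state = reducer(state, action);   // Action → Reducer → State
      listeners.forEach(l => l(state)); // State → View
    },
    subscribe: (listener: (s: S) => void) => listeners.push(listener),
  };
}

const store = createTinyStore(counterReducer, 0);
store.subscribe(s => console.log("view renders count =", s)); // stand-in for a React re-render
store.dispatch({ type: "INCREMENT" }); // user interaction produces an Action
store.dispatch({ type: "INCREMENT" });
store.dispatch({ type: "DECREMENT" }); // final render: count = 1
```

Because every state change must pass through `dispatch`, the complete path from a user interaction to the resulting state is a single, traceable chain.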

3.6 Higher-order functions and levels of abstraction

Functional programming features:

  • Creating abstractions using higher-order functions
  • Redux middleware and React higher-order components are both applications of higher-order functions

Large models can:

  • Understand code at different levels of abstraction
  • Generate higher-order functions that follow a given pattern
  • Help achieve code reuse and separation of concerns
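As a small illustration (the functions are hypothetical, and this is a simplified stand-in for how Redux middleware wraps `dispatch`, not the actual middleware API): a higher-order function that takes a dispatch function and returns an enhanced one.

```typescript
// A higher-order function: it takes a dispatch function and returns an
// enhanced dispatch function that logs every action before handling it.
type Dispatch<A> = (action: A) => void;

function withLogging<A>(dispatch: Dispatch<A>): Dispatch<A> {
  return (action: A) => {
    console.log("dispatching", action);
    dispatch(action);
  };
}

// Usage with any plain dispatch function:
const rawDispatch: Dispatch<{ type: string }> = action =>
  console.log("handled", action.type);

const loggedDispatch = withLogging(rawDispatch);
loggedDispatch({ type: "SAVE_DRAFT" }); // logs the action, then handles it
```

Real Redux middleware uses a curried form (store => next => action => ...), but the underlying idea is the same: functions that take and return functions.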

3.7 Test-friendliness and quality assurance

Functional programming features:

  • Pure functions are easy to test, just verify the input/output
  • Components and Redux logic can be tested independently

Large models can:

  • Generate test cases and expected results
  • Understand why tests fail and provide recommendations for fixes
  • Maintain consistency between tests and implementation
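For example, testing a pure reducer needs no mocks or setup: a test is just "given this state and this action, expect this new state". The reducer and assertions below are hypothetical, using Node's built-in assert module.

```typescript
import assert from "node:assert/strict";

// Hypothetical pure reducer under test.
type FilterAction = { type: "SET_FILTER"; payload: string };

function filterReducer(state: { filter: string }, action: FilterAction): { filter: string } {
  switch (action.type) {
    case "SET_FILTER":
      return { ...state, filter: action.payload };
    default:
      return state;
  }
}

const next = filterReducer({ filter: "" }, { type: "SET_FILTER", payload: "GPU" });
assert.deepEqual(next, { filter: "GPU" });
// Same input, same output: re-running the reducer yields a structurally identical state.
assert.deepEqual(filterReducer({ filter: "" }, { type: "SET_FILTER", payload: "GPU" }), next);
console.log("reducer tests passed");
```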