Deploying "Memory": AI Vendors Quietly Compete on Personalization

Written by
Clara Bennett
Updated on: June 30, 2025

OpenAI's latest reasoning models, o3 and o4-mini, have been released, ushering in a new era of personalized AI.

Core content:
1. OpenAI's new models ship alongside an update to ChatGPT's memory function, enhancing the personalized search experience
2. User-controlled memory, with upgraded privacy protections
3. Explicit "saved memories" and implicit "chat history references" form a multi-level personalization strategy

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)


On April 17, 2025, Beijing time, OpenAI launched the powerful reasoning model o3 and the lightweight but efficient o4-mini.

It is worth noting that this release coincides with an update to ChatGPT's memory and search strategy, which leverages user conversation history to deliver more personalized and useful search results.

This simultaneous release is no accident; it reflects OpenAI's strategic intent to enhance the overall user experience by combining advanced reasoning capabilities with personalized contextual understanding.


The background to OpenAI's move is that the rapid development of large language models (LLMs) is pushing human-computer interaction toward a new era of more personalized, context-aware exchanges. Major AI vendors have introduced "memory" functions, marking the transformation of LLMs from stateless tools into intelligent partners capable of building long-term user relationships.


Breaking down OpenAI's "memory" strategy update


How memory enhances search: OpenAI's new strategy focuses on leveraging information in the model's memory to refine the user's search query. For example, when a user has the memory feature turned on and asks ChatGPT "What are some good restaurants near me?", if ChatGPT remembers that the user is a vegetarian who lives in San Francisco, it may rewrite the prompt to "good vegetarian restaurants in San Francisco". The mechanism analyzes the current query and, combined with stored memories, identifies the user's implicit needs and preferences.
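The rewriting step described above can be sketched in a few lines. This is an illustrative sketch only; the `MEMORY` dictionary and `rewrite_query` function are assumptions for exposition, not OpenAI's actual implementation.

```python
# Hypothetical sketch of memory-informed query rewriting.
# `MEMORY` and `rewrite_query` are illustrative names, not OpenAI's API.

MEMORY = {
    "diet": "vegetarian",
    "city": "San Francisco",
}

def rewrite_query(query: str, memory: dict) -> str:
    """Expand a vague query using remembered user context."""
    if "restaurants near me" in query.lower():
        diet = memory.get("diet", "")
        city = memory.get("city", "my area")
        return f"good {diet} restaurants in {city}".replace("  ", " ").strip()
    return query  # no relevant memory: pass the query through unchanged

print(rewrite_query("What are some good restaurants near me?", MEMORY))
# -> good vegetarian restaurants in San Francisco
```

In a production system the rewrite would itself be produced by the model conditioned on retrieved memories, rather than by hand-written rules; the sketch only shows the shape of the transformation.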

Additionally, these models can now reference past chats to provide more personalized responses based on the user's preferences and interests. The improved memory feature can remember specific preferences and apply them in future interactions.



User control and privacy: OpenAI gives users control over the memory function. Users can turn memory on or off at any time in settings, view and delete specific memories, or clear all memories. For conversations they do not want remembered or used for model training, users can choose the "temporary chat" mode.

ChatGPT's memory feature consists of two main aspects: "reference saved memories" (information that the user explicitly asks to be remembered) and "reference chat history" (insights gleaned from past conversations). Users can manage these settings independently, but turning off "reference saved memories" will also disable "reference chat history".
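The one-way dependency between the two toggles can be modeled in a few lines. This is an illustrative sketch of the described behavior, not OpenAI's settings code; the class and method names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MemorySettings:
    """Illustrative model of ChatGPT's two memory toggles (not OpenAI code)."""
    saved_memories: bool = True   # "reference saved memories"
    chat_history: bool = True     # "reference chat history"

    def set_saved_memories(self, enabled: bool) -> None:
        self.saved_memories = enabled
        if not enabled:
            # Per the described behavior, disabling saved memories
            # also disables chat-history references.
            self.chat_history = False

settings = MemorySettings()
settings.set_saved_memories(False)
print(settings.chat_history)  # -> False
```

Note the asymmetry: re-enabling saved memories does not automatically re-enable chat-history references, so the user retains independent control of the implicit tier.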

OpenAI states that ChatGPT will not proactively remember sensitive information, such as health details, unless the user explicitly requests it. Users can ask ChatGPT at any time what it remembers about them. In addition, through data-control options, users can choose whether to allow OpenAI to use their conversations and memories to improve the model.



OpenAI is introducing both explicit "saved memories" and implicit "chat history references", which embody a multi-level approach to personalization. Explicit memories provide the AI with consistent, user-defined context, while chat history references allow the AI to adjust dynamically based on the flow of conversation. This dual system lets the AI both actively personalize based on what the user tells it and passively learn from the user's natural interactions, gaining a more comprehensive and detailed understanding of the user over time.
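The two-tier design above can be sketched as a small data structure: one store for explicit, user-stated facts and one for implicit observations accumulated from conversation. All names here are assumptions for exposition, not any vendor's API.

```python
# Illustrative two-tier memory store: explicit "saved memories" the user
# dictates, plus implicit insights mined from chat history.
from collections import defaultdict

class DualMemory:
    def __init__(self):
        self.saved = {}                   # explicit, user-stated facts
        self.inferred = defaultdict(int)  # implicit, counted observations

    def remember(self, key: str, value: str) -> None:
        """Explicit path: the user asked for this to be saved."""
        self.saved[key] = value

    def observe(self, signal: str) -> None:
        """Implicit path: accumulate evidence from conversation."""
        self.inferred[signal] += 1

    def context(self, min_evidence: int = 2) -> dict:
        """Merge both tiers; only well-supported inferences are used."""
        ctx = {s: "inferred" for s, n in self.inferred.items() if n >= min_evidence}
        ctx.update(self.saved)            # explicit facts take precedence
        return ctx

m = DualMemory()
m.remember("diet", "vegetarian")
m.observe("likes hiking"); m.observe("likes hiking")
m.observe("mentioned jazz once")
print(m.context())
```

The evidence threshold illustrates why implicit memory is "passive": a single stray mention should not become a lasting belief about the user.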

In addition, OpenAI's emphasis on user control reflects users' growing concerns about AI data privacy and the scrutiny of regulators. By providing users with granular control over their memory data, including the option to disable entire features or delete specific memories, OpenAI aims to address potential privacy issues and build user trust.

It is worth noting that the memory feature initially launched for paid users (Plus and Pro), which may be a strategy to position advanced personalization as a premium service. By first offering memory to its most active user group, OpenAI can evaluate its impact and collect feedback before potentially expanding it to free users. This is in line with the trend of AI companies offering tiered services with different levels of functionality.



Current status of the "memory" function of mainstream large models


OpenAI is not the only company exploring LLM memory capabilities. Other major AI vendors are also actively developing and launching similar features to enhance user experience and personalized service capabilities.


Google Gemini: Gemini is now able to recall past chats to provide more useful replies, and it will use information from related conversations to construct responses. Users can ask Gemini to summarize previous conversations or ask about topics that were previously discussed. Gemini also offers a "saved information" feature, where users can explicitly tell the AI specific details to remember. Users can manage their Gemini app activity, including deleting conversations or setting time periods for automatic deletion. Google emphasizes that personal data from Gmail and other private apps will not be used to train Gemini's generative model. Gemini's memory feature is available to both free and paid (Gemini Advanced) users, but some advanced features (such as recalling past chats) are initially limited to premium subscribers.



Microsoft Copilot: Copilot is introducing a new personalized memory feature that recalls details of a user's personal life across conversations, learning the user's likes, dislikes, and personal information with the user's permission. Users can manage or delete stored data through the dashboard, and can opt out of the feature entirely. Copilot's memory is designed to understand the user's life context in order to provide tailored solutions and proactive suggestions. Microsoft emphasizes privacy and security, stating that Copilot only displays organizational data that individual users are authorized to view, and does not use this data to train the underlying language model. Users can manage their Copilot activity history in the privacy dashboard.



Anthropic Claude: Claude has introduced memory features, including the ability of its coding tool Claude Code to remember preferences across sessions. Claude Code offers different memory storage locations: project memory (shared with the team), local project memory (personal), and user memory (global personal preferences). Anthropic has also introduced an "extended thinking mode" for Claude models, allowing the model to reason longer about complex problems, which can be thought of as enhanced short-term memory or processing capacity. Claude retains user inputs and outputs for a period of time and gives users the option to delete the data. Anthropic states that it does not use user inputs to train models by default, with some exceptions such as feedback or reported content.



The following table compares some key aspects of the "memory" capabilities of the major large models:

| Functional Dimension | OpenAI ChatGPT | Google Gemini | Microsoft Copilot | Anthropic Claude |
| --- | --- | --- | --- | --- |
| Implementation | Reference saved memories; reference chat history | Recall past chats; explicitly saved information | Recall personal life details; learn user preferences across conversations | Project- and user-level memory storage; extended thinking mode |
| User controls | Turn memory on/off; view/delete specific memories; clear all memories; temporary chat | Manage app activity; delete conversations; set auto-delete; explicitly tell it to remember/forget | Manage/delete stored data; opt out entirely; manage activity history | Project- and user-level memory management; edit memory files |
| Privacy policy | Does not proactively remember sensitive information; users choose whether data is used for model improvement | Private data such as Gmail is not used to train models; users can manage activity | Shows only data users are authorized to view; not used to train the underlying model; users can manage activity | User inputs not used to train the model by default; users can delete data |
| Availability | Mainly Plus and Pro users; free users can use saved memories | Free and paid users; some advanced features limited to paid tiers | All users | Mainly Claude Code users |



These mainstream AI vendors are all actively deploying "memory" functions, but they differ in implementation and focus.

OpenAI and Google appear to be more focused on broader conversational memory, while Microsoft emphasizes personalized memory across users' digital lives, and Anthropic's memory features are more explicitly tied to specific tools (such as Claude Code). These differences suggest that different companies are developing their memory features for slightly different use cases and user needs. OpenAI and Google aim to provide a more natural and coherent conversational experience, Microsoft seeks to build a more comprehensive personal AI assistant, and Anthropic focuses on enhancing productivity in specific areas. The relatively close release dates of these memory features (from late 2024 to mid-2025) reflect the fierce competition in the AI assistant market.

As LLMs become more sophisticated, the ability to remember user context is becoming a key differentiator. Companies are racing to offer this capability to attract and retain users as they recognize the importance of memory in creating more valuable and engaging AI experiences. The differences in privacy policies and user control options between platforms also highlight the ongoing debate and evolving standards around AI data processing. Users need to understand these differences in order to make informed decisions about their privacy.


The driving force and value behind the “memory” function

The introduction of the "memory" function in large language models is no accident; there are many driving factors and potential values behind it.


Improved user experience: Memory capabilities enable AI to provide more relevant, personalized, and contextually aware responses, resulting in more natural and efficient interactions. Users no longer need to repeat information or re-establish context in every interaction.



Enhanced model usefulness: By remembering users' preferences and past interactions, LLMs become more useful for a variety of tasks such as writing, getting advice, learning, and brainstorming. This increases the practical value of AI assistants in everyday life and professional workflows.



Build user stickiness: AI assistants with memory can foster a stronger sense of connection and a more personalized experience, making users more likely to use them long term. This creates a more "partner-like" experience.



Competitive advantage in the personalization race: In the rapidly developing field of AI, providing sophisticated memory functions can bring significant competitive advantages to AI vendors. Personalization is becoming a key battlefield for attracting and retaining users.



Improved efficiency: Memory can help AI systems avoid redundant computation and provide more direct, efficient solutions based on past knowledge.


The development and implementation of memory capabilities in LLMs is being driven directly by user expectations: users increasingly expect technology to adapt to their individual needs and preferences. This expectation is pushing AI developers to incorporate memory capabilities that enable LLMs to learn and personalize interactions over time. Competition in the personalization space shows that AI vendors recognize the limitations of general-purpose, stateless models and view memory as a key differentiator for future success. As the AI assistant market matures, companies are looking for ways to stand out. Memory presents a significant opportunity to deliver a more compelling and valuable user experience, potentially leading to market-share gains for the companies that implement it effectively.



Opportunities and challenges brought by the “memory” function

While the "Memory" function brings many conveniences to users, it also comes with some challenges that need to be taken seriously.


Opportunities:

  • More precise search results:
     Leveraging memory can enable AI to understand user intent more accurately, producing more relevant and efficient search results.
  • Smoother conversation experience:
     Memory makes conversations more natural and coherent, reduces the need for repeated explanations, and improves the flow of interactions.
  • Enhanced creativity and brainstorming:
     AI that remembers past ideas and preferences can be a more effective partner in creative tasks and brainstorming sessions.
  • Personalized learning and tutoring:
     Memory could enable AI to tailor educational content and adjust teaching styles to an individual's learning needs and progress.
  • Simplified task automation:
     By remembering user workflows and preferences, AI can automate tasks more efficiently and anticipate user needs.

Challenges:

  • Privacy concerns:
     Storing user conversation histories and personal information raises significant privacy concerns about data security, potential misuse, and unauthorized access. Users may be reluctant to share sensitive information with an AI that remembers it.
  • Data security risks:
     Storing large amounts of personal data in AI systems makes them a potential target for cyberattacks and data breaches.
  • Algorithmic bias:
     If an AI's memory reinforces biases present in its training data or user interactions, it could lead to unfair or discriminatory outcomes.
  • Data management complexity:
     Managing and organizing user-specific memories at scale poses significant technical challenges in storage, retrieval, and processing.
  • Potential for "false memories" or inaccuracies:
     AI systems may misinterpret or misremember information, producing inaccurate or nonsensical responses.
  • User trust and transparency:
     Users need to trust that their memory data is handled responsibly and transparently; a lack of clarity about how memories are used can erode that trust.

The benefits of memory for personalization and efficiency stand in tension with significant privacy and security challenges, and AI vendors must navigate this carefully. While users appreciate the convenience of personalized AI, they are also increasingly concerned about the privacy implications of sharing data. AI companies need to balance memory capabilities with strong privacy protections to maintain user trust. Growing memory capacity and usage in LLMs may also directly drive demand for more advanced memory chip technology.

As AI models become larger and more reliant on memory for personalization and contextual understanding, the need for high-bandwidth and efficient memory solutions will grow, impacting the memory industry. Ethical considerations around AI memory, such as bias and the potential for misuse of personal information, will require continued attention, as well as the development of best practices and regulations. As AI becomes more deeply integrated into our lives, addressing the ethical implications of capabilities such as memory will be critical to ensure these technologies are used responsibly and for the benefit of society.



Expert perspective: AI personalization trend and the status of "memory"


Experts generally agree that hyper-personalization is a major trend in AI for 2025 and beyond. AI models are becoming increasingly powerful at processing large amounts of data to understand consumer behavior and customize experiences. Memory is seen as a key enabler of this hyper-personalization, allowing AI to go beyond surface-level interactions and build deeper, more meaningful relationships with users.



AI ethicists such as Dr. Emily Chen have expressed concerns about the privacy implications of widespread memory and personalization, especially when data from social media platforms is accessed without explicit consent. Some experts have highlighted the potential for AI companions with memory to provide valuable support in areas such as memory care, offering personalized interactions and insights.


Other experts stress that while AI can enhance personalization, human oversight and expertise remain critical, especially for legal considerations and ensuring ethical use. Mustafa Suleyman argues that persistent memory is a key milestone for the next generation of AI assistants, bridging the gap between machine intelligence and the ability to act on behalf of users.



Experts mostly acknowledge that AI personalization is an important and accelerating trend, in which memory plays a central role. However, this enthusiasm is tempered by ethical concerns, especially around privacy and data control. They also warn of the risks of unchecked data collection and erosion of user privacy, emphasizing the need for responsible development and deployment.

While memory is considered necessary for deeper personalization, some experts point to current AI limitations in truly understanding and using memory the way humans do, suggesting further advances are needed.


Although AI can now store and recall information, its ability to connect memories, make inferences, and flexibly apply memories in new situations still lags behind human cognition. This suggests that "memory" in AI is currently more like advanced data storage and retrieval than true understanding and recall.

Expert opinions emphasize the multidisciplinary nature of AI development, which requires collaboration between engineers, ethicists, and social scientists to ensure that advances in personalization are technically feasible and ethical. Building AI with memory capabilities involves not only technical challenges, but also complex ethical considerations related to privacy, bias, and potential social impacts. Addressing these issues requires a holistic approach that integrates insights from various fields.



Future Outlook: Development Direction of LLM "Memory" and Personalization


Future LLMs may build more sophisticated user profiles by integrating data from a variety of sources and interactions. We can expect smarter contextual understanding, where AI can not only remember past interactions but also better grasp the nuances and intent behind them. LLMs may be able to provide more proactive service recommendations based on a long-term understanding of user needs and preferences, and users may eventually be able to personalize the appearance and personality of their AI companions.

The development of persistent AI memory will bring advantages across industries, including improved decision-making, reduced computational overhead, and enhanced user experience. Researchers are exploring ways to improve the memory efficiency and scalability of LLMs, possibly through techniques such as memory networks and external knowledge bases. Integrating different types of memory (such as short-term and long-term memory, or episodic and semantic memory) may produce more sophisticated AI systems.
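The distinction between short-term, episodic, and semantic memory can be sketched as a layered data structure. This is a conceptual illustration borrowing cognitive-science terms; the class and field names are assumptions, not a real framework.

```python
# Illustrative sketch of layered agent memory (not a real framework):
# short-term = bounded recent context; episodic = specific past events;
# semantic = distilled, durable facts.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # Short-term: a bounded window of the most recent turns.
    short_term: deque = field(default_factory=lambda: deque(maxlen=5))
    # Long-term episodic: specific past events ("what happened when").
    episodic: list = field(default_factory=list)
    # Long-term semantic: distilled facts about the user or world.
    semantic: dict = field(default_factory=dict)

    def add_turn(self, turn: str) -> None:
        self.short_term.append(turn)

    def log_event(self, event: str) -> None:
        self.episodic.append(event)

    def learn_fact(self, key: str, value: str) -> None:
        self.semantic[key] = value

mem = AgentMemory()
for i in range(7):
    mem.add_turn(f"turn {i}")
mem.log_event("user planned a trip in April")
mem.learn_fact("diet", "vegetarian")
print(list(mem.short_term))  # only the 5 most recent turns survive
```

The bounded deque mirrors a context window, while the episodic and semantic stores persist across sessions; the open research problem is deciding what graduates from the short-term window into long-term memory.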



The future of LLM memory and personalization is likely to involve a more seamless and intuitive integration into users' lives, with AI proactively anticipating needs and providing tailored support. As AI models become better at understanding and remembering user context, their assistance will feel more natural and less transactional, and AI could become a more integral part of everyday tasks and decision-making.

The development of AI memory borrows from human memory, with distinctions drawn between different types of memory and their functions. Researchers are taking inspiration from cognitive science to design more sophisticated memory architectures that aim to replicate the efficiency and flexibility of human memory in understanding and responding to the world. At the same time, as AI becomes increasingly adept at understanding and remembering our preferences, there is a risk of reinforcing existing biases and limiting exposure to different perspectives, which highlights the need to carefully consider the social implications of advanced personalization.



"Memory" empowers AI, and personalized competition is upgraded

The introduction of memory in large language models marks a significant advance in the pursuit of more personalized and contextually aware AI.

OpenAI’s latest strategy, combining powerful new models with enhanced memory and search capabilities, highlights the importance of personalization in the future of AI. While different vendors such as Google, Microsoft, and Anthropic are also actively developing their own memory solutions, each with a unique approach and focus, the underlying trend is clear: memory is becoming a key differentiator in the field of AI. The advantages of memory, such as improved user experience, enhanced model utility, and more precise search results, are very compelling. However, significant challenges related to privacy, security, and ethical considerations must also be addressed.

As AI continues to advance, we can expect more sophisticated memory and personalization capabilities, resulting in AI assistants that are more intuitive, proactive, and more deeply integrated into our lives.

The “secret war” for personalization is intensifying, and memory is a key weapon in this battle. The development of memory in AI is a double-edged sword, offering great potential for enhancing user experience and utility, but also posing significant risks to privacy and data security. The future of personalized AI will depend on how these opportunities and challenges are effectively managed.