I thought Google was just releasing a demo, but it turned out to be the “open source version” of Perplexity!

Written by
Silas Grey
Updated on: June 13, 2025

Google has open-sourced Gemini Fullstack LangGraph, a full-stack intelligent research assistant that can automatically generate search terms, retrieve information, identify knowledge gaps, and output answers with authoritative references.

Core content:
1. Gemini Fullstack LangGraph project introduction and integrated components
2. LangGraph intelligent agent's "reflection-completion-re-reflection" cycle mechanism
3. Basic use and core concepts of LangGraph tool


Google recently open-sourced a full-stack project called Gemini Fullstack LangGraph.

This project is a full-stack intelligent research assistant that integrates the Google Gemini 2.5 large model and the LangGraph framework.

It uses React on the front end and FastAPI plus LangGraph on the back end to deliver "research-augmented" conversational AI.

Once a question is asked, it automatically generates search terms, calls the Google Search API to retrieve information, uses the large model to reflect on the results and identify knowledge gaps, fills those gaps iteratively, and finally outputs an answer with authoritative references.

The core of the entire architecture is the "LangGraph Agent".

It not only generates multiple sets of search terms from the user's input automatically, but also analyzes each round of retrieval results to determine whether any knowledge gaps remain.

If the information is not sufficient, the agent will automatically generate more precise search terms for the next round until enough information is collected to support the answer.

This forms a "reflect - fill in - reflect again" cycle, a clear step up from conventional so-called agents that can only crawl pages and produce rough summaries.
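To make the cycle concrete, here is a minimal sketch of that loop using LangGraph (the framework introduced in the second half of this article). The node names, state fields, and the three-round cap are illustrative assumptions, not the project's actual code:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

# Illustrative state; the real project tracks more fields.
class ResearchState(TypedDict):
    queries: list[str]       # search terms for the current round
    findings: list[str]      # snippets collected so far
    is_sufficient: bool      # set by the reflection step
    loops: int               # rounds completed

def search(state: ResearchState):
    # Placeholder: a real node would call a search API with state["queries"].
    return {"findings": state["findings"] + ["<retrieved snippets>"],
            "loops": state["loops"] + 1}

def reflect(state: ResearchState):
    # Placeholder: a real node would ask the model whether the findings
    # answer the question, and emit refined follow-up queries if not.
    return {"is_sufficient": state["loops"] >= 2}

def answer(state: ResearchState):
    return {"findings": state["findings"] + ["<answer with citations>"]}

def route(state: ResearchState) -> str:
    # Loop back to search until reflection says "enough", capped at 3 rounds.
    return "answer" if state["is_sufficient"] or state["loops"] >= 3 else "search"

graph = StateGraph(ResearchState)
graph.add_node("search", search)
graph.add_node("reflect", reflect)
graph.add_node("answer", answer)
graph.add_edge(START, "search")
graph.add_edge("search", "reflect")
graph.add_conditional_edges("reflect", route, {"search": "search", "answer": "answer"})
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"queries": ["initial terms"], "findings": [],
                  "is_sufficient": False, "loops": 0}))
```

The conditional edge after the reflect node is what turns a straight pipeline into the reflection loop.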

The project supports one-click deployment with Docker and docker-compose.

For individuals or companies that want to build a similar deep-research tool, there is no need to start from scratch: make a few changes and put it straight into operation.

There is a docker-compose.yml in the root directory of the repository that brings up the entire project with a single command:

```bash
GEMINI_API_KEY=<your_gemini_api_key> LANGSMITH_API_KEY=<your_langsmith_api_key> docker-compose up
```

Once it was running, I tested it with a question that requires searching the whole web: "When will Grok 3.5 be released?"

Execution process: for each query, it uses the Gemini model and the Google Search API to find relevant web pages.

If gaps or insufficient information are found, it generates follow-up queries and repeats the web research and reflection steps, up to a configured maximum number of cycles.

Once the research is deemed sufficient, the agent uses the Gemini model to synthesize the collected information into a coherent answer, complete with citations to web sources.
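The project's actual prompts and configuration differ, but as a hedged sketch, the final synthesis step could look something like this with the google-genai SDK; the function, prompt, and model name are assumptions for illustration (the article only says the project uses Gemini 2.5):

```python
import os
from google import genai  # the google-genai SDK

def synthesize_answer(question: str, findings: list[str]) -> str:
    """Hypothetical final step: ask Gemini to answer from the collected sources."""
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    prompt = (
        f"Question: {question}\n\nSources:\n"
        + "\n".join(f"- {s}" for s in findings)
        + "\n\nAnswer the question using only these sources, and cite each claim."
    )
    # Model name is an assumption, chosen to match the Gemini 2.5 family.
    resp = client.models.generate_content(model="gemini-2.5-flash", contents=prompt)
    return resp.text
```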

In my test, the answer and the dates it gave were accurate, and the cited content was authoritative and reliable, with no fabricated answers.

Earlier, when ByteDance open-sourced DeepSearch, it also used LangGraph. When two major companies adopt the same technology, there must be something to it. Below is a brief demonstration of the basic usage of LangGraph.

LangGraph is a tool from the LangChain team built specifically for orchestrating AI workflows. Simply put, it lets you draw flowcharts in code, so that multiple AI models, tools, and logical steps can run in sequence or conditionally to complete complex tasks.

For example:

  • First, let GPT-4 analyze the user's question,
  • then use a search engine to look up information based on the result,
  • finally, let Claude 3 summarize the answer.

Implementing this logic in ordinary code can be long and tedious.

LangGraph connects these steps like building blocks and can also handle complex logic such as loops, branches, and parallelism.

Core concepts:

  • Node: A step (such as calling an AI, checking a database, or judging a condition).
  • Edge: An arrow that determines where to go next (e.g. “if the user curses, end the conversation”).
  • State: A shared data package that can be read and written by all steps (such as user questions and intermediate results of AI).

Adding nodes feels like wiring things up in a visual editor, chaining data and functions together:

graph.add_node("analyze", analyze_user_input) # Node 1: Analyze user input graph.add_node("search", call_search_api) # Node 2: Search graph.add_edge("analyze", "search") # Arrow: Search after analysis

Full concept version

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

# Define state - shared data structure
class State(TypedDict):
    messages: list[str]

# Define nodes - processing functions
def say_hello(state: State):
    print("Hello World!")
    return {"messages": state["messages"] + ["Hello"]}

def say_goodbye(state: State):
    print("Goodbye World!")
    return {"messages": state["messages"] + ["Goodbye"]}

# Construct the graph
graph = StateGraph(State)
graph.add_node("hello", say_hello)
graph.add_node("goodbye", say_goodbye)

# Add edges - define the flow
graph.add_edge(START, "hello")
graph.add_edge("hello", "goodbye")
graph.add_edge("goodbye", END)

# Compile and execute
app = graph.compile()
result = app.invoke({"messages": []})
print(result)  # {'messages': ['Hello', 'Goodbye']}
```

It is very easy to build AI workflows using LangGraph.

First define the State (the shared data), then define the nodes (the processing functions), then connect the nodes by building the graph, and finally run data through the pipeline and collect the result.

The implementation is very elegant and efficient, and feels like a code version of Coze.