Farhan Khan
LangGraph is a framework for building stateful, reliable agent workflows. Instead of executing a single prompt-response cycle, LangGraph enables agents to operate as directed graphs, where each node represents a step, state is shared across the workflow, and execution can branch, pause, or persist.
This article summarizes the core concepts, prebuilt utilities, and typical applications of LangGraph.
State
- The memory object that flows through the graph.
- Nodes read from and write updates to state.
- Managed through channels that define merge strategies (a short declaration sketch follows below):
  - Replace → overwrite existing values.
  - Append → add items (e.g., chat history).
  - Merge → combine dicts/lists.
- Acts as the single source of truth for the agent's execution.
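In code, these merge strategies are declared per state key. A minimal sketch, using illustrative field names (question, chat_history) rather than ones from this article: attaching a reducer such as operator.add via typing.Annotated turns a key into an append channel, while keys without a reducer are simply replaced.
Code:
from typing import Annotated, TypedDict
import operator

class ChatState(TypedDict, total=False):
    question: str  # no reducer: each node update replaces the previous value
    chat_history: Annotated[list[str], operator.add]  # reducer: node updates are appended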
Example: How State Evolves
Consider a simple graph with three nodes:
plan → search → answer.
Code:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class State(TypedDict, total=False):
question: str
plan: str
results: list[str]
final_answer: str
def plan_node(state: State) -> State:
return {"plan": f"Search for: {state['question']}"}
def search_node(state: State) -> State:
return {"results": [f"Result about {state['plan']}"]}
def answer_node(state: State) -> State:
return {"final_answer": f"Based on {state['results'][0]}, here is the answer."}
Initial Input
Code:
{"question": "What is LangGraph?"}
After plan node
Code:
{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?"}
After search node
Code:
{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?", "results": ["Result about Search for: What is LangGraph?"]}
After answer node
Code:
{"question": "What is LangGraph?", "plan": "Search for: What is LangGraph?", "results": ["Result about Search for: What is LangGraph?"], "final_answer": "Based on Result about Search for: What is LangGraph?, here is the answer."}
Nodes
Nodes are the functional units of a LangGraph.
Each node is a Python function that:
- Takes the current state (a dictionary-like object).
- Returns updates to that state (usually as another dictionary).
A node can perform many different actions depending on the workflow:
- Invoke an LLM to generate text, plans, or summaries.
- Call external tools or APIs (e.g., search engine, database, calculator); a tool-calling sketch appears after the examples below.
- Execute deterministic logic (e.g., scoring, validation, formatting).
Nodes don't overwrite the whole state by default; instead, they return partial updates that LangGraph merges into the global state using channels.
A node is not an "agent" by itself. The entire graph of nodes forms the agent.
Example: Simple Node
Code:
from typing import TypedDict
class State(TypedDict, total=False):
question: str
plan: str
def plan_node(state: State) -> State:
q = state["question"]
return {"plan": f"Search online for: {q}"}
Input State
Code:
{"question": "What is LangGraph?"}
Output Update
Code:
{"plan": "Search online for: What is LangGraph?"}
After merging
Code:
{"question": "What is LangGraph?", "plan": "Search online for: What is LangGraph?"}
Example: Node Invoking an LLM
Code:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")
def answer_node(state: State) -> State:
response = llm.invoke(state["question"])
return {"final_answer": response.content}

Edges
Edges define the flow of execution between nodes.
Normal edges → fixed transitions.
Conditional edges → branching logic, using a router function.
Special markers:
- START → entry point.
- END → exit point.
Example: Normal Edges
Code:
from langgraph.graph import StateGraph, START, END
graph = StateGraph(State)
graph.add_node("plan", plan_node)
graph.add_node("search", search_node)
graph.add_node("answer", answer_node)
graph.add_edge(START, "plan")
graph.add_edge("plan", "search")
graph.add_edge("search", "answer")
graph.add_edge("answer", END)
Example: Conditional Edges with Router
Code:
def router(state: State) -> str:
q = state["question"]
if "latest" in q.lower():
return "search"
else:
return "answer"
graph.add_conditional_edges(
"plan", router, {"search": "search", "answer": "answer"}
)
- Input: "What is the capital of France?" β routed to
answer
. - Input: "What are the latest news on AAPL?" β routed to
search
.
Streaming
Streaming provides live feedback during execution.
Update Stream (node-level)
Code:
for event in app.stream(inputs, config=config, stream_mode="updates"):
print(event)
Output
Code:
{'plan': {'plan': 'Search for: What is LangGraph?'}}
{'search': {'results': ['Result about Search for: What is LangGraph?']}}
{'answer': {'final_answer': '...final text...'}}
Token Stream (LLM output)
Code:
for message_chunk, metadata in app.stream(inputs, config=config, stream_mode="messages"):
    # each item is a (message chunk, metadata) pair; print the token text as it arrives
    print(message_chunk.content, end="", flush=True)
stream_mode="updates"
β node updates.stream_mode="messages"
β token stream (if LLM supports it).
Memory
Memory in LangGraph = state + checkpointers. Unlike LangChain, LangGraph does not treat memory as a separate object; it is baked into the state and the checkpointing layer.
Short-Term Memory (Within a Thread)
Code:
from typing import TypedDict, List
class State(TypedDict, total=False):
question: str
chat_history: List[str]
answer: str
def add_to_history(state: State) -> State:
    # build a new list instead of mutating the one already stored in state
    history = state.get("chat_history", []) + [state["question"]]
    return {"chat_history": history}
Example after two turns:
Code:
{"chat_history": ["What is LangGraph?", "Explain checkpointers"], "answer": "..."}
Long-Term Memory (Across Runs)
Code:
from langgraph.checkpoint.memory import MemorySaver
checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)
app.invoke({"question": "What is LangGraph?"}, config={"configurable": {"thread_id": "t1"}})
app.invoke({"question": "And what are edges?"}, config={"configurable": {"thread_id": "t1"}})
Both runs share the same thread_id, so context is preserved across them.
Combined
- Short-term memory = within a run.
- Long-term memory = across runs.
- Together → continuity + reliability.
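With a checkpointer attached, the persisted state of a thread can also be read back after a run. A minimal sketch, assuming the compiled app and thread_id from above; get_state returns a snapshot whose values field holds the merged state.
Code:
config = {"configurable": {"thread_id": "t1"}}
snapshot = app.get_state(config)  # latest checkpoint for this thread
print(snapshot.values)            # the merged state dict persisted for thread "t1"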
