# How to Build AI Agents with LangGraph: A Practical Guide
AI agents are transforming how we build software. Instead of writing rigid, rule-based automation scripts, we can now build systems that reason, plan, and adapt to unexpected situations. LangGraph, built on top of LangChain, provides a powerful framework for building these agentic workflows.
In this guide, I'll walk you through building production-ready AI agents using LangGraph — the same patterns I've used to build security automation agents at scale.
## What is LangGraph?
LangGraph is a framework for building stateful, multi-actor applications with LLMs. Unlike simple chain-based approaches, LangGraph uses a directed graph architecture where:
- Nodes represent computation steps (LLM calls, tool executions, decisions)
- Edges represent transitions between steps
- State is passed between nodes and persisted across interactions
This graph-based approach enables cycles — meaning agents can loop, retry, and self-correct — which is impossible with simple sequential chains.
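To make the idea concrete, here is a framework-free sketch of the same model: nodes are functions that update state, a conditional edge decides where to go next, and because the routing function can point back at an earlier node, the flow can cycle. (This is a simplified illustration, not LangGraph's actual internals.)

```python
# Nodes are functions that take state and return updated state;
# the routing function plays the role of a conditional edge.
END = "__end__"

def work(state: dict) -> dict:
    return {**state, "attempts": state["attempts"] + 1}

def route(state: dict) -> str:
    # Conditional edge: loop back to "work" until we have tried 3 times
    return "work" if state["attempts"] < 3 else END

nodes = {"work": work}
state, current = {"attempts": 0}, "work"
while current != END:
    state = nodes[current](state)
    current = route(state)

print(state["attempts"])  # 3
```

The `while` loop is the cycle: a plain sequential chain has no way to revisit `work`, which is exactly what the graph structure buys you.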
## Why LangGraph Over Other Frameworks?
| Feature | LangGraph | Simple Chains | CrewAI |
|---|---|---|---|
| Cyclic flows | ✅ | ❌ | ⚠️ Limited |
| State management | ✅ Built-in | ❌ Manual | ⚠️ Basic |
| Human-in-the-loop | ✅ Native | ❌ | ❌ |
| Streaming | ✅ Token-level | ⚠️ Limited | ❌ |
| Debugging | ✅ LangSmith | ❌ | ❌ |
| Production-ready | ✅ | ❌ | ⚠️ |
## Building Your First Agent
Let's build a research agent that can search the web, analyze results, and produce a structured report.
### Step 1: Define the State
```python
import operator
from typing import TypedDict, Annotated, Sequence

from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    research_topic: str
    findings: list[str]
    final_report: str
```
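The `Annotated[..., operator.add]` reducer is what makes `messages` accumulate: when a node returns a partial state update, annotated keys are merged with the old value instead of overwriting it. A rough, framework-free illustration of that merge rule (simplified from what LangGraph actually does internally):

```python
import operator

# Simplified merge: keys with a registered reducer (here operator.add)
# are combined with the existing value; all other keys are overwritten.
reducers = {"messages": operator.add}

def merge(state: dict, update: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in state:
            merged[key] = reducers[key](state[key], value)
        else:
            merged[key] = value
    return merged

state = {"messages": ["user: hi"], "final_report": ""}
state = merge(state, {"messages": ["ai: hello"]})   # appended, not replaced
state = merge(state, {"final_report": "done"})      # overwritten

print(state["messages"])      # ['user: hi', 'ai: hello']
print(state["final_report"])  # done
```

This is why the nodes below can return just `{"messages": [response]}` without losing the conversation history.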
### Step 2: Create the Tools
```python
from langchain_core.tools import tool

@tool
def web_search(query: str) -> str:
    """Search the web for information about a topic."""
    # In production, use Tavily, SerpAPI, or similar
    return f"Results for: {query}"

@tool
def analyze_content(content: str) -> str:
    """Analyze and extract key insights from content."""
    return f"Key insights from: {content}"
```
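When the model emits tool calls, something has to look each call up by name and execute it with the model-supplied arguments. LangGraph's prebuilt `ToolNode` handles this for you; the core dispatch loop it performs looks roughly like this sketch (plain functions stand in for the decorated tools above):

```python
# Plain functions standing in for the @tool-decorated versions
def web_search(query: str) -> str:
    return f"Results for: {query}"

def analyze_content(content: str) -> str:
    return f"Key insights from: {content}"

# Dispatch table: tool-call name -> callable
TOOLS = {"web_search": web_search, "analyze_content": analyze_content}

def run_tool_calls(tool_calls: list[dict]) -> list[str]:
    """Execute each requested tool call and collect its result."""
    return [TOOLS[call["name"]](**call["args"]) for call in tool_calls]

outputs = run_tool_calls([
    {"name": "web_search", "args": {"query": "AI security"}},
])
print(outputs)  # ['Results for: AI security']
```

The docstring on each `@tool` matters: it is the description the LLM sees when deciding which tool to call.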
### Step 3: Build the Graph
```python
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI

tools = [web_search, analyze_content]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

# Define nodes
def research_node(state: AgentState):
    """Perform research, calling tools as needed."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def analyze_node(state: AgentState):
    """Analyze gathered research."""
    # Process findings and generate insights
    return {"findings": ["insight1", "insight2"]}

def report_node(state: AgentState):
    """Generate final report."""
    return {"final_report": "Comprehensive research report..."}

# Define the routing logic
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return "analyze"

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("research", research_node)
workflow.add_node("tools", ToolNode(tools))
workflow.add_node("analyze", analyze_node)
workflow.add_node("report", report_node)

workflow.set_entry_point("research")
workflow.add_conditional_edges("research", should_continue)
workflow.add_edge("tools", "research")  # loop back after tool execution
workflow.add_edge("analyze", "report")
workflow.add_edge("report", END)

app = workflow.compile()
```

Note the cycle: `research` routes to `tools` whenever the model requests a tool call, and `tools` feeds the results straight back into `research`. Without `bind_tools`, the model would never emit tool calls, and without the `tools` node, `should_continue` would route to a node that doesn't exist.
### Step 4: Run the Agent
```python
result = app.invoke({
    "messages": [("user", "Research the latest trends in AI security")],
    "research_topic": "AI security trends",
    "findings": [],
    "final_report": "",
})

print(result["final_report"])
```
## Production Patterns I Use Daily
### Pattern 1: Supervisor Architecture
For complex workflows, use a supervisor agent that delegates to specialized sub-agents:
```python
def supervisor_node(state):
    """Route to the appropriate specialist agent."""
    decision = llm.invoke(
        "Based on the current state, which agent should handle this? "
        "Options: researcher, analyzer, writer"
    )
    return {"next_agent": decision.content}
```
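One practical caveat: the model's reply is free text, so you shouldn't feed `decision.content` straight into a conditional edge as a node name. A small hedged sketch of the validation I'd put in front of the router (the agent names and fallback choice are illustrative):

```python
# Valid specialist agents; anything else falls back to a safe default
ALLOWED_AGENTS = {"researcher", "analyzer", "writer"}
FALLBACK = "researcher"

def parse_decision(raw: str) -> str:
    """Normalize the model's reply; fall back if it isn't a known agent."""
    choice = raw.strip().lower().rstrip(".")
    return choice if choice in ALLOWED_AGENTS else FALLBACK

print(parse_decision("Writer."))       # writer
print(parse_decision("I'm not sure"))  # researcher
```

Routing through this guard means a malformed LLM response degrades to the fallback agent instead of crashing the graph.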
### Pattern 2: Error Recovery with Retry Loops
```python
def should_retry(state):
    if state.get("error_count", 0) < 3:
        return "retry"
    return "fallback"

workflow.add_conditional_edges("execute", should_retry)
```
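The routing function reads `error_count`, so some node must write it. A self-contained sketch of how the pieces fit together, with a hypothetical flaky operation standing in for the real work and a plain loop simulating the cycle the conditional edge creates:

```python
def do_work(state: dict) -> str:
    """Hypothetical flaky operation: fails until the third attempt."""
    if state.get("error_count", 0) < 2:
        raise RuntimeError("transient failure")
    return "ok"

def execute_node(state: dict) -> dict:
    """On failure, bump error_count so should_retry can route accordingly."""
    try:
        return {**state, "result": do_work(state)}
    except RuntimeError:
        return {**state, "error_count": state.get("error_count", 0) + 1}

def should_retry(state: dict) -> str:
    return "retry" if state.get("error_count", 0) < 3 else "fallback"

# Simulate the retry cycle the conditional edge creates
state: dict = {}
while "result" not in state and should_retry(state) == "retry":
    state = execute_node(state)

print(state.get("result"))       # ok
print(state.get("error_count"))  # 2
```

Resetting or capping the counter matters: without the `< 3` bound, a permanently failing operation would loop forever.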
### Pattern 3: Human-in-the-Loop Approval
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
app = workflow.compile(
    checkpointer=checkpointer,
    interrupt_before=["critical_action"]  # Pause before risky operations
)
```
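With a checkpointer attached, each run is keyed by a `thread_id`, and resuming after a human approves is a second `invoke` on the same thread. A sketch of that flow (the thread id is illustrative, and the `invoke` calls are commented out because they need the compiled graph above):

```python
# Each run is identified by a thread_id; the checkpointer stores state
# under it so a paused run can be resumed later.
config = {"configurable": {"thread_id": "run-42"}}  # "run-42" is illustrative

# First call runs until the graph pauses before "critical_action":
# state = app.invoke(initial_state, config)
# ... a human inspects the pending state and approves ...
# Passing None as input resumes from the saved checkpoint:
# state = app.invoke(None, config)

print(config["configurable"]["thread_id"])  # run-42
```

`MemorySaver` is fine for development; in production you'd swap in a persistent checkpointer so approvals survive process restarts.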
Building AI agents at the intersection of security and automation. Follow my work on GitHub.