LangChain & LangGraph
LangChain provides composable primitives (chains, tools, memory, LCEL) for LLM applications; LangGraph extends it with a stateful, graph-based runtime for building cyclic multi-agent workflows with persistence, human-in-the-loop, and production-grade observability via LangSmith.
LangChain Core
LangChain is the most widely adopted LLM application framework. It started as a monolithic library but has been refactored into focused packages: langchain-core (primitives), langchain (chains/agents), and provider-specific packages (langchain-openai, langchain-anthropic, etc.).
LCEL (LangChain Expression Language)
LCEL is the declarative composition syntax that replaced the legacy LLMChain approach. It uses pipe operators to compose runnables.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_anthropic import ChatAnthropic

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant specializing in {domain}."),
    ("human", "{question}"),
])

model = ChatAnthropic(model="claude-sonnet-4-20250514")
parser = StrOutputParser()

# LCEL chain — each component is a Runnable
chain = prompt | model | parser

# Invoke synchronously
result = chain.invoke({"domain": "AI architecture", "question": "Explain RAG"})

# Stream tokens
for chunk in chain.stream({"domain": "AI", "question": "Explain RAG"}):
    print(chunk, end="")

# Batch multiple inputs
results = chain.batch([
    {"domain": "AI", "question": "What is RAG?"},
    {"domain": "DevOps", "question": "Explain GitOps"},
])
Why LCEL matters: Every LCEL chain automatically gets .invoke(), .stream(), .batch(), .ainvoke() (async), and built-in retry/fallback support. This is the composability contract.
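The composability contract can be illustrated with a minimal sketch in plain Python. This is not the real LangChain `Runnable` class (the names `MiniRunnable`, `fn` are invented for illustration); it only shows the idea: anything with an `invoke()` can be piped with `|`, and the composite gets `batch()` and `stream()` for free.

```python
# Minimal sketch of the composability idea — NOT the real LangChain
# classes. MiniRunnable is a hypothetical stand-in for Runnable.
from typing import Any, Callable, Iterator

class MiniRunnable:
    def __init__(self, fn: Callable[[Any], Any]):
        self.fn = fn

    def invoke(self, value: Any) -> Any:
        return self.fn(value)

    def batch(self, values: list) -> list:
        # real LCEL can parallelize this; here we just map sequentially
        return [self.invoke(v) for v in values]

    def stream(self, value: Any) -> Iterator[Any]:
        # real LCEL streams token-by-token; here we yield the final result
        yield self.invoke(value)

    def __or__(self, other: "MiniRunnable") -> "MiniRunnable":
        # the pipe operator composes two runnables into one
        return MiniRunnable(lambda v: other.invoke(self.invoke(v)))

upper = MiniRunnable(str.upper)
exclaim = MiniRunnable(lambda s: s + "!")
chain = upper | exclaim

print(chain.invoke("hello"))    # HELLO!
print(chain.batch(["a", "b"]))  # ['A!', 'B!']
```

Because `__or__` returns another `MiniRunnable`, composition nests arbitrarily deep, which is exactly why every LCEL chain exposes the same interface as its parts.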
Tool Use
from langchain_core.tools import tool

@tool
def search_products(query: str, category: str = "all") -> str:
    """Search the product catalog. Use this when a user asks about products."""
    # actual implementation
    return f"Found 3 results for '{query}' in {category}"

# Bind tools to a model
model_with_tools = model.bind_tools([search_products])

# The model decides whether to call the tool based on the conversation
response = model_with_tools.invoke("Find me wireless headphones under 100 euros")
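When the model decides to use the tool, the response carries structured tool calls that the caller must execute. A plain-Python sketch of that round trip, with the model's response stubbed out as a dict (the shape shown here is simplified; `stub_response` and `execute_tool_calls` are illustrative names, not LangChain API):

```python
# Hedged sketch of the tool-calling round trip with the model stubbed out.
def search_products(query: str, category: str = "all") -> str:
    return f"Found 3 results for '{query}' in {category}"

TOOLS = {"search_products": search_products}

# Stand-in for a model response that requested a tool call
stub_response = {
    "tool_calls": [
        {"name": "search_products",
         "args": {"query": "wireless headphones", "category": "audio"}}
    ]
}

def execute_tool_calls(response: dict) -> list:
    # run each requested tool; results go back to the model as tool
    # messages on the next turn
    results = []
    for call in response.get("tool_calls", []):
        fn = TOOLS[call["name"]]
        results.append(fn(**call["args"]))
    return results

print(execute_tool_calls(stub_response))
# ["Found 3 results for 'wireless headphones' in audio"]
```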
Memory and Chat History
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
)

# Each invocation automatically loads/saves history
chain_with_history.invoke(
    {"domain": "AI", "question": "What is an agent?"},
    config={"configurable": {"session_id": "user-123"}},
)
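What `RunnableWithMessageHistory` does on each call can be sketched in plain Python (the function `invoke_with_history` and the faked reply are hypothetical; a real chain would feed the loaded history into the prompt): load the session's history, run the chain, then persist both the human turn and the AI turn.

```python
# Plain-Python sketch of the load-run-save cycle per session.
store: dict = {}  # session_id -> list of (role, content) tuples

def invoke_with_history(session_id: str, question: str) -> str:
    history = store.setdefault(session_id, [])
    # a real chain would pass `history` into the prompt; we fake a reply
    answer = f"answer to: {question} (turn {len(history) // 2 + 1})"
    history.append(("human", question))
    history.append(("ai", answer))
    return answer

invoke_with_history("user-123", "What is an agent?")
invoke_with_history("user-123", "Give an example")
print(len(store["user-123"]))  # 4 messages across two turns
```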
LangGraph
LangGraph is the production runtime for stateful, multi-step agent workflows. It models computation as a directed graph where nodes are functions and edges define control flow – including cycles, conditionals, and parallel branches.
Why LangGraph Over Plain LangChain Agents?
LangChain’s AgentExecutor was the original agent loop: call LLM, parse tool calls, execute tools, repeat. It works for simple cases but breaks down when you need:
- Cycles and loops (retry logic, iterative refinement)
- Multiple agents collaborating with shared or separate state
- Persistent checkpoints (resume from where you left off)
- Human-in-the-loop (pause, review, approve, then continue)
- Streaming intermediate steps in production
LangGraph replaces AgentExecutor for all non-trivial agent workloads.
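The first bullet, cycles, is the structural limitation: a linear chain cannot loop back on itself. A tiny sketch of the generate-review-retry cycle that graphs express naturally (`generate` and `passes_review` are stand-ins for an LLM step and a quality check, not LangGraph API):

```python
# Sketch of the kind of cycle a linear chain can't express cleanly:
# generate, check, loop back until the output passes or we hit a cap.
def generate(draft: str) -> str:
    return draft + "x"  # stand-in for an LLM refinement step

def passes_review(draft: str) -> bool:
    return len(draft) >= 3  # stand-in for a quality check

def refine_until_ok(draft: str = "", max_iters: int = 5) -> str:
    for _ in range(max_iters):
        draft = generate(draft)
        if passes_review(draft):
            break
    return draft

print(refine_until_ok())  # "xxx" after three passes through the cycle
```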
Architecture
          ┌──────────────┐
          │  StateGraph  │
          │ (typed dict) │
          └──────┬───────┘
                 │
    ┌────────────┼─────────────┐
    ▼            ▼             ▼
┌────────┐  ┌─────────┐  ┌──────────┐
│ Node A │  │ Node B  │  │  Node C  │
│(agent) │  │ (tool)  │  │(reviewer)│
└───┬────┘  └────┬────┘  └─────┬────┘
    │            │             │
    └──────┬─────┘             │
           ▼                   │
   ┌──────────────┐            │
   │ Conditional  │────────────┘
   │    Edge      │
   └──────────────┘
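The diagram's execution model can be sketched in a few lines of plain Python (the node names and `run` helper are illustrative, not LangGraph API): nodes are functions over a shared state dict, a routing function plays the role of the conditional edge, and an `END` sentinel stops the loop.

```python
# Minimal sketch of the execution model in the diagram above.
END = "__end__"

def agent(state):  # Node A
    state["steps"].append("agent")
    return state

def tool(state):  # Node B
    state["steps"].append("tool")
    state["done"] = True
    return state

NODES = {"agent": agent, "tool": tool}

def route(state):
    # conditional edge: agent hands off to tool, tool loops back,
    # until the state says we're done
    if state.get("done"):
        return END
    return "tool" if state["steps"][-1] == "agent" else "agent"

def run(entry: str, state: dict) -> dict:
    node = entry
    while node != END:
        state = NODES[node](state)
        node = route(state)
    return state

final = run("agent", {"steps": [], "done": False})
print(final["steps"])  # ['agent', 'tool']
```

The real runtime adds typed state merging, checkpointing, and streaming on top of this loop, but the control flow is the same idea.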
Core Example: ReAct Agent with LangGraph
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-20250514")
tools = [search_products]
model_with_tools = model.bind_tools(tools)

def call_model(state: MessagesState):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Build graph
graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
graph.add_edge("tools", "agent")  # cycle back after tool execution
app = graph.compile()

# Run
result = app.invoke({"messages": [("user", "Find wireless headphones")]})
Persistence and Checkpointing
from langgraph.checkpoint.postgres import PostgresSaver

# Production: use Postgres for durable checkpoints.
# from_conn_string is a context manager; setup() creates the tables.
with PostgresSaver.from_conn_string("postgresql://...") as checkpointer:
    checkpointer.setup()
    app = graph.compile(checkpointer=checkpointer)

    # Every step is checkpointed — crash-safe, resumable
    config = {"configurable": {"thread_id": "conversation-42"}}
    result = app.invoke({"messages": [("user", "Find headphones")]}, config=config)

    # Later: resume the same thread
    result = app.invoke({"messages": [("user", "Under 50 euros")]}, config=config)
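The mechanics of thread-scoped checkpointing can be sketched with an in-memory dict standing in for Postgres (all names here are illustrative): every step's state is appended under the `thread_id`, and a later invocation resumes from the latest checkpoint for that thread.

```python
# Sketch of checkpointing: a dict plays the role of the Postgres store.
checkpoints: dict = {}  # thread_id -> list of checkpointed states

def invoke(thread_id: str, user_message: str) -> list:
    # load the latest checkpoint for this thread, or start fresh
    history = checkpoints.get(thread_id, [[]])[-1]
    new_state = history + [user_message]
    # persist the new state as another checkpoint
    checkpoints.setdefault(thread_id, []).append(new_state)
    return new_state

invoke("conversation-42", "Find headphones")
state = invoke("conversation-42", "Under 50 euros")
print(state)  # ['Find headphones', 'Under 50 euros']
```

Because every intermediate state is stored, a crash between the two calls loses nothing: the second call simply reads the last checkpoint and continues.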
Human-in-the-Loop
# Add an interrupt before a critical node
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["execute_action"],  # pause here for human review
)

# First run: stops at the interrupt point
result = app.invoke({"messages": [("user", "Delete all old records")]}, config=config)

# State is checkpointed. Human reviews the pending action.

# Resume after approval
result = app.invoke(None, config=config)  # continues from checkpoint
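The interrupt-before semantics can be sketched in plain Python (the pipeline, node names, and `invoke` helper are invented for illustration): execution stops before any node on the interrupt list, the position is checkpointed, and a later call with no new input resumes from that node.

```python
# Sketch of interrupt/resume: pause before a listed node, save the
# position, continue from it on the resume call.
INTERRUPT_BEFORE = {"execute_action"}
PIPELINE = ["plan", "execute_action", "report"]

saved: dict = {}  # thread_id -> index of the next node to run

def invoke(thread_id: str, resume: bool = False) -> list:
    i = saved.get(thread_id, 0) if resume else 0
    ran = []
    while i < len(PIPELINE):
        node = PIPELINE[i]
        if not resume and node in INTERRUPT_BEFORE:
            saved[thread_id] = i  # checkpoint: pause here for review
            return ran
        ran.append(node)
        i += 1
    return ran

first = invoke("t1")               # runs "plan", pauses before "execute_action"
after = invoke("t1", resume=True)  # human approved: continue to the end
print(first, after)  # ['plan'] ['execute_action', 'report']
```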
Multi-Agent Patterns
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Annotated
from operator import add

class TeamState(TypedDict):
    messages: Annotated[list, add]  # the add reducer concatenates updates
    next_agent: str

def researcher(state: TeamState):
    # Research agent logic
    return {"messages": [...], "next_agent": "writer"}

def writer(state: TeamState):
    # Writing agent logic
    return {"messages": [...], "next_agent": "reviewer"}

def reviewer(state: TeamState):
    # Review and decide: approve or send back
    approved = ...  # e.g. an LLM judgment on the draft
    return {"messages": [...], "next_agent": END if approved else "writer"}

def route(state: TeamState):
    return state["next_agent"]

graph = StateGraph(TeamState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.add_node("reviewer", reviewer)
graph.add_edge(START, "researcher")
graph.add_conditional_edges("researcher", route)
graph.add_conditional_edges("writer", route)
graph.add_conditional_edges("reviewer", route)
app = graph.compile()
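How the router drives the team, and what the `Annotated[list, add]` reducer does, can be sketched without the framework (node bodies and the review criterion are stand-ins): each node returns a partial state update, list fields are merged by concatenation, and `next_agent` names the following node until the run ends.

```python
# Plain-Python sketch of the supervisor loop and the add reducer.
from operator import add

def researcher(state):
    return {"messages": ["research notes"], "next_agent": "writer"}

def writer(state):
    return {"messages": ["draft"], "next_agent": "reviewer"}

def reviewer(state):
    approved = len(state["messages"]) >= 2  # stand-in review criterion
    return {"messages": ["review"], "next_agent": "end" if approved else "writer"}

NODES = {"researcher": researcher, "writer": writer, "reviewer": reviewer}

def run(entry: str) -> dict:
    state = {"messages": [], "next_agent": entry}
    while state["next_agent"] != "end":
        update = NODES[state["next_agent"]](state)
        # the Annotated[list, add] reducer: concatenate message lists
        state["messages"] = add(state["messages"], update["messages"])
        state["next_agent"] = update["next_agent"]
    return state

final = run("researcher")
print(final["messages"])  # ['research notes', 'draft', 'review']
```

If the reviewer rejects, `next_agent` points back at the writer and the loop simply runs that node again, which is the cycle the graph encodes with its conditional edges.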
LangSmith: Observability and Evaluation
LangSmith is the companion platform for tracing, debugging, evaluating, and monitoring LLM applications.
| Capability | Description |
|---|---|
| Tracing | Automatic capture of every LLM call, tool invocation, and chain step with latency/cost |
| Datasets | Create labeled datasets from production traces for regression testing |
| Evaluators | Built-in + custom evaluators (correctness, faithfulness, relevance) |
| Monitoring | Production dashboards: latency percentiles, error rates, cost tracking |
| Playground | Test prompts interactively against traces |
# Enable tracing (just set env vars)
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls__..."
os.environ["LANGCHAIN_PROJECT"] = "mms-ai-platform"
# All LangChain/LangGraph calls are now traced automatically
Key Properties
| Property | LangChain | LangGraph |
|---|---|---|
| Primary use | LLM app building blocks | Stateful agent orchestration |
| Execution model | Linear chains / DAGs | Cyclic graphs with state |
| State management | Manual (memory classes) | Built-in typed state + checkpointing |
| Multi-agent | Basic (agent handoff) | Native (shared state, routing, supervisors) |
| Human-in-the-loop | Not built-in | First-class (interrupt/resume) |
| Persistence | External (you build it) | Built-in (SQLite, Postgres, Redis) |
| Streaming | Token-level | Token + intermediate step streaming |
| Observability | LangSmith integration | LangSmith integration |
| Learning curve | Moderate | Steep (graph thinking required) |
| License | MIT | MIT |
When to Use
Use LangChain (without LangGraph) when:
- Building a straightforward RAG pipeline
- Single-agent tool use with no complex branching
- Quick prototyping of LLM-powered features
- You need the broadest ecosystem of integrations (vector stores, document loaders, etc.)
Use LangGraph when:
- Multi-step agent workflows that need cycles (retry, refine, iterate)
- Multi-agent collaboration (supervisor, swarm, hierarchical)
- You need crash-safe persistence and resumability
- Human-in-the-loop approval gates are required
- Production workloads where you need full control over agent behavior
Avoid when:
- Simple prompt-response applications (overkill)
- You want a high-level “just give me agents” abstraction (consider CrewAI)
- Your team is unfamiliar with graph-based programming models
- You are locked into a single LLM provider’s ecosystem (consider their native SDK)
LangGraph Platform (Cloud/Self-Hosted)
LangGraph Platform provides deployment infrastructure for LangGraph applications:
- LangGraph Server: HTTP API for running graphs (invoke, stream, cron jobs)
- LangGraph Studio: Visual IDE for debugging graph execution step-by-step
- Deployment options: LangGraph Cloud (managed), self-hosted on your infra
- Features: Background runs, double-texting handling, cron scheduling, webhook support
This is the recommended path for production deployments where you want managed infrastructure without building your own serving layer.