LangGraph Quickstart

Python
LLM
LangGraph
Setting up a LangGraph graph, invoking it, and streaming responses
Author

Kedar Dabhadkar

Published

January 8, 2026

Installation

pip install langgraph langchain-openai
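
The ChatOpenAI client used below reads credentials from the environment. One way to provide them (assuming an OpenAI API key; the key value here is a placeholder):

import os

# Set the key before constructing the model; for real projects,
# prefer exporting OPENAI_API_KEY in your shell instead
os.environ["OPENAI_API_KEY"] = "sk-..."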

Define State and Nodes

from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI

# Define state schema
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Initialize LLM
llm = ChatOpenAI(model="gpt-4o-mini")

# Define node function
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

Build and Compile the Graph

# Build graph
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)

# Compile
graph = graph_builder.compile()
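
Once compiled, the graph's topology can be inspected; LangGraph can emit a Mermaid diagram of the nodes and edges (a quick sanity check rather than a required step):

# Print a Mermaid description of the START -> chatbot -> END topology
print(graph.get_graph().draw_mermaid())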

Invoke the Graph

# Basic invocation - returns final state
result = graph.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
print(result["messages"][-1].content)

Streaming Methods

LangGraph offers four streaming approaches:

1. Stream State Updates (stream)

Yields state after each node completes. Good for tracking graph progress.

for state in graph.stream({"messages": [{"role": "user", "content": "Hi"}]}):
    print(state)  # {'chatbot': {'messages': [AIMessage(...)]}}
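
Each yielded dict is keyed by the node that just ran, so you can pull out only the new messages; a minimal sketch under the same setup:

for update in graph.stream({"messages": [{"role": "user", "content": "Hi"}]}):
    for node_name, payload in update.items():
        print(f"{node_name}: {payload['messages'][-1].content}")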

2. Stream LLM Tokens (astream_events)

Streams individual tokens from LLM calls. Best for real-time chat UIs.

async for event in graph.astream_events(
    {"messages": [{"role": "user", "content": "Hi"}]},
    version="v2"
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)

3. Stream with Mode Selection

Use stream_mode parameter for different output formats.

# "values" - full state at each step
for state in graph.stream(input, stream_mode="values"):
    print(state["messages"][-1])

# "updates" - only changes from each node (default)
for update in graph.stream(input, stream_mode="updates"):
    print(update)

# "messages" - stream message chunks directly
for msg, metadata in graph.stream(input, stream_mode="messages"):
    print(msg.content, end="")
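
stream_mode also accepts a list of modes, in which case each yielded item is a (mode, payload) tuple; a sketch combining node updates with token-level messages:

# Combining modes: each item identifies which stream it came from
for mode, payload in graph.stream(inputs, stream_mode=["updates", "messages"]):
    if mode == "messages":
        msg, metadata = payload
        print(msg.content, end="", flush=True)
    else:  # "updates"
        print(f"\n[update from] {list(payload.keys())}")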

4. Custom Streaming Mode (stream_mode="custom")

Define your own streaming format using stream_mode="custom" with a StreamWriter.

from langgraph.config import get_stream_writer

def my_node(state: State):
    writer = get_stream_writer()

    # Stream custom data during node execution
    writer({"status": "processing", "step": 1})
    result = llm.invoke(state["messages"])
    writer({"status": "complete", "step": 2})

    return {"messages": [result]}

# Consume custom stream
for chunk in graph.stream(inputs, stream_mode="custom"):
    print(chunk)  # {"status": "processing", "step": 1}, etc.
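
One caveat from the LangGraph docs: get_stream_writer relies on context propagation, which doesn't reach async nodes on Python < 3.11. There, the writer can instead be declared as a node parameter and LangGraph injects it; a sketch of that variant:

from langgraph.types import StreamWriter

async def my_async_node(state: State, writer: StreamWriter):
    # LangGraph injects the writer because the parameter is declared
    writer({"status": "processing"})
    result = await llm.ainvoke(state["messages"])
    return {"messages": [result]}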

Quick Reference

| Method | Use Case | Output |
|--------|----------|--------|
| invoke() | Simple calls | Final state |
| stream() | Node-by-node progress | State updates |
| stream(stream_mode="messages") | Chat UI streaming | Message chunks |
| stream(stream_mode="custom") | Custom progress/status | User-defined data |
| astream_events() | Fine-grained token streaming | All LLM events |