Phoenix has first-class support for LangChain applications in both Python and JavaScript. LangChain is one of the most popular frameworks for building LLM applications, and Phoenix makes it easy to trace the entire execution graph.

Installation

pip install arize-phoenix-otel openinference-instrumentation-langchain langchain-openai

Setup

1. Register the Phoenix tracer

from phoenix.otel import register

# Configure the Phoenix tracer
tracer_provider = register(
  project_name="my-llm-app",  # Default is 'default'
  auto_instrument=True  # Auto-instrument based on installed packages
)
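
By default the tracer sends spans to a local Phoenix instance. If Phoenix is running elsewhere, the collector endpoint can be set via an environment variable before calling register (the URL below assumes a default local deployment; adjust to your setup):

```shell
# Assumed local Phoenix deployment; change host/port as needed
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
```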

2. Use LangChain as normal

All LangChain operations will now be automatically traced!

Basic Example

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Define a simple chain
prompt = ChatPromptTemplate.from_template("{x} {y} {z}?").partial(x="why is", z="blue")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

# Run the chain - automatically traced!
result = chain.invoke({"y": "sky"})
print(result.content)

What Gets Traced

Phoenix automatically captures:
  • Chains: Input/output, intermediate steps, latency
  • Agents: Tool selection, reasoning, execution
  • Retrievers: Query, retrieved documents, scores
  • LLM Calls: Model, prompts, tokens, parameters
  • Tools: Function calls, inputs, outputs
  • Errors: Exceptions, retries, failures
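
As a rough mental model (plain Python, not the actual Phoenix or OpenTelemetry API), a trace is a tree of spans, one per component, each carrying attributes such as inputs, outputs, and token counts. The span names, kinds, and attribute keys below are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    kind: str                       # e.g. "CHAIN", "LLM", "RETRIEVER", "TOOL"
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# A chain invocation might be recorded as a tree like this:
trace = Span("RunnableSequence", "CHAIN", {"input": "What is Phoenix?"}, [
    Span("Retriever", "RETRIEVER", {"retrieval.documents": 2}),
    Span("ChatPromptTemplate", "CHAIN"),
    Span("ChatOpenAI", "LLM", {"llm.token_count.total": 120}),
])

child_kinds = [child.kind for child in trace.children]
print(child_kinds)  # → ['RETRIEVER', 'CHAIN', 'LLM']
```

Phoenix renders this tree as the execution graph, so latency and token usage can be attributed to individual components.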

Advanced Examples

RAG Chain with Retriever

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Create vector store
vectorstore = FAISS.from_texts(
    ["Phoenix is an LLM observability tool", "LangChain is a framework for LLMs"],
    embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

# Create RAG chain
prompt = ChatPromptTemplate.from_template(
    "Answer based on context: {context}\n\nQuestion: {question}"
)
model = ChatOpenAI(model="gpt-3.5-turbo")

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Phoenix traces the entire RAG pipeline
result = rag_chain.invoke("What is Phoenix?")
print(result)
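
The dict on the first line of rag_chain is LCEL's parallel step: every value receives the same input, and the results are collected into a dict that feeds the prompt. A plain-Python sketch of that semantics (fake_retriever is a stand-in, not a real retriever):

```python
def fake_retriever(question: str) -> list[str]:
    # Stand-in for retriever.invoke(question)
    return ["Phoenix is an LLM observability tool"]

def parallel_step(question: str) -> dict:
    # {"context": retriever, "question": RunnablePassthrough()} runs each
    # value on the same input and collects the results into a dict
    return {"context": fake_retriever(question), "question": question}

inputs = parallel_step("What is Phoenix?")
print(inputs["question"])  # → What is Phoenix?
```

In the trace, this step appears as a parent span with the retriever call as a child, so you can inspect exactly which documents were fed into the prompt.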

Agent with Tools

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Phoenix traces the agent's reasoning and tool use
result = agent_executor.invoke({"input": "How long is the word 'observability'?"})
print(result["output"])

Streaming Chains

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = prompt | model

# Stream the response
for chunk in chain.stream({"topic": "observability"}):
    print(chunk.content, end="", flush=True)

# Phoenix still captures the full trace
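
Conceptually (a plain-Python sketch, not the instrumentation internals), the stream can be forwarded to the caller while the span accumulates the complete completion:

```python
def stream_chunks():
    # Stand-in for chain.stream(...): yields incremental text deltas
    yield from ["Observability ", "is ", "no ", "joke."]

recorded = []                      # what the span's output would accumulate
for chunk in stream_chunks():
    recorded.append(chunk)         # record for the trace
    print(chunk, end="")           # caller still sees tokens as they arrive

full_output = "".join(recorded)    # the span stores the full response
```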

Observability in Phoenix

Once instrumented, you can:
  • Visualize chain execution graphs with all intermediate steps
  • Debug agent reasoning and tool selection
  • Inspect retrieved documents and relevance scores
  • Monitor token usage across all LLM calls
  • Track latency for each chain component
  • Analyze errors and retry behavior

Resources

  • Python Example Notebook: complete LangChain tutorial
  • Python Package: view the Python source
  • TypeScript Package: view the TypeScript source
  • LangChain Docs: the official LangChain documentation