Phoenix is built on OpenTelemetry and provides auto-instrumentation for popular LLM frameworks, SDKs, and providers. Tracing integrations allow you to observe your LLM application’s runtime behavior with minimal code changes.

Supported Frameworks

Phoenix provides first-class support for the most popular LLM application frameworks:

LangChain

Auto-instrument LangChain applications in Python and JavaScript

LlamaIndex

Trace LlamaIndex queries, retrievals, and agent workflows

OpenAI

Monitor OpenAI API calls including GPT-4, embeddings, and agents

Anthropic

Instrument Claude API calls with streaming support

OpenTelemetry

Custom instrumentation using OpenTelemetry primitives

LLM Providers

Phoenix supports instrumentation for all major LLM providers:

Python Integrations

  • OpenAI - openinference-instrumentation-openai: GPT models, embeddings, and function calling
  • Anthropic - openinference-instrumentation-anthropic: Claude models with streaming support
  • Google GenAI - openinference-instrumentation-google-genai: Gemini and PaLM models
  • AWS Bedrock - openinference-instrumentation-bedrock: Amazon Bedrock models
  • MistralAI - openinference-instrumentation-mistralai: Mistral models
  • VertexAI - openinference-instrumentation-vertexai: Google Vertex AI models
  • Groq - openinference-instrumentation-groq: Groq inference engine
  • LiteLLM - openinference-instrumentation-litellm: Unified interface for 100+ LLMs

JavaScript/TypeScript Integrations

  • OpenAI - @arizeai/openinference-instrumentation-openai: OpenAI Node.js SDK
  • LangChain.js - @arizeai/openinference-instrumentation-langchain: LangChain JavaScript framework
  • Vercel AI SDK - @arizeai/openinference-vercel: Vercel AI SDK streaming
  • BeeAI - @arizeai/openinference-instrumentation-beeai: BeeAI agent framework

Agent Frameworks

Phoenix supports popular agent and workflow frameworks:
  • DSPy - openinference-instrumentation-dspy
  • CrewAI - openinference-instrumentation-crewai
  • Haystack - openinference-instrumentation-haystack
  • Guardrails - openinference-instrumentation-guardrails
  • Instructor - openinference-instrumentation-instructor
  • Agno - openinference-instrumentation-agno
  • Pydantic AI - openinference-instrumentation-pydantic-ai

Getting Started

Most integrations follow a simple three-step pattern:
1. Install the instrumentation package

pip install openinference-instrumentation-{provider}

2. Register the Phoenix tracer

from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    auto_instrument=True,  # Auto-instrument based on installed packages
)

3. Run your application

Your LLM calls are now automatically traced and sent to Phoenix.
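Putting the three steps together, a complete local setup for the OpenAI integration might look like the following. This assumes the arize-phoenix package, which provides the local Phoenix server, and the default collector port 6006 (overridable via PHOENIX_COLLECTOR_ENDPOINT); my_app.py stands in for your application:

```shell
# Install Phoenix, the OpenAI instrumentation, and the OpenAI SDK
pip install arize-phoenix openinference-instrumentation-openai openai

# Start the local Phoenix UI/collector in the background
phoenix serve &

# Point the tracer at the local collector (this is the default)
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"

# Run the app; traces appear in the Phoenix UI under the project name
python my_app.py
```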

How It Works

Phoenix integrations use OpenTelemetry to capture traces from your LLM applications:
  1. Auto-instrumentation: Instrumentation packages automatically patch SDK methods to capture trace data
  2. OpenInference conventions: Traces follow standardized semantic conventions for LLM observability
  3. OTLP export: Trace data is exported to Phoenix using the OpenTelemetry Protocol (OTLP)
  4. Zero-code changes: Most integrations require no changes to your application code
All Phoenix integrations are open source and part of the OpenInference project.
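The patching in point 1 can be sketched in plain Python. This is an illustrative model of how an instrumentation package wraps an SDK method, not Phoenix code; FakeLLMClient, instrument, and collected_spans are all invented names:

```python
# Sketch: auto-instrumentation by monkey-patching an SDK method so
# every call emits a span-like record, with no application changes.
import functools
import time

collected_spans = []  # stands in for an OTLP exporter

class FakeLLMClient:
    """Stand-in for a provider SDK client."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def instrument(cls, method_name):
    """Patch `method_name` on `cls` to record a span around each call."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.time()
        result = original(self, *args, **kwargs)
        collected_spans.append({
            "name": f"{cls.__name__}.{method_name}",
            "input": args,
            "output": result,
            "latency_s": time.time() - start,
        })
        return result

    setattr(cls, method_name, wrapper)

instrument(FakeLLMClient, "complete")

# Application code is unchanged ("zero-code changes"):
client = FakeLLMClient()
client.complete("hello")
print(collected_spans[0]["name"])  # FakeLLMClient.complete
```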

Next Steps

OpenAI Integration

Get started with OpenAI tracing

LangChain Integration

Instrument LangChain applications

Custom Instrumentation

Build custom traces with OpenTelemetry

All Integrations

Browse all available integrations