The Traces API allows you to send trace data to Phoenix and manage trace annotations. Trace data is ingested via the OpenTelemetry Protocol (OTLP), and ingested traces can be annotated for evaluation purposes.

Endpoints

Send Traces

This endpoint is primarily used by OpenTelemetry SDKs and the Phoenix SDK. It accepts OTLP trace data in Protocol Buffer format.
POST /v1/traces
Send trace data to Phoenix using the OpenTelemetry Protocol.

Headers

Content-Type
string
required
Must be application/x-protobuf
Content-Encoding
string
Optional compression: gzip or deflate

Request Body

Binary Protocol Buffer data conforming to the OTLP ExportTraceServiceRequest schema.

Response

status
number
200 on success
Returns an ExportTraceServiceResponse in Protocol Buffer format.

Example

import requests
import gzip
from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import ExportTraceServiceRequest

# Create OTLP trace request
request = ExportTraceServiceRequest()
# ... populate with trace data ...

# Serialize and compress
body = gzip.compress(request.SerializeToString())

response = requests.post(
    "http://localhost:6006/v1/traces",
    data=body,
    headers={
        "Content-Type": "application/x-protobuf",
        "Content-Encoding": "gzip"
    }
)
Capacity Limits: The server returns HTTP 503 if the span processing queue is full. Implement retry logic with exponential backoff for production systems.
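A minimal retry sketch for the 503 case, assuming the endpoint and payload from the example above (the retry count and backoff schedule are illustrative, not prescribed by the API):

```python
import time
import requests

def post_with_backoff(url, body, headers, max_retries=5,
                      post=requests.post, sleep=time.sleep):
    """POST an OTLP payload, retrying on HTTP 503 with exponential backoff."""
    delay = 1.0
    response = None
    for attempt in range(max_retries):
        response = post(url, data=body, headers=headers)
        if response.status_code != 503:
            break  # success or a non-capacity error; stop retrying
        sleep(delay)
        delay *= 2  # double the wait before the next attempt
    return response

# Usage with the compressed body from the example above:
# response = post_with_backoff(
#     "http://localhost:6006/v1/traces",
#     body,
#     {"Content-Type": "application/x-protobuf", "Content-Encoding": "gzip"},
# )
```

The `post` and `sleep` parameters default to `requests.post` and `time.sleep`; they are exposed only so the retry loop can be exercised without a live server.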

Create Trace Annotations

POST /v1/trace_annotations
Create or update annotations (labels, scores, metadata) for traces.

Query Parameters

sync
boolean
default:"false"
If true, the request is processed synchronously and returns the created annotation IDs. If false, the request is queued for asynchronous processing.

Request Body

data
array
required
Array of trace annotation objects

Response

data
array
Array of created annotation objects (only returned when sync=true)

Example

import requests

url = "http://localhost:6006/v1/trace_annotations"
headers = {
    "Authorization": "Bearer your-api-key",
    "Content-Type": "application/json"
}

data = {
    "data": [
        {
            "trace_id": "abc123def456",
            "name": "correctness",
            "annotator_kind": "LLM",
            "result": {
                "score": 0.95,
                "label": "correct",
                "explanation": "The response accurately answers the question"
            },
            "metadata": {
                "model": "gpt-4",
                "temperature": 0.0
            }
        }
    ]
}

response = requests.post(url, json=data, headers=headers, params={"sync": True})
print(response.json())

Delete Trace

DELETE /v1/traces/{trace_identifier}
Delete an entire trace by its identifier.
This operation is permanent and will delete all spans in the trace and their associated data.

Path Parameters

trace_identifier
string
required
The trace identifier - either a Relay Global ID or an OpenTelemetry trace_id (hex string)

Response

Returns HTTP 204 (No Content) on success.

Example

import requests

url = "http://localhost:6006/v1/traces/abc123def456"
headers = {"Authorization": "Bearer your-api-key"}

response = requests.delete(url, headers=headers)
print(response.status_code)  # 204

Using the Phoenix SDK

For easier trace management, use the Phoenix Python SDK:
from phoenix.trace import using_project

# Route traces emitted inside this block to the "my-project" project
with using_project("my-project"):
    # Your instrumented application code here
    # Traces are automatically sent to Phoenix
    pass
The SDK handles:
  • Automatic OTLP trace serialization
  • Connection management and retries
  • Batch processing for efficiency
  • Project name configuration
See the Tracing documentation for complete SDK usage.

Error Handling

404
error
Trace not found (for DELETE operations)
422
error
Invalid request body or trace data
503
error
Server at capacity - the span processing queue is full. Retry with exponential backoff.

Best Practices

Use Async Mode

Use sync=false (default) for trace annotations to avoid blocking on database writes

Batch Annotations

Send multiple trace annotations in a single request to reduce HTTP overhead
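A sketch of batching, building on the annotation example above: pack several annotation objects into the one `data` array the endpoint accepts, so a single HTTP call carries all of them. The helper name and the trace IDs/scores are illustrative.

```python
def batch_annotations(rows):
    """Pack several (trace_id, score, label) rows into one request body."""
    return {
        "data": [
            {
                "trace_id": trace_id,
                "name": "correctness",
                "annotator_kind": "LLM",
                "result": {"score": score, "label": label},
            }
            for trace_id, score, label in rows
        ]
    }

# One POST carries every annotation (IDs and scores are illustrative):
# requests.post(
#     "http://localhost:6006/v1/trace_annotations",
#     json=batch_annotations([("abc123def456", 0.95, "correct"),
#                             ("789ghi012jkl", 0.20, "incorrect")]),
#     headers={"Authorization": "Bearer your-api-key"},
# )
```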

Handle 503 Errors

Implement retry logic with exponential backoff when the server is at capacity

Use Compression

Enable gzip compression for large trace payloads to reduce bandwidth