Observability

LLM Traces

LLM Traces is an observability capability available through Arize Phoenix and Sentry on Aweb. It provides distributed tracing for LLM calls and chains. Access it through a single unified API with automatic failover and intelligent routing.


Best for

- Highest quality: Arize Phoenix, Sentry (premium tier)
- Most affordable: Arize Phoenix (economy tier)

Contract

Max latency: 5000 ms
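The 5000 ms figure is the contract's latency ceiling. One way to enforce such a ceiling on the client side is to race the request against a timer; this is an illustrative sketch, not part of the Alfred SDK, and the helper name is an assumption.

```typescript
// Client-side enforcement sketch for the 5000 ms max-latency contract.
// `withTimeout` is a hypothetical helper, not an Alfred SDK export.
const MAX_LATENCY_MS = 5000; // mirrors the contract above

function withTimeout<T>(work: Promise<T>, ms: number = MAX_LATENCY_MS): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`exceeded ${ms} ms latency contract`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards so a
  // successful call does not leave a pending rejection behind.
  return Promise.race([work, timeout]).finally(() => {
    if (timer !== undefined) clearTimeout(timer);
  });
}
```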

Providers (2)

Provider                  Score   Quality   Pricing
Arize Phoenix (default)   80      premium   economy
Sentry                    85      premium   standard
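The table suggests a routing order: the default provider is tried first, with the remaining providers as failover candidates ranked by score. The sketch below models that ordering from the table's data; the field names and selection logic are assumptions for illustration, not Alfred's actual routing algorithm.

```typescript
// Illustrative model of provider ordering from the table above.
// Field names and the ordering rule are assumptions, not Alfred internals.
interface Provider {
  name: string;
  score: number;
  pricing: 'economy' | 'standard' | 'premium';
  isDefault?: boolean;
}

const providers: Provider[] = [
  { name: 'Arize Phoenix', score: 80, pricing: 'economy', isDefault: true },
  { name: 'Sentry', score: 85, pricing: 'standard' },
];

// Default provider first, then the rest by descending score,
// yielding the order in which failover would be attempted.
function failoverOrder(list: Provider[]): string[] {
  const def = list.filter((p) => p.isDefault);
  const rest = list
    .filter((p) => !p.isDefault)
    .sort((a, b) => b.score - a.score);
  return [...def, ...rest].map((p) => p.name);
}
```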

Quick start

Call LLM Traces through Alfred — automatic provider selection, failover, and load balancing included.

cURL

curl -X POST https://api.alfred-ai.app/v1/execute \
  -H "Authorization: Bearer $ALFRED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "capability": "observability.traces",
    "input": { "prompt": "Hello world" }
  }'

TypeScript

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Alfred automatically selects the best provider
const result = await alfred.execute({
  capability: 'observability.traces',
  input: { prompt: 'Hello world' },
});

console.log(result.output);
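The failover behaviour mentioned above can be pictured as trying each provider call in order and returning the first success. This is a self-contained sketch of that pattern; the `ProviderCall` type and error handling are assumptions, not the Alfred SDK's internals.

```typescript
// Sketch of sequential failover: try each provider in order,
// return the first success, surface the last error if all fail.
// Illustrative only; not the Alfred SDK implementation.
type ProviderCall = () => Promise<string>;

async function executeWithFailover(calls: ProviderCall[]): Promise<string> {
  let lastError: unknown;
  for (const call of calls) {
    try {
      return await call();
    } catch (err) {
      lastError = err; // this provider failed; fall through to the next
    }
  }
  throw lastError ?? new Error('no providers configured');
}
```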

Orchestration pipeline

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Multi-step pipeline with automatic failover
const result = await alfred.orchestrate({
  steps: [
    { id: 'step1', capability: 'observability.traces', input: { prompt: 'Hello world' } },
    { id: 'step2', capability: 'llm.chat', dependsOn: ['step1'],
      input: { prompt: 'Summarize: $step1.output' } },
  ],
});
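In the pipeline above, `step2` references the earlier step's result with the `$step1.output` placeholder. A minimal sketch of that substitution, assuming placeholders take the form `$<stepId>.output` (the real resolver may differ):

```typescript
// Minimal placeholder resolver for "$<stepId>.output" references,
// as used in the pipeline above. Assumed syntax; illustrative only.
function resolvePlaceholders(
  template: string,
  outputs: Record<string, string>,
): string {
  // Unknown step ids are left untouched rather than erased.
  return template.replace(/\$(\w+)\.output/g, (match, id: string) =>
    outputs[id] ?? match,
  );
}
```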

Related Observability capabilities

- LLM Evaluations
- LLM Experiments
- Error Tracking
- Alert Management
- AI Guardrails

Getting started → · API reference → · All providers → · All capabilities →