Large Language Models

Chat Completion

Chat Completion is a Large Language Models capability available through Anthropic, OpenAI, Groq, and three more providers on Aweb. It performs synchronous text generation from conversational input, accessed through a single unified API with automatic failover and intelligent routing.

Best for

Highest quality: Anthropic, OpenAI (premium tier)
Most affordable: Groq, Google Gemini (economy tier)

Contract

Max Latency: 30000 ms
Min Quality: high

Providers (6)

Provider             Score  Quality  Pricing
Anthropic (default)  95     premium  premium
OpenAI               85     premium  premium
Groq                 92     premium  economy
Mistral AI           88     premium  standard
Perplexity           80     premium  standard
Google Gemini        90     premium  economy

Quick start

Call Chat Completion through Alfred — automatic provider selection, failover, and load balancing included.

cURL

curl -X POST https://api.alfred-ai.app/v1/execute \
  -H "Authorization: Bearer $ALFRED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "capability": "llm.chat",
    "input": { "prompt": "Hello world" }
  }'

TypeScript

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Alfred automatically selects the best provider
const result = await alfred.execute({
  capability: 'llm.chat',
  input: { prompt: 'Hello world' },
});

console.log(result.output);
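The automatic failover mentioned above can be sketched as a try-next-on-error loop. Everything here (the `Caller` type, `executeWithFailover`, the simulated providers) is an illustrative assumption, not Alfred's actual implementation.

```typescript
// Failover sketch: attempt providers in preference order and fall through
// to the next one when a call throws.
type Caller = { name: string; call: (prompt: string) => string };

function executeWithFailover(
  callers: Caller[],
  prompt: string,
): { provider: string; output: string } {
  for (const c of callers) {
    try {
      return { provider: c.name, output: c.call(prompt) };
    } catch {
      // This provider failed (e.g. rate limited); try the next.
    }
  }
  throw new Error('all providers failed');
}

// Simulated run: the first provider errors, the second answers.
const demo = executeWithFailover(
  [
    { name: 'primary', call: () => { throw new Error('rate limited'); } },
    { name: 'backup', call: (p) => `echo: ${p}` },
  ],
  'Hello world',
);
console.log(demo.provider, demo.output); // backup echo: Hello world
```

The caller never sees the primary's failure; it only receives the first successful response, which is the behavior the unified API promises.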

Orchestration pipeline

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Multi-step pipeline with automatic failover
const result = await alfred.orchestrate({
  steps: [
    { id: 'step1', capability: 'llm.chat', input: { prompt: 'Hello world' } },
    { id: 'step2', capability: 'llm.chat', dependsOn: ['step1'],
      input: { prompt: 'Summarize: $step1.output' } },
  ],
});
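The pipeline above passes one step's result into the next via the `$step1.output` placeholder. How Alfred resolves these references internally is not documented here; the resolver below is a self-contained sketch of the obvious string-substitution reading of that syntax.

```typescript
// Sketch of resolving `$<stepId>.output` references in a step's input,
// given the outputs of previously completed steps.
function resolveRefs(template: string, outputs: Record<string, string>): string {
  return template.replace(/\$(\w+)\.output/g, (match, id) =>
    id in outputs ? outputs[id] : match, // leave unknown refs untouched
  );
}

const prompt = resolveRefs('Summarize: $step1.output', { step1: 'Hello world' });
console.log(prompt); // Summarize: Hello world
```

Because `step2` declares `dependsOn: ['step1']`, the orchestrator can guarantee `step1`'s output exists before interpolating it into `step2`'s prompt.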

Related Large Language Models capabilities

Streaming Chat (llm)
Vision Analysis (llm)
Structured Output (llm)
Fast LLM Inference (llm)
Code Completion (llm)
