Large Language Models

Code Completion

Code Completion is a Large Language Models capability available through Mistral AI on Alfred. It provides fill-in-the-middle code completion using prefix and suffix context, accessed through a single unified API with automatic failover and intelligent routing.
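For fill-in-the-middle, the request carries the code before and after the insertion point. The exact field names are not shown on this page, so the sketch below assumes hypothetical `prefix` and `suffix` fields inside `input`; treat the shape as illustrative, not a confirmed API contract.

```typescript
// Hypothetical fill-in-the-middle payload. The `prefix`/`suffix` field
// names are assumptions; only `capability` and `input` appear in the docs.
interface FimInput {
  prefix: string; // code before the cursor
  suffix: string; // code after the cursor
}

function buildFimRequest(prefix: string, suffix: string) {
  return {
    capability: "llm.code-complete",
    input: { prefix, suffix } as FimInput,
  };
}

const req = buildFimRequest(
  "function add(a: number, b: number) {\n  return ",
  ";\n}",
);
console.log(JSON.stringify(req, null, 2));
```

The model would be expected to fill the gap between `prefix` and `suffix` (here, `a + b`).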

Best for: Highest quality
Provider: Mistral AI
Tier: Premium
Pricing model: Contract
Max latency: 2000 ms

Providers (1)

Provider               Score  Quality  Pricing
Mistral AI (default)   95     premium  standard
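The "intelligent routing" described above can be pictured as score-based selection under the contract's latency cap, with remaining candidates available as failover targets. The data and selection rule here are illustrative assumptions, not Alfred's actual routing algorithm.

```typescript
interface Provider {
  name: string;
  score: number;        // quality score, higher is better
  maxLatencyMs: number; // worst-case latency the provider is rated for
}

// Illustrative data only; Alfred's real routing inputs are not documented here.
const providers: Provider[] = [
  { name: "Mistral AI", score: 95, maxLatencyMs: 2000 },
];

// Pick the best-scoring provider within the latency cap. A caller can retry
// with the remaining candidates to emulate automatic failover.
function selectProvider(
  candidates: Provider[],
  latencyCapMs: number,
): Provider | undefined {
  return candidates
    .filter((p) => p.maxLatencyMs <= latencyCapMs)
    .sort((a, b) => b.score - a.score)[0];
}
```

With the single provider above, `selectProvider(providers, 2000)` returns Mistral AI, and a tighter cap of 1000 ms returns no candidate.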

Quick start

Call Code Completion through Alfred — automatic provider selection, failover, and load balancing included.

cURL

curl -X POST https://api.alfred-ai.app/v1/execute \
  -H "Authorization: Bearer $ALFRED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "capability": "llm.code-complete",
    "input": { "prompt": "Hello world" }
  }'

TypeScript

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Alfred automatically selects the best provider
const result = await alfred.execute({
  capability: 'llm.code-complete',
  input: { prompt: 'def fibonacci(n):' },
});

console.log(result.output);

Orchestration pipeline

import { Alfred } from '@alfred/core';

const alfred = new Alfred({ apiKey: process.env.ALFRED_API_KEY });

// Multi-step pipeline with automatic failover
const result = await alfred.orchestrate({
  steps: [
    { id: 'step1', capability: 'llm.code-complete', input: { prompt: 'Hello world' } },
    { id: 'step2', capability: 'llm.chat', dependsOn: ['step1'],
      input: { prompt: 'Summarize: $step1.output' } },
  ],
});
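The `$step1.output` reference in `step2`'s prompt suggests template-style interpolation of earlier step results. A minimal sketch of how such references could resolve, assuming `$<stepId>.output` placeholders map to completed step outputs (the resolver itself is illustrative, not Alfred's implementation):

```typescript
// Replace $<stepId>.output placeholders with the matching step's output.
// The placeholder syntax comes from the pipeline example above; the
// resolution logic is an assumption for illustration.
function resolveStepRefs(
  template: string,
  outputs: Record<string, string>,
): string {
  return template.replace(/\$(\w+)\.output/g, (match, stepId) =>
    stepId in outputs ? outputs[stepId] : match,
  );
}

const prompt = resolveStepRefs("Summarize: $step1.output", {
  step1: "console.log('Hello world');",
});
// prompt === "Summarize: console.log('Hello world');"
```

References to steps that have not completed are left untouched, so a pipeline runner could re-resolve the template once the dependency's output arrives.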

Related Large Language Models capabilities

- Chat Completion
- Streaming Chat
- Vision Analysis
- Structured Output
- Fast LLM Inference
