Error Handling

How to catch and handle Headroom errors in Python and TypeScript. Error hierarchy, proxy error mapping, and safety guarantees.

Headroom provides explicit exceptions for debugging, with a core safety guarantee: compression failures never break your LLM calls. If compression fails, the original content passes through unchanged.

Error Hierarchy

TypeScript:

HeadroomError (base class)
  +-- HeadroomConnectionError   # Cannot reach proxy
  +-- HeadroomAuthError         # 401 from proxy
  +-- HeadroomCompressError     # Compression failed (with statusCode)
  +-- ConfigurationError        # Invalid configuration
  +-- ProviderError             # Provider issues
  +-- StorageError              # Storage failures
  +-- TokenizationError         # Token counting failed
  +-- CacheError                # Cache operations failed
  +-- ValidationError           # Validation failures
  +-- TransformError            # Transform execution failed
import {
  HeadroomError,
  HeadroomConnectionError,
  HeadroomAuthError,
  HeadroomCompressError,
  ConfigurationError,
  ProviderError,
  StorageError,
} from 'headroom-ai';
Python:

HeadroomError (base class)
  +-- ConfigurationError     # Invalid configuration
  +-- ProviderError          # Provider issues (unknown model, etc.)
  +-- StorageError           # Database/storage failures
  +-- CompressionError       # Compression failures (rare)
  +-- ValidationError        # Setup validation failures
from headroom import (
    HeadroomError,
    ConfigurationError,
    ProviderError,
    StorageError,
    CompressionError,
    ValidationError,
)
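Because every exception subclasses HeadroomError, a single `except HeadroomError` handler also catches any subclass. The sketch below illustrates that subclassing pattern with a minimal standalone mirror of the hierarchy (illustrative classes only, not the SDK's actual source):

```python
# Illustrative mirror of the hierarchy above -- not the SDK's source code.
class HeadroomError(Exception):
    """Base class; carries an optional details dict for extra context."""
    def __init__(self, message, details=None):
        super().__init__(message)
        self.details = details or {}

class ConfigurationError(HeadroomError):
    pass

class StorageError(HeadroomError):
    pass

try:
    raise ConfigurationError("invalid mode", details={"field": "default_mode"})
except HeadroomError as e:  # the base class catches every subclass
    print(type(e).__name__, e.details)
    # ConfigurationError {'field': 'default_mode'}
```

Catching the base class is the right default for "log and continue" code paths; catch a specific subclass only when you intend to handle it differently.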

Catching Errors

import { compress, HeadroomError, HeadroomConnectionError, HeadroomAuthError, HeadroomCompressError } from 'headroom-ai';

try {
  const result = await compress(messages, { model: 'gpt-4o' });
} catch (err) {
  if (err instanceof HeadroomConnectionError) {
    console.error('Cannot reach proxy:', err.message);
  } else if (err instanceof HeadroomAuthError) {
    console.error('Auth failed:', err.message);
  } else if (err instanceof HeadroomCompressError) {
    console.error(`Compress failed (${err.statusCode}):`, err.message);
  } else if (err instanceof HeadroomError) {
    console.error('Headroom error:', err.message, err.details);
  }
}
from headroom import (
    HeadroomClient,
    HeadroomError,
    ConfigurationError,
    StorageError,
)

try:
    client = HeadroomClient(...)
    response = client.chat.completions.create(...)

except ConfigurationError as e:
    print(f"Config issue: {e}")
    print(f"Details: {e.details}")

except StorageError as e:
    print(f"Storage issue: {e}")
    # Headroom continues to work, just without metrics persistence

except HeadroomError as e:
    print(f"Headroom error: {e}")

Error Types in Detail

ConfigurationError

Raised when configuration is invalid.

import { ConfigurationError } from 'headroom-ai';

// ConfigurationError is thrown when the proxy returns
// a configuration_error type in its error response
try:
    client = HeadroomClient(
        original_client=OpenAI(),
        provider=OpenAIProvider(),
        default_mode="invalid_mode",  # Will raise ConfigurationError
    )
except ConfigurationError as e:
    print(f"Config error: {e}")
    print(f"Field: {e.details.get('field')}")

ProviderError

Raised for provider-specific issues (unknown model, API error, token counting failure).

try:
    response = client.chat.completions.create(
        model="unknown-model-xyz",
        messages=[...],
    )
except ProviderError as e:
    print(f"Provider error: {e}")
    print(f"Provider: {e.details.get('provider')}")

StorageError

Raised when database operations fail. Storage errors do not affect core functionality -- the application can continue without historical metrics.

try:
    metrics = client.get_metrics()
except StorageError as e:
    metrics = []  # Continue without historical metrics

CompressionError

Raised when compression fails (rare). In practice, compression errors are caught internally and the original content passes through unchanged. This exception is only raised in strict mode.

HeadroomConnectionError (TypeScript)

Raised when the TypeScript SDK cannot connect to the Headroom proxy.

import { compress, HeadroomConnectionError } from 'headroom-ai';

try {
  await compress(messages, { model: 'gpt-4o' });
} catch (err) {
  if (err instanceof HeadroomConnectionError) {
    console.error('Is the proxy running? Start with: headroom proxy');
  }
}

Proxy Error Mapping

The TypeScript SDK automatically maps proxy error responses to the correct error class:

HTTP Status   Proxy Error Type       TypeScript Class
401           --                     HeadroomAuthError
4xx/5xx       configuration_error    ConfigurationError
4xx/5xx       provider_error         ProviderError
4xx/5xx       storage_error          StorageError
4xx/5xx       tokenization_error     TokenizationError
4xx/5xx       validation_error       ValidationError
4xx/5xx       transform_error        TransformError
4xx/5xx       (other)                HeadroomCompressError

The mapProxyError() function handles this mapping:

import { mapProxyError } from 'headroom-ai';

const err = mapProxyError(400, 'configuration_error', 'Invalid mode');
// Returns a ConfigurationError instance
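The mapping logic itself is simple: a 401 always becomes an auth error, a recognized error type selects its class, and anything else falls back to the compress error. Here is a hypothetical Python rendering of the table above (a sketch, not the SDK's implementation):

```python
# Hypothetical sketch of the proxy-error mapping -- not the SDK's code.
class HeadroomError(Exception):
    pass

class HeadroomAuthError(HeadroomError):
    pass

class HeadroomCompressError(HeadroomError):
    def __init__(self, message, status_code, error_type=None):
        super().__init__(message)
        self.status_code = status_code
        self.error_type = error_type

class ConfigurationError(HeadroomError):
    pass

class ProviderError(HeadroomError):
    pass

_BY_TYPE = {
    "configuration_error": ConfigurationError,
    "provider_error": ProviderError,
    # ...one entry per row in the table above
}

def map_proxy_error(status, error_type, message):
    if status == 401:
        return HeadroomAuthError(message)
    cls = _BY_TYPE.get(error_type)
    if cls is not None:
        return cls(message)
    return HeadroomCompressError(message, status, error_type)

err = map_proxy_error(400, "configuration_error", "Invalid mode")
print(type(err).__name__)  # ConfigurationError
```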

Error Details

All Headroom exceptions include a details dict/object with additional context:

import { HeadroomError } from 'headroom-ai';

// HeadroomError.details is Record<string, any> | undefined
// HeadroomCompressError also has .statusCode and .errorType
try:
    client = HeadroomClient(...)
except HeadroomError as e:
    print(f"Error: {e}")
    print(f"Type: {type(e).__name__}")
    print(f"Details: {e.details}")
    # Details might include:
    # - field: which config field caused the error
    # - provider: which provider was involved
    # - model: which model was requested
    # - original_error: underlying exception

Safety Guarantee

If compression fails, the original content passes through unchanged. Your LLM calls never fail due to Headroom:

messages = [
    {"role": "tool", "content": "malformed json {{{"}
]

# This will NOT raise an exception
# The malformed content passes through unchanged
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)
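Conceptually, the guarantee is a try/except around each transform with a pass-through fallback. A simplified standalone illustration (stub compressor, not the SDK internals):

```python
import json

def compress_tool_output(content: str) -> str:
    """Stub transform: re-serialize JSON without whitespace."""
    return json.dumps(json.loads(content), separators=(",", ":"))

def safe_compress(content: str) -> str:
    # Safety guarantee: any transform failure falls back to the original.
    try:
        return compress_tool_output(content)
    except Exception:
        return content

print(safe_compress('{"a": 1}'))            # {"a":1}
print(safe_compress("malformed json {{{"))  # malformed json {{{
```

Valid JSON comes back compacted; the malformed string raises inside the transform, is swallowed, and returns unchanged, so the downstream LLM call proceeds either way.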

Best Practices

  1. Catch specific exceptions rather than broad Exception to avoid hiding real bugs
  2. Let StorageError pass -- storage errors do not affect core compression functionality
  3. Validate on startup with client.validate_setup() to catch configuration issues early
  4. Enable logging at WARNING level to see when compression is skipped:

import logging
logging.basicConfig(level=logging.WARNING)
# WARNING:headroom.transforms.smart_crusher:Skipping compression: invalid JSON
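To see how that warning surfaces, here is a self-contained sketch using a namespaced logger (the logger name is taken from the sample output above; the capture buffer is only for demonstration):

```python
import io
import logging

# Libraries typically log to namespaced loggers; setting WARNING level
# surfaces skipped-compression messages while suppressing debug chatter.
log = logging.getLogger("headroom.transforms.smart_crusher")
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
log.addHandler(handler)
log.setLevel(logging.WARNING)

log.debug("not shown at WARNING level")
log.warning("Skipping compression: invalid JSON")
print(buffer.getvalue().strip())
# WARNING:headroom.transforms.smart_crusher:Skipping compression: invalid JSON
```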