
# Failure Learning

Offline failure analysis for coding agents.

`headroom learn` analyzes past coding agent sessions, finds what went wrong, correlates each failure with what eventually worked, and writes specific project-level learnings that prevent the same mistakes in the next session.

## Quick Start

```shell
# See recommendations for current project (dry-run, no changes)
headroom learn

# Write recommendations to CLAUDE.md and MEMORY.md
headroom learn --apply

# Analyze a specific project
headroom learn --project ~/my-project --apply

# Analyze all projects
headroom learn --all --apply
```

## Success Correlation

The core innovation: instead of cataloging failures ("Read failed 5 times"), Headroom finds what the model did to fix each one:

- Failed: `Read axion-formats/src/main/java/.../FirstClassEntity.java`
- Then succeeded: `Read axion-scala-common/src/main/scala/.../FirstClassEntity.scala`
- Learning: "FirstClassEntity is at `axion-scala-common/`, not `axion-formats/`"

This produces specific, actionable corrections -- not generic advice.
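The pairing above can be sketched in a few lines of Python. This is an illustrative sketch, not Headroom's actual implementation: the `ToolCall` shape and the `correlate_failures` name are assumptions, and a real analyzer would also check that the failed and successful arguments are actually related before emitting a correction.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str  # e.g. "Read", "Grep", "Bash"
    arg: str   # the primary argument, e.g. a file path
    ok: bool   # whether the call succeeded

def correlate_failures(calls: list[ToolCall]) -> list[tuple[str, str]]:
    """Pair each failed call with the next success of the same tool.

    The (failed_arg, succeeded_arg) pairs are the raw material for
    corrections like "X is actually at Y".
    """
    pairs = []
    for i, call in enumerate(calls):
        if call.ok:
            continue
        # Scan forward for the first later success of the same tool.
        for later in calls[i + 1:]:
            if later.tool == call.tool and later.ok:
                pairs.append((call.arg, later.arg))
                break
    return pairs
```

Running this over the example session above would pair the failed `.java` read with the succeeding `.scala` read, which is exactly the correction that ends up in the learnings.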

## What It Learns

### Environment Facts

Which runtime commands work vs. fail.

```markdown
### Environment
- **Python**: use `uv run python` (not `python3` -- modules not available outside venv)
```

### File Path Corrections

Wrong paths the model keeps guessing, with the correct locations.

```markdown
### File Path Corrections
- `axion-common/src/.../AxionSparkConstants.scala`
  -> actually at `axion-spark-common/src/.../AxionSparkConstants.scala`
```

### Search Scope

Which directories to search in (narrow paths fail; broader ones work).

```markdown
### Search Scope
- Don't search `axion-model/` -> use `axion/` (the repo root)
```

### Command Patterns

How commands should (and should not) be run.

```markdown
### Command Patterns
- **user_prefers_manual**: User rejected gradle 18 times -- show the command, don't execute
- **python_runtime**: Use `uv run python` not `python3` (ModuleNotFoundError)
```

### Known Large Files

Files that need `offset`/`limit` with Read.

```markdown
### Known Large Files
- `proxy/server.py` (~8000 lines) -- always use offset/limit
```

## Where Learnings Go

| Pattern | Destination | Why |
| --- | --- | --- |
| Environment, paths, search scope, commands, large files | `CLAUDE.md` | Stable project facts, version-controllable |
| Missing paths, retry patterns, permissions | `MEMORY.md` | May change, agent-specific |

`CLAUDE.md` lives in your project directory. `MEMORY.md` lives in `~/.claude/projects/*/memory/`.

## Marker-Based Updates

Headroom manages a clearly delimited section in each file:

```markdown
<!-- headroom:learn:start -->
## Headroom Learned Patterns
*Auto-generated by `headroom learn` -- do not edit manually*
...
<!-- headroom:learn:end -->
```

On re-run, only the content between the markers is replaced; your existing file content is preserved.
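The replace-between-markers step can be sketched with a single regex pass. This is a minimal sketch assuming only the marker strings shown above; the function name and the append-if-missing behavior are illustrative assumptions, not Headroom's documented semantics:

```python
import re

START = "<!-- headroom:learn:start -->"
END = "<!-- headroom:learn:end -->"

def update_managed_section(text: str, new_body: str) -> str:
    """Replace the content between the markers, preserving everything
    else; if the markers are absent, append a fresh marked section."""
    section = f"{START}\n{new_body}\n{END}"
    pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
    if pattern.search(text):
        # Callable replacement avoids re.sub interpreting backslashes
        # that may appear in the generated body.
        return pattern.sub(lambda m: section, text)
    return text.rstrip("\n") + "\n\n" + section + "\n"
```

The non-greedy `.*?` with `re.DOTALL` keeps the match confined to the first start/end pair, so hand-written content before and after the section is never touched.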

## Architecture

The system is built with an adapter pattern so it can support multiple agent systems:

- **Scanners** read tool-specific log formats (e.g., `~/.claude/projects/*.jsonl`) and produce normalized `ToolCall` sequences
- **Analyzers** work on `ToolCall` data -- the same analysis logic applies to any agent system
- **Writers** output to tool-specific context injection mechanisms (e.g., `CLAUDE.md`)

To add support for a new agent (e.g., Cursor), you write a Scanner that reads its log format and a Writer that outputs to `.cursorrules`. The analyzers stay the same.
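The three seams can be sketched as Python protocols. The names and signatures here are illustrative assumptions, not Headroom's actual interfaces; the point is that only `scan` and `write` are tool-specific, while the analysis in the middle is not:

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class ToolCall:
    tool: str
    arg: str
    ok: bool

class Scanner(Protocol):
    def scan(self, log_dir: str) -> Iterable[ToolCall]:
        """Parse one tool's log format into normalized ToolCalls."""
        ...

class Writer(Protocol):
    def write(self, project_dir: str, learnings: List[str]) -> None:
        """Emit learnings in one tool's context-injection format."""
        ...

def analyze(calls: Iterable[ToolCall]) -> List[str]:
    """Tool-agnostic: operates only on normalized ToolCall data."""
    return [f"{c.tool} failed on {c.arg}" for c in calls if not c.ok]
```

A Cursor adapter would then be one `Scanner` implementation plus one `Writer` implementation; `analyze` is reused unchanged.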

## CLI Reference

```
headroom learn [OPTIONS]

Options:
  --project PATH     Project directory to analyze (default: current directory)
  --all              Analyze all discovered projects
  --apply            Write recommendations (default: dry-run)
  --claude-dir PATH  Path to .claude directory (default: ~/.claude)
```
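For reference, that option surface maps onto a standard-library parser roughly as below. This is a sketch of the CLI's shape, not the actual implementation:

```python
import argparse
from pathlib import Path

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="headroom learn")
    p.add_argument("--project", type=Path, default=Path.cwd(),
                   help="Project directory to analyze")
    p.add_argument("--all", action="store_true",
                   help="Analyze all discovered projects")
    p.add_argument("--apply", action="store_true",
                   help="Write recommendations (default: dry-run)")
    p.add_argument("--claude-dir", type=Path, default=Path.home() / ".claude",
                   help="Path to .claude directory")
    return p
```

Note that `--apply` defaults to false, which is what makes the bare `headroom learn` invocation a safe dry run.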

## Real-World Results

Tested on 67,583 tool calls across 23 projects:

| Metric | Value |
| --- | --- |
| Failure rate | 7.5% (5,066 failures) |
| Corrections extracted | 164 per project (avg) |
| Path corrections | 22 (axion project) |
| Search scope corrections | 24 (axion project) |
| Command patterns learned | 5 (axion project) |
