Xerg surfaces five finding kinds. Two are confirmed waste; the other three are savings opportunities that should be tested rather than treated as proven waste. For the conceptual view of how these categories are meant to be interpreted, see waste taxonomy.

Finding taxonomy

| Kind | Terminal label | Classification | Confidence | Meaning |
| --- | --- | --- | --- | --- |
| retry-waste | Retry waste | waste | high | Failed calls were followed by more work, so that spend is pure retry overhead. |
| loop-waste | Loop waste | waste | high | A run reached at least 7 iterations, and spend after iteration 5 is treated as likely loop waste. |
| context-outlier | Context bloat | opportunity | medium | A workflow with at least 3 runs had one or more runs whose input token volume was far above its own baseline. |
| idle-spend | Idle waste | opportunity | medium | A workflow name looks like a recurring heartbeat, cron, monitor, or poll loop. |
| candidate-downgrade | Downgrade candidates | opportunity | low | An expensive model appears to be used on a simple operational workflow. Treat this as an A/B test candidate. |
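The loop-waste rule in the table can be sketched as a small helper: a run qualifies once it reaches 7 iterations, and spend after iteration 5 is counted as likely loop waste. The constant names and function below are illustrative, not part of the Xerg API.

```typescript
// Illustrative thresholds matching the documented loop-waste rule.
const LOOP_TRIGGER_ITERATIONS = 7; // run must reach this many iterations
const LOOP_FREE_ITERATIONS = 5;    // spend after this iteration counts as waste

// Given per-iteration spend for one run, return the portion treated
// as likely loop waste (0 if the run never triggered the rule).
function loopWasteUsd(spendPerIterationUsd: number[]): number {
  if (spendPerIterationUsd.length < LOOP_TRIGGER_ITERATIONS) return 0;
  return spendPerIterationUsd
    .slice(LOOP_FREE_ITERATIONS)
    .reduce((sum, usd) => sum + usd, 0);
}
```

A 7-iteration run with $1 spend per iteration would report $2 of loop waste (iterations 6 and 7), while a 6-iteration run reports none.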

What counts as confirmed waste

Only retry-waste and loop-waste count toward:
  • wasteSpendUsd
  • structuralWasteRate
  • --fail-above-waste-rate
  • --fail-above-waste-usd
The opportunity classes roll up into opportunitySpendUsd and are shown as directional savings opportunities.
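The split can be sketched as a rollup over findings. The minimal Finding shape and helper below are hypothetical, written only to show which kinds feed wasteSpendUsd versus opportunitySpendUsd.

```typescript
type FindingKind =
  | "retry-waste"
  | "loop-waste"
  | "context-outlier"
  | "idle-spend"
  | "candidate-downgrade";

// Hypothetical minimal finding shape for illustration.
interface Finding {
  kind: FindingKind;
  spendUsd: number;
}

// Only the two confirmed-waste kinds count toward wasteSpendUsd.
const WASTE_KINDS: FindingKind[] = ["retry-waste", "loop-waste"];

function rollUp(findings: Finding[]) {
  let wasteSpendUsd = 0;
  let opportunitySpendUsd = 0;
  for (const f of findings) {
    if (WASTE_KINDS.includes(f.kind)) wasteSpendUsd += f.spendUsd;
    else opportunitySpendUsd += f.spendUsd;
  }
  return { wasteSpendUsd, opportunitySpendUsd };
}
```

Under this sketch, a $3 retry-waste finding and a $2 idle-spend finding yield wasteSpendUsd of 3 and opportunitySpendUsd of 2; only the former can trip --fail-above-waste-usd.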

Recommendation objects

When you run xerg audit --json, Xerg adds a recommendations array derived from the findings. Each recommendation uses the @xerg/schemas shape:
| Field | Meaning |
| --- | --- |
| id | Stable recommendation id |
| findingId | The finding this recommendation came from |
| kind | The source finding kind |
| title | Human-readable next step |
| description | Why this recommendation exists |
| estimatedSavingsUsd | Expected dollar impact |
| confidence | high, medium, or low |
| actionType | One of model-switch, cache-config, prompt-trim, dedup, or other |
| suggestedChange | Optional structured hint for automation |
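As a rough TypeScript sketch, the fields above map to a shape like the following. The field names come from the table; the exact types, the type names, and whether @xerg/schemas exports them under these names are assumptions.

```typescript
type Confidence = "high" | "medium" | "low";

type ActionType =
  | "model-switch"
  | "cache-config"
  | "prompt-trim"
  | "dedup"
  | "other";

// Sketch of one entry in the `recommendations` array from `xerg audit --json`.
interface Recommendation {
  id: string;                   // stable recommendation id
  findingId: string;            // the finding this recommendation came from
  kind: string;                 // the source finding kind
  title: string;                // human-readable next step
  description: string;          // why this recommendation exists
  estimatedSavingsUsd: number;  // expected dollar impact
  confidence: Confidence;
  actionType: ActionType;
  suggestedChange?: Record<string, unknown>; // optional structured hint
}

// Example value conforming to the sketch (contents are invented).
const example: Recommendation = {
  id: "rec-001",
  findingId: "find-042",
  kind: "retry-waste",
  title: "Add retry backoff",
  description: "Failed calls were retried immediately with no backoff.",
  estimatedSavingsUsd: 12.5,
  confidence: "high",
  actionType: "other",
};
```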

Current recommendation patterns

| Finding kind | Typical action |
| --- | --- |
| retry-waste | add retry backoff or lower retry count |
| loop-waste | cap iteration depth or add an early exit |
| context-outlier | trim prompt or context size |
| idle-spend | reduce cadence or switch to event-driven work |
| candidate-downgrade | A/B test a cheaper model |

What not to overread

  • Opportunity findings are directional recommendations, not proven waste.
  • A candidate-downgrade finding is intentionally low-confidence and should be treated as an experiment, not an automatic downgrade order.
  • Cost per outcome is intentionally unavailable in v0.