---
name: titan-run
description: Run the full Titan Paradigm pipeline end-to-end by dispatching each phase to sub-agents with fresh context windows. Orchestrates recon → gauntlet → sync → forge automatically.
argument-hint: <path (default: .)> <--skip-recon> <--skip-gauntlet> <--start-from recon|gauntlet|sync|forge> <--gauntlet-batch-size 5> <--yes>
allowed-tools: Agent, Read, Bash, Glob, Write, Edit
---
You are the orchestrator for the full Titan Paradigm pipeline. Your job is to dispatch each phase to a sub-agent (fresh context window), validate the results, and loop phases that require multiple invocations — all without human intervention.
You are lightweight. You do NOT run codegraph commands, audit files, or make code changes yourself. You only: (1) spawn sub-agents, (2) read and validate state files, (3) decide what to run next.
Arguments (from $ARGUMENTS):
- No args → full pipeline from scratch, target `.`
- `<path>` → target path (passed to recon)
- `--skip-recon` → skip recon (assumes artifacts exist)
- `--skip-gauntlet` → skip gauntlet (assumes artifacts exist)
- `--start-from <phase>` → jump to phase: `recon`, `gauntlet`, `sync`, `forge`
- `--gauntlet-batch-size <N>` → batch size for gauntlet (default: 5)
- `--yes` → skip all confirmation prompts in the orchestrator (pre-pipeline, forge checkpoint, and resume prompts) and in forge (per-phase confirmation)
Worktree check:

```bash
git rev-parse --show-toplevel && git worktree list
```

The Titan pipeline writes artifacts and makes code changes — worktree isolation is required. If you are NOT in a worktree, create one automatically using the `EnterWorktree` tool (fetch it via `ToolSearch` if needed) with name `titan-run`. Do NOT stop and ask the user — just enter the worktree and continue.
Parse arguments. Determine:
- `targetPath` (default: `.`)
- `startPhase` (default: `recon`)
- `gauntletBatchSize` (default: `5`)
- `autoConfirm` → `true` if `--yes` is present, otherwise `false`
Check existing state. Read `.codegraph/titan/titan-state.json` if it exists.
- If state exists and `--start-from` is not specified, ask user: "Existing Titan state found (phase: `<currentPhase>`). Resume from current state, or start fresh with `/titan-reset` first?"
- If `--yes` is set, resume automatically.
Initialize the phase timestamps helper. Throughout the pipeline, you will record wall-clock timestamps for each phase. Use this helper to write them into `titan-state.json`:

```bash
# Record phase start (safe for resume — only sets startedAt if not already present):
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['<PHASE>']=s.phaseTimestamps['<PHASE>']||{};if(!s.phaseTimestamps['<PHASE>'].startedAt){s.phaseTimestamps['<PHASE>'].startedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));}"

# Record phase completion:
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['<PHASE>']=s.phaseTimestamps['<PHASE>']||{};s.phaseTimestamps['<PHASE>'].completedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));"
```

Replace `<PHASE>` with `recon`, `gauntlet`, `sync`, `forge`, or `close`. Run the start command immediately before dispatching each phase's first sub-agent, and the completion command immediately after post-phase validation passes. If resuming a phase (e.g., gauntlet loop iteration 2+), do NOT overwrite `startedAt` — only set it if it doesn't already exist.

Timestamp validation: After recording `completedAt` for any phase, verify `startedAt < completedAt`:

```bash
node -e "const s=JSON.parse(require('fs').readFileSync('.codegraph/titan/titan-state.json','utf8'));const p=s.phaseTimestamps?.['<PHASE>'];if(p?.startedAt&&p?.completedAt){const start=new Date(p.startedAt),end=new Date(p.completedAt);if(end<=start){console.log('WARNING: <PHASE> completedAt ('+p.completedAt+') is not after startedAt ('+p.startedAt+')');process.exit(0);}console.log('<PHASE> duration: '+((end-start)/60000).toFixed(1)+' min');}else{console.log('WARNING: <PHASE> missing startedAt or completedAt');}"
```

If the check fails, log a warning but do not stop the pipeline — clock skew or immediate completion of short phases can cause this.
Sync with main (once, before any sub-agent runs):

```bash
git fetch origin main && git merge origin/main --no-edit
```

If merge conflict → stop: "Merge conflict after syncing with main. Resolve conflicts and re-run `/titan-run`."
Print plan:

```
Titan Pipeline — End-to-End Run
Target: <path>
Starting from: <phase>
Gauntlet batch size: <N>
Phases: recon → gauntlet (loop) → sync → [PAUSE] → forge (loop)
Each phase runs in a sub-agent with a fresh context window.
Forge requires explicit confirmation (analysis phases are safe to automate).
```

Start immediately — do NOT ask for confirmation before analysis phases. The user invoked `/titan-run`; that is the confirmation. Analysis phases (recon, gauntlet, sync) are read-only and safe to automate. The forge checkpoint (Step 3.5b) still applies unless `--yes` is set.
Before spawning any sub-agent, run these checks. This catches git state drift, concurrent interference, and corruption left by a crashed agent.
G1. Git status:

```bash
git status --porcelain
```

- Unexpected dirty files (files not in `.codegraph/titan/`): Print a warning with the file list. Ask the user to confirm proceeding, or stop. If `--yes`, log the warning and continue — but do NOT stage or commit these files.
- Merge conflicts (lines starting with `UU`, `AA`, `DD`, `AU`, `UA`, `DU`, `UD`): Stop immediately: "Unresolved merge conflict detected. Resolve before continuing."
G2. Worktree validity:

```bash
git rev-parse --is-inside-work-tree
```

If this fails (worktree was pruned or moved), stop: "Worktree is no longer valid. Create a new one with /worktree."
G3. State file integrity. If `.codegraph/titan/titan-state.json` should exist at this point (i.e., we're past recon), validate it:

```bash
node -e "try { JSON.parse(require('fs').readFileSync('.codegraph/titan/titan-state.json','utf8')); console.log('OK'); } catch(e) { console.log('CORRUPT: '+e.message); process.exit(1); }"
```

If CORRUPT → attempt recovery from backup:

- Check if `.codegraph/titan/titan-state.json.bak` exists.
- If the backup exists, validate it is valid JSON: `node -e "try { JSON.parse(require('fs').readFileSync('.codegraph/titan/titan-state.json.bak','utf8')); console.log('BACKUP OK'); } catch(e) { console.log('BACKUP CORRUPT: '+e.message); process.exit(1); }"`
- If the backup is valid → restore it: `cp .codegraph/titan/titan-state.json.bak .codegraph/titan/titan-state.json`
- If the backup is also corrupt or missing → stop: "State file corrupted with no valid backup. Run `/titan-reset` and start over."
G4. State backup. Before every sub-agent dispatch, back up the current state file:

```bash
cp .codegraph/titan/titan-state.json .codegraph/titan/titan-state.json.bak 2>/dev/null || true
```

If a sub-agent corrupts the state, G3 on the next iteration will detect it and restore from `.bak`.
Run this step if --start-from was specified, --skip-recon is set, or --skip-gauntlet is set. Any of these flags cause phases to be skipped — their artifacts must exist and be valid before proceeding. When --skip-recon is set, validate recon artifacts. When --skip-gauntlet is set, validate both recon and gauntlet artifacts.
For each phase BEFORE startPhase, run the corresponding V-checks:
| Skipped phase | Required artifacts + checks |
|---|---|
| recon | V1 structural fields only (domains, batches, priorityQueue, stats — skip the `currentPhase == "recon"` check since later phases advance it), V2 (GLOBAL_ARCH.md), V3 (snapshot exists — WARN if missing), V4 (cross-check counts) |
| gauntlet | V5 (coverage ≥ 50%), V6 (entry completeness sample), V7 (summary consistency); also run NDJSON integrity check (2c) |
| sync | V8 (sync.json structure), V9 (targets trace to gauntlet), V10 (dependency order) |
If ANY required artifact is missing → stop: "Cannot start from <phase> — <artifact> is missing. Run the full pipeline or start from an earlier phase."
If ANY V-check that would normally be VALIDATION FAILED fails → stop with the same message it would produce during normal execution.
WARN-level V-checks from skipped phases are surfaced as prefixed warnings: "[skipped-phase pre-validation] " — they do not stop the pipeline.
Skip if: --skip-recon, --start-from is after recon, or titan-state.json already has currentPhase beyond "recon".
Record phaseTimestamps.recon.startedAt (only if not already set — it may exist from a prior crashed run).
Note: On a fresh run, titan-state.json does not yet exist (titan-recon creates it in Step 12). Use this safe variant that creates a minimal stub if the file is missing:
```bash
node -e "const fs=require('fs');const p='.codegraph/titan/titan-state.json';let s;try{s=JSON.parse(fs.readFileSync(p,'utf8'));}catch{fs.mkdirSync('.codegraph/titan',{recursive:true});s={};}s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['recon']=s.phaseTimestamps['recon']||{};if(!s.phaseTimestamps['recon'].startedAt){s.phaseTimestamps['recon'].startedAt=new Date().toISOString();fs.writeFileSync(p,JSON.stringify(s,null,2));}"
```

This ensures recon.startedAt is recorded even on first-time runs. titan-recon Step 12 merges any existing phaseTimestamps into the full state file it writes.
Use the Agent tool to spawn a sub-agent:
prompt: |
You are running the Titan RECON phase. Read and follow the skill file at
.claude/skills/titan-recon/SKILL.md exactly. Target path: <targetPath>.
IMPORTANT: Skip the worktree check (Step 0.1) — the orchestrator already verified this.
IMPORTANT: Skip the "sync with main" step (Step 0.2) — the orchestrator already did this.
Execute Steps 1-13 as documented.
After the agent returns, validate the artifacts:
V1. titan-state.json structure:
Read .codegraph/titan/titan-state.json and verify ALL of these fields exist and are non-empty:
- `version` — must be a number
- `initialized` — must be an ISO 8601 string
- `currentPhase` — must equal `"recon"`
- `stats.totalNodes` — must be > 0
- `stats.totalEdges` — must be > 0
- `stats.totalFiles` — must be > 0
- `domains` — must be an array with length > 0
- `batches` — must be an array with length > 0
- `priorityQueue` — must be an array with length > 0
If any field is missing, zero, or wrong type → VALIDATION FAILED. Print which fields failed and stop: "RECON produced incomplete state. Re-run with /titan-run --start-from recon."
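The V1 field checks can be sketched as a pure function over the parsed state. This is a minimal sketch using only the field names listed above; the authoritative shape is whatever titan-recon writes.

```javascript
// Sketch of V1: return the list of failed fields (empty array → validation passed).
function validateReconState(s) {
  const failures = [];
  if (typeof s.version !== 'number') failures.push('version');
  if (typeof s.initialized !== 'string') failures.push('initialized');
  if (s.currentPhase !== 'recon') failures.push('currentPhase');
  for (const k of ['totalNodes', 'totalEdges', 'totalFiles']) {
    if (!(s.stats && s.stats[k] > 0)) failures.push('stats.' + k);
  }
  for (const k of ['domains', 'batches', 'priorityQueue']) {
    if (!(Array.isArray(s[k]) && s[k].length > 0)) failures.push(k);
  }
  return failures;
}
```

A non-empty return value is exactly the "print which fields failed" list in the stop message.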
V2. GLOBAL_ARCH.md exists and has content:
Read .codegraph/titan/GLOBAL_ARCH.md:
- Must exist
- Must contain a `## Domain Map` heading
- Must have > 10 lines
If missing or empty → VALIDATION FAILED.
V3. Snapshot created:
```bash
codegraph snapshot list 2>/dev/null | grep titan-baseline || echo "NO_SNAPSHOT"
```

If NO_SNAPSHOT → WARN (not fatal, but note it: "No baseline snapshot — rollback in GATE will not work").
V4. Cross-check counts:
- `titan-state.json → stats.totalFiles` should roughly match the number of targets across all batches (batches are subsets of files, so `sum(batch.files.length)` should be ≤ `totalFiles`)
- `priorityQueue.length` should be > 0 and ≤ `totalNodes`
- Batch size check: Every batch must have ≤ 5 files. If any batch exceeds 5, WARN: "Batch has <N> files (max 5). Large batches cause context overload in gauntlet sub-agents."
If wildly inconsistent (e.g., 0 batches but 500 nodes) → WARN with details.
Print: RECON validated. Domains: <count>, Batches: <count>, Priority targets: <count>, Quality score: <score>
Record phaseTimestamps.recon.completedAt.
Skip if: --skip-gauntlet or --start-from is after gauntlet.
Record phaseTimestamps.gauntlet.startedAt (only if not already set — gauntlet may be resuming).
Read .codegraph/titan/gauntlet-summary.json if it exists:
- If `"complete": true` → run gauntlet post-validation (2d) and skip the loop if it passes
- Otherwise, count completed batches from `titan-state.json` for progress tracking
Compute expectedTargetCount from titan-state.json → priorityQueue.length (or sum of batch file counts). This is the ground truth for "how many targets should gauntlet audit."
Set maxIterations = 50 (safety limit).
Set stallCount = 0, maxStalls = 3 (consecutive no-progress iterations before abort).
previousAuditedCount = titan-state.json → progress.audited (or 0)
iteration = 0
while iteration < maxIterations:
iteration += 1
# Run Pre-Agent Gate (G1-G4)
# Dispatch sub-agent
Agent → "Run /titan-gauntlet with batch size <N>.
Read .claude/skills/titan-gauntlet/SKILL.md and follow it exactly.
Batch size: <gauntletBatchSize>.
Skip worktree check and main sync — already handled.
Process as many batches as context allows, then save state and stop."
# Check completion
Read .codegraph/titan/gauntlet-summary.json (if exists)
if "complete": true → break
# Progress tracking
Read .codegraph/titan/titan-state.json
currentAuditedCount = progress.audited
if currentAuditedCount == previousAuditedCount:
stallCount += 1
Print: "WARNING: Gauntlet iteration <iteration> made no progress (stall <stallCount>/<maxStalls>)"
if stallCount >= maxStalls:
Stop: "Gauntlet stalled for <maxStalls> consecutive iterations at <currentAuditedCount>/<expectedTargetCount> targets. Likely stuck on a problematic target. Check gauntlet.ndjson for the last successful entry and investigate the next target in the batch."
else:
stallCount = 0 # reset on any progress
countBeforeUpdate = previousAuditedCount
previousAuditedCount = currentAuditedCount
# Efficiency check: if progress is very slow (< 2 targets per iteration), warn
# Only fire when stallCount == 0 — if stalled, the stall warning already covers it
targetsThisIteration = currentAuditedCount - countBeforeUpdate
if targetsThisIteration == 1 and iteration > 3 and stallCount == 0:
Print: "WARNING: Only 1 target per iteration — agent may be spending too much context. Consider increasing batch size."
Print: "Gauntlet iteration <iteration>: <currentAuditedCount>/<expectedTargetCount> targets audited"
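The stall and efficiency bookkeeping in the loop above can be sketched as one function. Counter names mirror the pseudocode; the thresholds are the defaults stated there (`maxStalls = 3`, slow-progress warning after iteration 3). This is a sketch of the logic only — the orchestrator applies it by reading `progress.audited` from the state file.

```javascript
// Sketch: update stall/progress counters after one gauntlet iteration.
// `state` carries previousAuditedCount, stallCount, maxStalls, iteration.
function trackProgress(state, currentAuditedCount) {
  const delta = currentAuditedCount - state.previousAuditedCount;
  if (delta === 0) {
    state.stallCount += 1;          // no progress this iteration
  } else {
    state.stallCount = 0;           // any progress resets the stall counter
    state.previousAuditedCount = currentAuditedCount;
  }
  return {
    stalled: state.stallCount >= state.maxStalls,                       // abort condition
    slow: delta === 1 && state.iteration > 3 && state.stallCount === 0, // efficiency warning
  };
}
```

The caller increments `state.iteration` each loop pass and stops the pipeline when `stalled` is true.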
After the loop completes (or on each iteration if you prefer lightweight checks):
```bash
node -e "
const fs = require('fs');
const path = '.codegraph/titan/gauntlet.ndjson';
if (!fs.existsSync(path)) {
  console.log(JSON.stringify({ valid: 0, corrupt: 0, total: 0, missing: true }));
  process.exit(0);
}
const lines = fs.readFileSync(path,'utf8').trim().split('\n');
let valid = 0, corrupt = 0;
for (const line of lines) {
  try { JSON.parse(line); valid++; } catch { corrupt++; }
}
console.log(JSON.stringify({ valid, corrupt, total: lines.length, missing: false }));
"
```

- If `missing == true`: treat as equivalent to `valid == 0` — the file does not exist yet (expected on the first iteration, an error if the loop should have produced entries).
- If `corrupt > 0`: Print "WARNING: <N> corrupt lines in gauntlet.ndjson (likely from a crashed sub-agent). These targets may need re-auditing."
- If `valid == 0` and `missing == false`: Stop: "gauntlet.ndjson has no valid entries. Something went wrong."
V5. Gauntlet coverage:
- Count distinct `target` values in `gauntlet.ndjson` (valid lines only)
- Compare against `expectedTargetCount`
- If coverage < 80%: WARN "Gauntlet only audited <N>/<M> targets (<pct>%). Consider re-running with `/titan-run --start-from gauntlet`."
- If coverage < 50%: VALIDATION FAILED. Stop.
V6. Gauntlet entry completeness (sample check):
Read first 5 and last 5 entries from gauntlet.ndjson. Each entry MUST have:
- `target` — non-empty string
- `file` — non-empty string
- `verdict` — one of `PASS`, `WARN`, `FAIL`, `DECOMPOSE`
- `pillarVerdicts` — object with keys `I`, `II`, `III`, `IV`
- `metrics` — object with at least `cognitive` and `cyclomatic` (numeric)
- `violations` — array
If any sampled entry is missing required fields → WARN: "Gauntlet entry for `<target>` is incomplete — sub-agent may have skipped rules. Fields missing: `<fields>`."
V7. Summary consistency:
Read gauntlet-summary.json:
- `summary.totalAudited` should equal the valid NDJSON line count
- `summary.pass + summary.warn + summary.fail + summary.decompose` should equal `summary.totalAudited`
If mismatched → WARN with details (not fatal — the NDJSON is the source of truth, summary is derived).
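The two consistency rules can be sketched as a warning collector. This is a minimal sketch using the field names from the bullets above; it is advisory only, since the NDJSON remains the source of truth.

```javascript
// Sketch of V7: return WARN messages (empty array → summary is consistent).
function summaryWarnings(summary, validLineCount) {
  const warnings = [];
  if (summary.totalAudited !== validLineCount) {
    warnings.push(`totalAudited (${summary.totalAudited}) != valid NDJSON lines (${validLineCount})`);
  }
  const sum = summary.pass + summary.warn + summary.fail + summary.decompose;
  if (sum !== summary.totalAudited) {
    warnings.push(`verdict counts sum to ${sum}, expected ${summary.totalAudited}`);
  }
  return warnings; // non-empty → WARN with details, do not stop
}
```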
Print: GAUNTLET validated. Audited: <N>/<M> targets. Pass: <N>, Warn: <N>, Fail: <N>, Decompose: <N>. NDJSON integrity: <valid>/<total> lines OK.
Record phaseTimestamps.gauntlet.completedAt.
Skip if: --start-from is after sync, or titan-state.json has currentPhase: "sync" with existing sync.json.
Record phaseTimestamps.sync.startedAt.
Agent → "Run /titan-sync. Read .claude/skills/titan-sync/SKILL.md and follow it exactly.
Skip worktree check and main sync — already handled.
Read GAUNTLET artifacts and produce sync.json."
V8. sync.json structure:
Read .codegraph/titan/sync.json and verify:
- `phase` — must equal `"sync"`
- `executionOrder` — must be an array with length > 0
- Each entry in `executionOrder` must have: `phase` (number), `label` (string), `targets` (array), `commit` (string)
- `executionOrder` phases must be in ascending order
- No duplicate phase numbers
If missing or structurally invalid → VALIDATION FAILED. Stop: "SYNC produced invalid plan. Re-run with /titan-run --start-from sync."
V9. Sync targets trace back to gauntlet:
Collect all target names from sync.json → executionOrder[*].targets (flatten).
For each, verify it appears in gauntlet.ndjson as a target field, OR in titan-state.json → roles.deadSymbols (dead code targets come from recon, not gauntlet).
If > 20% of sync targets have no gauntlet entry and aren't dead symbols → WARN: "SYNC references targets not found in gauntlet results. The sub-agent may have hallucinated targets."
V10. Execution order dependency check:
For entries with dependencies arrays, verify that each dependency phase number exists in executionOrder and has a lower phase number. Circular dependencies in the execution plan → VALIDATION FAILED.
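V8's ordering rules and V10's dependency check can be sketched together. This is a minimal sketch: the entry shape (`phase`, optional `dependencies`) comes from V8 and V10 above. Note that requiring every dependency to point at a strictly lower phase number already rules out circular dependencies, so no explicit cycle search is needed.

```javascript
// Sketch of V8 ordering + V10 dependencies: return error strings
// (empty array → plan is structurally valid).
function executionOrderErrors(executionOrder) {
  const errors = [];
  const phases = executionOrder.map(e => e.phase);
  if (new Set(phases).size !== phases.length) errors.push('duplicate phase numbers');
  for (let i = 1; i < phases.length; i++) {
    if (phases[i] <= phases[i - 1]) errors.push(`phases not ascending at index ${i}`);
  }
  const known = new Set(phases);
  for (const e of executionOrder) {
    for (const dep of e.dependencies || []) {
      if (!known.has(dep)) errors.push(`phase ${e.phase}: unknown dependency ${dep}`);
      else if (dep >= e.phase) errors.push(`phase ${e.phase}: dependency ${dep} is not earlier`);
    }
  }
  return errors; // any entry here → VALIDATION FAILED
}
```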
Print: SYNC validated. Execution phases: <N>, Total targets: <N>, Estimated commits: <N>.
Record phaseTimestamps.sync.completedAt.
Skip if: --start-from is forge AND .codegraph/titan/arch-snapshot.json already exists. The existing snapshot is the correct pre-forge baseline — re-capturing it mid-forge would overwrite it with a state that already includes prior forge commits, making gate A1/A3 comparisons inaccurate.
If --start-from forge is used without an existing snapshot, run Step 3.5a normally (captures a pre-forge baseline from the current committed state).
Before any code changes, snapshot the codebase's architectural properties. This becomes the baseline for the architectural comparator in /titan-gate (Step 5.5).
```bash
codegraph communities -T --json > .codegraph/titan/arch-snapshot-communities.json
codegraph structure --depth 2 --json > .codegraph/titan/arch-snapshot-structure.json
codegraph communities --drift -T --json > .codegraph/titan/arch-snapshot-drift.json
```

Combine into a single snapshot file:
```bash
TITAN_HEAD_SHA=$(git rev-parse HEAD)
node -e "
const fs = require('fs');
try {
  const communities = JSON.parse(fs.readFileSync('.codegraph/titan/arch-snapshot-communities.json','utf8'));
  const structure = JSON.parse(fs.readFileSync('.codegraph/titan/arch-snapshot-structure.json','utf8'));
  const drift = JSON.parse(fs.readFileSync('.codegraph/titan/arch-snapshot-drift.json','utf8'));
  const snapshot = {
    timestamp: new Date().toISOString(),
    capturedBefore: 'forge',
    headSha: '$TITAN_HEAD_SHA',
    communities,
    structure,
    drift
  };
  fs.writeFileSync('.codegraph/titan/arch-snapshot.json', JSON.stringify(snapshot, null, 2));
} catch (e) {
  console.error('WARNING: Failed to build arch-snapshot.json: ' + e.message);
  console.error('Architectural comparison (titan-gate A1/A3/A4) will be skipped.');
}
"
```

Clean up temp files:

```bash
rm -f .codegraph/titan/arch-snapshot-communities.json .codegraph/titan/arch-snapshot-structure.json .codegraph/titan/arch-snapshot-drift.json
```

This snapshot is read by /titan-gate Step 5.5 during every commit validation.
This is a mandatory pause. Analysis phases (recon, gauntlet, sync) are read-only. FORGE makes real code changes and commits. The user must see the plan.
First, determine the snapshot status from Step 3.5a:
snapshotStatus = file_exists('.codegraph/titan/arch-snapshot.json') ? "captured" : "FAILED — gate A1/A3/A4 will be skipped"
Divergence check — compute before printing the checkpoint so the warning appears inline:
```bash
if git fetch origin main 2>/dev/null; then
  mergeBase=$(git merge-base HEAD origin/main)
  mainAdvance=$(git rev-list --count $mergeBase..origin/main)
else
  mainAdvance="unknown"
fi
```

```
mainAdvanceNote = mainAdvance == "unknown"
  ? "NOTE: Could not fetch origin/main — skipping divergence check."
  : mainAdvance > 0
    ? "WARNING: main has advanced <mainAdvance> commits since initial sync.\nIf significant, consider re-running: /titan-run --start-from recon"
    : ""
```
Print:
================================================================
ANALYSIS COMPLETE — FORGE CHECKPOINT
================================================================
The analysis phases (recon → gauntlet → sync) are done.
FORGE will now make code changes and commit them.
Execution plan summary:
Phase 1: <label> — <N> targets
Phase 2: <label> — <N> targets
...
Total: <N> phases, <N> targets, <N> estimated commits
Architectural snapshot: <snapshotStatus>
<mainAdvanceNote>
Validation layers per commit:
1. Diff Review — does the change match the gauntlet recommendation and sync plan?
2. Titan Gate — structural checks, semantic assertions, architectural comparison, lint/build/test
Proceed with /titan-forge? [y/n]
(Use --yes to skip this checkpoint in future runs)
================================================================
If --yes is NOT set: stop and wait for user confirmation. Do NOT proceed.
If --yes IS set: print the summary but continue automatically.
Once the user confirms (or --yes was set), autoConfirm is already true (set from --yes at parse time). If the user confirmed interactively (without --yes), set autoConfirm = true now — forge sub-agents must receive --yes to avoid per-phase confirmation prompts that cannot be answered in a sub-agent context.
Record phaseTimestamps.forge.startedAt (only if not already set — forge may be resuming).
Read .codegraph/titan/sync.json → count total phases in executionOrder.
Read .codegraph/titan/titan-state.json → check execution.completedPhases (may not exist yet if forge hasn't started).
If .codegraph/titan/arch-snapshot.json does not exist:
Print: "NOTE: No arch-snapshot.json found. Architectural comparison in /titan-gate (Step 5.5) will be skipped for this run. To enable it, run '/titan-run --start-from sync' to re-capture the pre-forge snapshot."
Set maxIterations = 20 (safety limit).
Set stallCount = 0, maxStalls = 2 (forge stalls are more serious — fewer retries).
previousCompletedPhases = execution.completedPhases (or [])
previousCompletedTargets = execution.completedTargets (or [])
previousFailedTargets = execution.failedTargets (or [])
iteration = 0
while iteration < maxIterations:
iteration += 1
# Run Pre-Agent Gate (G1-G4) — CRITICAL for forge since it commits
# Context: show recent commits so agent has context for V12 commit audit
Print: git log --oneline -5
# Record the HEAD sha before dispatching
headBefore = $(git rev-parse HEAD)
# Determine next phase
Read .codegraph/titan/titan-state.json
completedPhases = execution.completedPhases (or [])
totalPhases = len(sync.json.executionOrder)
if len(completedPhases) >= totalPhases → break
nextPhase = first phase number NOT in completedPhases
# Dispatch sub-agent
yesFlag = "--yes" if autoConfirm else ""
Agent → "Run /titan-forge --phase <nextPhase> <yesFlag>.
Read .claude/skills/titan-forge/SKILL.md and follow it exactly.
Skip worktree check and main sync — already handled.
For each target, the validation flow is:
1. Apply code change
2. Stage files
3. Diff review (Step 9 in forge) — verify diff matches gauntlet recommendation and sync plan intent
4. Run tests
5. Run /titan-gate — read .claude/skills/titan-gate/SKILL.md and follow it exactly.
Gate now includes semantic assertions (Step 5) and architectural snapshot comparison (Step 5.5).
The arch snapshot is at .codegraph/titan/arch-snapshot.json.
6. Commit on success, rollback on failure
Do NOT skip the diff review step — it catches intent drift before gate even runs."
# Post-agent checks
headAfter = $(git rev-parse HEAD)
# V11. Verify commits were actually made (unless all targets failed)
Read .codegraph/titan/titan-state.json
newCompletedPhases = execution.completedPhases (or [])
newCompletedTargets = execution.completedTargets (or [])
newFailedTargets = execution.failedTargets (or [])
# Count total targets processed (succeeded + failed) to distinguish true stalls from all-fail phases
newProcessedCount = len(newCompletedTargets) + len(newFailedTargets)
previousProcessedCount = len(previousCompletedTargets) + len(previousFailedTargets)
if newCompletedPhases == previousCompletedPhases and newProcessedCount == previousProcessedCount:
stallCount += 1
Print: "WARNING: Forge iteration <iteration> made no progress (stall <stallCount>/<maxStalls>)"
if stallCount >= maxStalls:
Stop: "Forge stalled on phase <nextPhase> for <maxStalls> consecutive iterations. Check titan-state.json → execution.failedTargets for details."
else:
stallCount = 0
previousCompletedPhases = newCompletedPhases
previousCompletedTargets = newCompletedTargets
previousFailedTargets = newFailedTargets
# V12. Commit audit — verify commits match expectations
if headAfter != headBefore:
# Get commits made by this agent
git log --oneline <headBefore>..<headAfter>
commitCount = number of commits
Print: "Forge phase <nextPhase>: <commitCount> commits, <completedCount> targets completed, <failedCount> targets failed"
else:
# No commits but phase may have completed (all targets failed/skipped)
Print: "Forge phase <nextPhase>: no commits (all targets failed or skipped)"
# V13. Test suite still green after forge commits (skip if no commits were made)
if headAfter != headBefore:
# Detect test command (same detection as gate Step 4):
testCmd = node -e "const p=require('./package.json');const s=p.scripts||{};const script=s.test?'test':s['test:ci']?'test:ci':null;if(!script){console.log('NO_TEST_SCRIPT');process.exit(0);}const fs=require('fs');const runner=fs.existsSync('yarn.lock')?'yarn':fs.existsSync('pnpm-lock.yaml')?'pnpm':fs.existsSync('bun.lockb')?'bun':'npm';console.log(runner+(script==='test'?' test':' run '+script));"
if testCmd == "NO_TEST_SCRIPT":
Print: "V13: No test script configured — skipping post-forge test run."
else:
Run: <testCmd> 2>&1
if tests fail:
Print: "CRITICAL: Test suite fails after forge phase <nextPhase>. Stopping pipeline."
Print: "Commits from this phase: git log --oneline <headBefore>..<headAfter>"
Print: "Consider reverting: git revert <headBefore>..<headAfter>"
Stop.
V14. Final state consistency:
Read .codegraph/titan/titan-state.json:
- `execution.completedPhases` should contain all phase numbers from `sync.json → executionOrder`
- `execution.commits` should be an array (may be empty if all targets failed)
- Every commit SHA in `execution.commits` should exist in git log: `git cat-file -t <sha>`. If any SHA doesn't exist → WARN: "Commit recorded in state but not found in git history. State may be out of sync."
V15. Gate log consistency:
If .codegraph/titan/gate-log.ndjson exists:
- Count PASS vs FAIL entries
- Count PASS vs FAIL entries
- Every FAIL entry with `"rolledBack": true` should NOT have a corresponding commit in `execution.commits`
Print forge summary.
Record phaseTimestamps.forge.completedAt.
After forge completes, dispatch /titan-close to produce the final report with before/after metrics and split commits into focused PRs.
Record phaseTimestamps.close.startedAt.
Agent → "Run /titan-close. Read .claude/skills/titan-close/SKILL.md and follow it exactly.
Skip worktree check and main sync — already handled."
After the agent returns, verify:
- `.codegraph/titan/TITAN_REPORT.md` or `generated/titan/titan-report-*.md` exists and has content (> 20 lines)
- Print: "CLOSE complete. Report: <path>"
If the agent created PRs, print the PR URLs.
Record phaseTimestamps.close.completedAt.
- Sub-agent returns error: Print the error, stop, and tell the user which phase failed and how to retry (e.g., "Run `/titan-run --start-from gauntlet`").
- State file missing when expected: Stop with a clear message about which prerequisite phase to run.
- State file corrupt (JSON parse error): Attempt restore from `.bak`. If no backup → stop: "State file corrupted. Run `/titan-reset` and start over."
- NDJSON corrupt lines: Warn but continue — partial results are better than none. The corrupt lines are logged so the user knows which targets to re-audit.
- Merge conflict detected by pre-agent gate: Stop immediately with the conflicting files listed.
- Tests fail after forge phase: Stop immediately. Print the failing phase's commits so the user can revert.
- Validation failure (any V-check marked FAILED): Stop with details. Warn-level V-checks are logged but don't stop the pipeline.
- You are the orchestrator, not the executor. Never run codegraph commands, edit source files, or make commits yourself. Only spawn sub-agents and read state files. Exceptions (pure validation/snapshot, no code changes): the post-forge test run (V13), NDJSON integrity checks, the V3 baseline snapshot check (`codegraph snapshot list`), and the pre-forge architectural snapshot capture (Step 3.5a) are run directly by the orchestrator.
- Run the Pre-Agent Gate (G1-G4) before EVERY sub-agent. No exceptions.
- One sub-agent at a time. Phases are sequential — recon before gauntlet, gauntlet before sync, sync before forge.
- Fresh context per sub-agent. This is the whole point — each sub-agent gets a clean context window.
- Read AND validate state files after every sub-agent. Trust the on-disk state, not the sub-agent's text output — but verify the state is structurally sound.
- Back up state before every sub-agent. The `.bak` file is your safety net against mid-write crashes.
- Mandatory pause before forge unless `--yes` is set. Analysis is safe; code changes deserve human review.
--yesis set. Analysis is safe; code changes deserve human review. - Stall detection is strict for forge (2 retries) and looser for gauntlet (3 retries) since gauntlet is more likely to hit context limits legitimately.
- Respect --start-from. Skip phases before the specified starting point, but verify their artifacts exist AND pass validation.
- Pass --yes through to forge if the user provided it, so forge skips its per-phase confirmation prompt. Within the orchestrator, `--yes` also skips the pre-pipeline, forge checkpoint, and resume prompts.
This skill lives at .claude/skills/titan-run/SKILL.md. Edit if loop logic needs adjustment, error handling needs improvement, or new phases are added to the pipeline.