| name | review-plan |
|---|---|
| description | Review a planning artifact (plan, shells, or spec) by running internal and peer reviews in parallel and returning combined findings. Use when the user asks to "review my plan", "review my shells", "review my spec", "check my plan", "check my shells", "check my spec", "critique my plan", "critique my shells", "critique my spec", or wants feedback on a planning artifact. |
Review a planning artifact against type-specific criteria. Runs internal review and `/peer-review` in parallel by default. Returns combined structured findings.
- Explicit argument — If the user specified a type (e.g., "review shells", "review spec"), use it. No argument defaults to plan.
- Conversation context — If artifact text or a path is already in context, infer the type.
- Auto-detect — Check `.turbo/` for existing artifacts. If multiple types exist, use `AskUserQuestion`.
- Plan text in conversation — use it
- Explicit path — read it
- Explicit slug — resolve to `.turbo/plans/<slug>.md`
- Single file — Glob `.turbo/plans/*.md`. If exactly one file exists, use it
- Most recent — most recently modified file
- Legacy fallback — `.turbo/plan.md` if `.turbo/plans/` does not exist
- Nothing found — use `AskUserQuestion` to ask what to review
- Shell text in conversation — use it
- Explicit spec slug — Glob `.turbo/shells/<slug>-*.md`
- Explicit spec path — derive slug from filename, glob as above
- Single spec — Glob `.turbo/specs/*.md`. If exactly one, derive slug and glob for shells
- Most recent spec — most recently modified spec, derive slug and glob
- Nothing found — use `AskUserQuestion` to ask what to review
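Slug derivation from a spec path can be sketched as follows. The glob pattern comes from this skill; the helper name is illustrative:

```python
import glob
import os

def shells_for_spec(spec_path: str) -> list[str]:
    # The slug is the spec filename without its directory or .md extension.
    slug = os.path.splitext(os.path.basename(spec_path))[0]
    # Shell files share the slug prefix: .turbo/shells/<slug>-*.md
    return sorted(glob.glob(f".turbo/shells/{slug}-*.md"))
```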
For shells, read each shell file and parse its YAML frontmatter (`spec`, `depends_on`). Read the source spec from the `spec` field.
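A shell file's frontmatter might look like this. The field names `spec` and `depends_on` come from this skill; the slug and values are made-up examples:

```yaml
---
spec: .turbo/specs/auth-refactor.md   # source spec to read
depends_on: []                        # slugs of shells this one depends on
---
```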
- Spec text in conversation — use it
- Explicit path — read it
- Explicit slug — resolve to `.turbo/specs/<slug>.md`
- Single file — Glob `.turbo/specs/*.md`. If exactly one, use it
- Most recent — most recently modified
- Legacy fallback — `.turbo/spec.md` if `.turbo/specs/` does not exist
- Nothing found — use `AskUserQuestion` to ask what to review
If multiple candidates exist and the choice is non-obvious, use `AskUserQuestion`.
Read the reference file for the resolved type:
- Plan — `references/plan-review.md`
- Shells — `references/shells-review.md`
- Spec — `references/spec-review.md`
Skip peer review when the caller requests it (e.g., "without peer review", "no peer", "internal only"). For shells, the internal review focuses on structural wiring and skips the project context read.
Use the Agent tool to launch all agents below in a single message (model: `"opus"`, do not set `run_in_background`) so they run concurrently:
- Internal Agent: Pass the artifact text and the reference file's content; the subagent reads project context (CLAUDE.md, relevant codebase files) before applying criteria, then returns findings in the output format below.
- Peer review Agent (unless skipping): Instruct the subagent to invoke `/peer-review` via the Skill tool with a prompt embedding the full reference file content, structured with `<task>`, `<dig_deeper_nudge>`, and `<structured_output_contract>` XML tags.
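The peer-review prompt might be structured like this. The tag names come from this skill; the placeholder content is illustrative:

```xml
<task>
Review the attached artifact against the criteria below.
[artifact text + full reference file content]
</task>
<dig_deeper_nudge>
Do not stop at surface issues; question assumptions and trace dependencies.
</dig_deeper_nudge>
<structured_output_contract>
Return findings as a numbered list matching the output format below.
</structured_output_contract>
```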
Aggregate findings with attribution (reviewer: "internal" or "peer"). Present them in the output format below.
Check your task list for remaining tasks and proceed.
Return findings as a numbered list. For each finding:
### [P<N>] <title (imperative, ≤80 chars)>
**Location:** <plan section, shell number(s), or spec section>
**Reviewer:** <internal | peer>
<one paragraph explaining the issue and its impact>
After all findings, add:
## Overall Verdict
**Readiness:** <ready | needs revision>
<1-3 sentence assessment>
If there are no qualifying findings, state so and explain briefly.
- Present findings grouped by priority.