Three upgrades to how PAI builds, manages, and improves skills — inspired by Ross Mike's skills-first methodology.
| File | What It Does |
|---|---|
| `skills/SkillForge/SKILL.md` | Interactive 4-phase skill-creation workflow |
| `reference/CapabilityRegistry.md` | Example: capability registry extracted for progressive disclosure |
| `reference/PRDReference.md` | Example: PRD reference material extracted for progressive disclosure |
| `patches/skilldoctor-learn-hook.md` | The SkillDoctor hook to add to your Algorithm LEARN phase |
| `tools/build-skill-index.py` | Python script to index all your skills into a searchable JSON |
The problem: Most people write skills by hand, or let AI generate them without the context of a successful workflow run. The skill looks right but fails on edge cases.
The fix: A 4-phase methodology:
- Walk Through — Do the workflow step by step with your AI. Let it attempt each step. Correct when wrong. These corrections are gold.
- Succeed — Complete the full workflow end-to-end successfully.
- Codify — Tell the AI: "Review what we just did. Create a skill." It writes the SKILL.md with context of the successful run.
- Test — Run the skill fresh. If it fails, enter recursive improvement.
Install: Copy `skills/SkillForge/SKILL.md` into your skills directory (e.g., `~/.claude/skills/Utilities/SkillForge/SKILL.md`).
Use: Say "skill forge" or "build a skill" to trigger it.
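The install step as shell commands — a minimal sketch. The first two lines only fake the repo's copy of the file so the snippet runs standalone; the destination is the example path from above:

```shell
# Setup only: stand-in for the repo's skills/SkillForge/SKILL.md
mkdir -p skills/SkillForge
echo "# SkillForge" > skills/SkillForge/SKILL.md

# Copy SkillForge into your skills directory
DEST="$HOME/.claude/skills/Utilities/SkillForge"
mkdir -p "$DEST"
cp skills/SkillForge/SKILL.md "$DEST/SKILL.md"
```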
The problem: Large skill files load everything into context every turn, burning tokens on reference material that's only needed during specific phases.
The fix: Extract reference-only sections into sub-files. The main skill file keeps a summary + pointer:
**PROGRESSIVE DISCLOSURE:** Read `~/.claude/skills/PAI/Reference/CapabilityRegistry.md` now for the full 25-capability registry...

The AI reads the reference file only when it reaches the phase that needs it.
How to apply:
- Measure your main skill file's token count: `wc -w SKILL.md` (multiply by 1.3 for a rough token estimate)
- Identify sections only needed during specific Algorithm phases
- Extract those to `Reference/*.md` files
- Replace each with a summary + Read instruction
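The measurement step, scripted — a sketch that generates a 100-word stand-in `SKILL.md` so it runs anywhere; point it at your real file instead:

```shell
# Stand-in skill file with exactly 100 words (replace with your real SKILL.md)
seq 1 100 | tr '\n' ' ' > SKILL.md

# Rough token estimate: word count x 1.3
words=$(wc -w < SKILL.md)
tokens=$(awk -v w="$words" 'BEGIN { printf "%d", w * 1.3 }')
echo "SKILL.md: ${words} words, ~${tokens} tokens"
```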
Example: See the `reference/` directory for two real extractions that saved ~5,300 tokens.
The problem: When a skill fails during a session, the immediate issue gets fixed but the skill file never updates. The same failure can recur in the next session.
The fix: Add a hook to the Algorithm LEARN phase that detects skill failures and proposes updates.
How it works:
- If a Skill tool was invoked AND criteria failed → LEARN phase identifies the issue
- Proposes a specific skill update
- Asks for approval before editing
- Logs to `skill-failures.jsonl` for pattern tracking
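A `skill-failures.jsonl` entry might look like the following; the field names here are illustrative, not a fixed schema:

```json
{"ts": "2025-11-02T14:07:00Z", "skill": "SkillForge", "failed_criterion": "fresh run missed an edge case in the Test phase", "proposed_fix": "add an explicit edge-case checklist to Phase 4", "approved": true}
```

One JSON object per line keeps the log append-only and easy to scan for recurring patterns.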
Install: See `patches/skilldoctor-learn-hook.md` for the exact text to append to your Algorithm's LEARN phase.
Index all your skills into a searchable JSON:
Run `python3 tools/build-skill-index.py`. It outputs `skill-index.json` with the name, path, category, description, and trigger phrases for every skill. Use it during the Algorithm OBSERVE phase for capability matching.
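Based on the fields listed above, one `skill-index.json` entry could look like this (exact field names depend on the script; the values echo the SkillForge example from this doc):

```json
[
  {
    "name": "SkillForge",
    "path": "~/.claude/skills/Utilities/SkillForge/SKILL.md",
    "category": "Utilities",
    "description": "Interactive 4-phase skill-creation workflow",
    "triggers": ["skill forge", "build a skill"]
  }
]
```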
> "The worst thing you can do is identify a workflow and jump to creating a skill right away. Walk with the agent step by step. Once you have a successful run, THEN tell it to create the skill."

> "I don't download skills. Your agent needs the context of YOUR successful run."

> "Scale for productivity, not for what looks cool."
- Claude Code (PAI v4.0+)
- Skills directory structure (`~/.claude/skills/`)
- Algorithm with LEARN phase (for SkillDoctor)
All changes are additive (new files) or extractive (content moved, not deleted). Nothing breaks existing functionality. Back up your current skill files before applying the Token Diet.