pyqual is intentionally small (~800 lines). It orchestrates rather than implements: it reads metrics from tools you already use.
pyqual integrates with planfile to manage tickets from TODO.md and GitHub Issues.
```bash
pyqual tickets todo     # sync TODO.md through planfile
pyqual tickets github   # sync GitHub issues through planfile
pyqual tickets all      # sync both TODO.md and GitHub
```

Enable automatic ticket sync on gate failure:
```yaml
loop:
  on_fail: create_ticket  # triggers planfile TODO sync
```

- TODO.md: pyqual uses planfile's markdown backend to parse and sync checklist items
- GitHub Issues: pyqual uses planfile's GitHub backend to sync issues with your repository
- Automatic sync: when `on_fail: create_ticket` is set, failed quality gates trigger TODO.md synchronization
planfile is included as a dependency. Make sure a `.planfile/` directory is initialized in your project root.
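For orientation, a TODO.md that the markdown backend can sync might look like the sketch below. The checklist format shown is an assumption; consult the planfile docs for the exact syntax it expects:

```markdown
<!-- hypothetical checklist items -->
# TODO

- [ ] Reduce cyclomatic complexity in core/parser.py
- [x] Raise test coverage above 90%
```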
```yaml
stages:
  - name: analyze
    run: code2llm ./ -f toon,evolution
```

pyqual reads from `analysis_toon.yaml` or `analysis.toon`:

```
SUMMARY:
  CC̄=2.5      # average cyclomatic complexity
  critical=0  # critical issues count
```

Metrics extracted: `cc`, `critical`
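If you gate on these metrics, a threshold block might look like the following sketch. The `gates:` key and the comparison strings are assumptions, not confirmed pyqual schema; check the pyqual configuration reference for the real syntax:

```yaml
# hypothetical gate syntax, shown only to illustrate how metrics are consumed
gates:
  cc: "<= 5.0"      # fail if average cyclomatic complexity exceeds 5
  critical: "== 0"  # fail on any critical issue
```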
```yaml
stages:
  - name: validate
    run: vallm batch ./ --recursive --errors-json > .pyqual/errors.json
```

pyqual reads from `validation_toon.yaml` or `.pyqual/errors.json`:

```
SUMMARY:
  scanned: 100
  passed: 95 (95.0%)  # vallm_pass metric
  warnings: 5
```

Metrics extracted: `vallm_pass`, `error_count`
```yaml
stages:
  - name: test
    run: pytest --cov --cov-report=json:.pyqual/coverage.json
```

pyqual reads from `.pyqual/coverage.json` (pytest-cov output):

```json
{
  "totals": {
    "percent_covered": 92.5
  }
}
```

Metrics extracted: `coverage`
```yaml
stages:
  - name: pylint
    run: pylint --output-format=json . > .pyqual/pylint.json 2>/dev/null || true
```

pyqual reads `.pyqual/pylint.json` (a list of messages, or a dict with a score):

Metrics extracted: `pylint_score`, `pylint_errors`, `pylint_fatal`, `pylint_warnings`
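For reference, pylint's JSON reporter emits a list of message objects along these lines (fields abridged; paths and values are illustrative):

```json
[
  {
    "type": "warning",
    "module": "mypkg.core",
    "path": "mypkg/core.py",
    "line": 12,
    "column": 4,
    "symbol": "unused-variable",
    "message": "Unused variable 'x'",
    "message-id": "W0612"
  }
]
```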
```yaml
stages:
  - name: ruff
    run: ruff check . --output-format=json > .pyqual/ruff.json 2>/dev/null || true
```

Metrics extracted: `ruff_errors`, `ruff_fatal`, `ruff_warnings`
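ruff's JSON output is a list of diagnostics shaped roughly like this (abridged; exact fields vary by ruff version, and the path shown is illustrative):

```json
[
  {
    "code": "F401",
    "message": "`os` imported but unused",
    "filename": "mypkg/core.py",
    "location": {"row": 1, "column": 8},
    "fix": null
  }
]
```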
```yaml
stages:
  - name: flake8
    run: flake8 --format=json . > .pyqual/flake8.json 2>/dev/null || true
```

Note: flake8 has no built-in JSON formatter; `--format=json` requires the flake8-json plugin.

Metrics extracted: `flake8_violations`, `flake8_errors`, `flake8_warnings`
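With that plugin, the output maps filenames to violation lists, roughly as below (an abridged sketch; field names follow flake8-json, and the path is illustrative):

```json
{
  "mypkg/core.py": [
    {
      "code": "E501",
      "line_number": 3,
      "column_number": 80,
      "text": "line too long (92 > 79 characters)"
    }
  ]
}
```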
```yaml
stages:
  - name: bandit
    run: bandit -r . -f json -o .pyqual/bandit.json 2>/dev/null || true
```

Metrics extracted: `bandit_high`, `bandit_medium`, `bandit_low`, `bandit_total`
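bandit's JSON report carries per-issue severities plus aggregate totals, roughly (abridged; the file and issue shown are illustrative):

```json
{
  "results": [
    {
      "filename": "mypkg/core.py",
      "test_id": "B303",
      "issue_severity": "HIGH",
      "issue_confidence": "MEDIUM",
      "issue_text": "Use of insecure MD5 hash function."
    }
  ],
  "metrics": {
    "_totals": {"SEVERITY.HIGH": 1, "SEVERITY.MEDIUM": 0, "SEVERITY.LOW": 0}
  }
}
```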
```yaml
stages:
  - name: radon
    run: radon mi . -j > .pyqual/radon.json 2>/dev/null || true
```

Metrics extracted: `maintainability_index`, `radon_cc`
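`radon mi -j` maps each file to its maintainability index and rank (path and values illustrative):

```json
{
  "mypkg/core.py": {"mi": 87.4, "rank": "A"}
}
```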
```yaml
stages:
  - name: interrogate
    run: interrogate --generate-badge=never --format=json . > .pyqual/interrogate.json
```

Metrics extracted: `docstring_coverage`, `docstring_missing`
```yaml
stages:
  - name: pip-audit
    run: pip-audit --format=json --output=.pyqual/vulns.json 2>/dev/null || true
```

Metrics extracted: `vuln_critical`, `vuln_high`, `vuln_medium`, `vuln_total`
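pip-audit's JSON lists each dependency with its known vulnerabilities, roughly as below. The vulnerability ID is a placeholder, and older pip-audit versions emit a flat list instead of a `dependencies` wrapper:

```json
{
  "dependencies": [
    {
      "name": "requests",
      "version": "2.19.0",
      "vulns": [
        {"id": "PYSEC-XXXX-NN", "fix_versions": ["2.20.0"]}
      ]
    }
  ]
}
```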
Use the built-in llx-fix preset for automatic code repair:
```yaml
stages:
  - name: prefact
    tool: prefact  # analyze issues → TODO.md
    when: any_stage_fail
    optional: true
  - name: fix
    tool: llx-fix  # apply fixes from TODO.md
    when: any_stage_fail
    optional: true
    timeout: 1800
```

Or use aider for AI pair-programming:
```yaml
stages:
  - name: aider-fix
    tool: aider
    when: any_stage_fail
    optional: true
```

The MCP workflow:

- Analyzes the project via `llx_analyze`
- Builds a fix/refactor prompt from gate failures or TODO.md issues
- Calls aider through the MCP service
- Saves results to `.pyqual/llx_mcp.json`

Use `pyqual mcp-refactor` when you want the same flow framed as a refactor task rather than a bugfix.
See examples/llm_fix/ and examples/llx/ for Docker-based and standalone setups.
📖 pyqual works with many other AI coding agents too — Claude Code, Codex CLI, Gemini CLI, Cursor, Windsurf, Cline. See AI Fix Tools for complete examples.
Extend `GateSet._collect_metrics()` or build a plugin:

```python
from pathlib import Path

from pyqual.gates import GateSet


class MyGateSet(GateSet):
    def _collect_metrics(self, workdir: Path) -> dict[str, float]:
        # Start from the built-in metrics, then merge in our own.
        metrics = super()._collect_metrics(workdir)
        metrics.update(self._from_my_tool(workdir))
        return metrics

    def _from_my_tool(self, workdir: Path) -> dict[str, float]:
        # Read whatever your tool produces and return it as floats.
        return {"my_metric": 42.0}
```

Or use the plugin system (see Plugin API and `examples/custom_plugins/`).
| Tool | Output File | Metrics | Optional? |
|---|---|---|---|
| code2llm | `analysis_toon.yaml` | `cc`, `critical` | Yes |
| vallm | `validation_toon.yaml` | `vallm_pass`, `error_count` | Yes |
| pytest | `.pyqual/coverage.json` | `coverage` | Yes |
| pylint | `.pyqual/pylint.json` | `pylint_score`, `pylint_errors`, `pylint_fatal`, `pylint_warnings` | Yes |
| ruff | `.pyqual/ruff.json` | `ruff_errors`, `ruff_fatal`, `ruff_warnings` | Yes |
| flake8 | `.pyqual/flake8.json` | `flake8_violations`, `flake8_errors`, `flake8_warnings` | Yes |
| bandit | `.pyqual/bandit.json` | `bandit_high`, `bandit_medium`, `bandit_low` | Yes |
| radon | `.pyqual/radon.json` | `maintainability_index`, `radon_cc` | Yes |
| interrogate | `.pyqual/interrogate.json` | `docstring_coverage`, `docstring_missing` | Yes |
| pip-audit | `.pyqual/vulns.json` | `vuln_critical`, `vuln_high`, `vuln_total` | Yes |
| planfile | `.planfile/` | Ticket management (TODO.md, GitHub) | Yes |
| llx MCP | `.pyqual/llx_mcp.json` | AI fix/refactor results | Yes |
| llx fix | (code changes) | Applies fixes from TODO.md | Yes |
| prefact | `TODO.md` | Issue detection for llx fix | Yes |
| custom | any | any | — |
All integrations are optional. Stages can be any shell commands.
- Linters pipeline — ruff, pylint, flake8, mypy, interrogate
- Security scanning — bandit, pip-audit, trufflehog, SBOM
- LLM fix/refactor (Docker) — Dockerized llx MCP workflow
- LLX integration — standalone llx pipeline
- Multi-gate pipeline — combining all tools