
Cameron Sanderson

Sterile pharmaceutical QA/QC technician moving into applied AI systems work. Eight-plus years in regulated manufacturing under cGMP, GLP/GDP, USP <797>, Health Canada, and FDA 21 CFR 210/211. The operating style I built handling deviations, CAPA, and validation now carries over to AI: traceability, failure-mode thinking, explicit uncertainty, and quality gates before outputs are trusted.

Based in Richmond Hill, Ontario. Bilingual English and French.

What I build

I work by directing and reviewing model-generated code, and I am actively learning to read and write Python directly. The systems I design tend to share a few patterns: modular roles, structured outputs, provenance labels, validation sweeps, and confidence calibration so the result does not overclaim what the inputs support.
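As a rough illustration of these patterns (every name and field below is invented for this sketch, not taken from any project described here), a structured output that carries a provenance label and a calibrated confidence field, gated by validation before it is trusted, might look like:

```python
from dataclasses import dataclass

# Hypothetical sketch: a structured record that carries its own provenance
# and a calibrated confidence label, so downstream steps can gate on both.
CONFIDENCE_LABELS = ("speculative", "tentative", "supported", "well-supported")

@dataclass
class Finding:
    claim: str
    provenance: str         # where the claim came from (source, tool, model)
    confidence: str         # one of CONFIDENCE_LABELS
    verified: bool = False  # flipped only after the quality gate passes

def validate(finding: Finding) -> Finding:
    """Quality gate: reject records that overclaim or lack provenance."""
    if finding.confidence not in CONFIDENCE_LABELS:
        raise ValueError(f"uncalibrated confidence: {finding.confidence!r}")
    if not finding.provenance.strip():
        raise ValueError("finding has no provenance")
    finding.verified = True
    return finding

record = validate(Finding(
    claim="Guidance was lowered on the latest earnings call.",
    provenance="earnings call transcript",
    confidence="supported",
))
```

The point of the shape is that nothing downstream consumes a record whose `verified` flag was never set by the gate.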

Public portfolio

Tripwire — a local-first template that turns an AI coding agent into a personal research analyst and portfolio skeptic. The system is built around provenance, disconfirming evidence, calibrated language, validation hooks before structured records are written, and an append-only audit log. It is designed to challenge a thesis rather than confirm it: leading with counterevidence, surfacing what the agent cannot verify, and replacing predictions with observable conditions that should trigger reconsideration. Agent-neutral, tested with Claude, Codex, and Gemini. Currently v0.1.0.
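Tripwire's internals are not reproduced here, but the append-only audit pattern it is built around can be sketched in a few lines. The file name, required fields, and record shape below are illustrative assumptions, not Tripwire's actual layout:

```python
import json
import time

AUDIT_LOG = "audit.jsonl"  # illustrative path, not Tripwire's actual layout

def append_audit(entry: dict, path: str = AUDIT_LOG) -> None:
    """Validation hook, then append. The log is only ever opened in
    append mode, so earlier records are never rewritten or deleted."""
    required = {"action", "source"}  # assumed minimum fields for this sketch
    missing = required - entry.keys()
    if missing:
        raise ValueError(f"audit entry missing fields: {sorted(missing)}")
    stamped = {"ts": time.time(), **entry}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(stamped) + "\n")
```

One JSON object per line (JSON Lines) keeps each append atomic enough for a single-user local tool and makes the log trivially greppable.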

Career Decision Engine — a browser-based decision-support tool for comparing job offers and career paths. Relative scoring across five dimensions, rule checks separated from the weighted score, calibrated confidence labels for clear leads and close calls, and a guided intake that derives starting weights from tradeoff answers. Engine logic is separated from UI and covered by browser tests, an invariant sweep, an output-quality sweep, and a 6,309-case combination matrix.

Live demo: camerontjs-dot.github.io/career-decision-engine
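The engine itself is JavaScript and not reproduced here; as a rough Python illustration of the separation the paragraph describes (the dimension names, weights, floor, and margin threshold below are all invented for this sketch), the pattern looks like:

```python
# Hypothetical sketch: a weighted score across five dimensions, hard rule
# checks kept separate from the score, and a confidence label derived from
# the margin between the top two options.
WEIGHTS = {"compensation": 0.30, "growth": 0.25, "stability": 0.20,
           "flexibility": 0.15, "mission": 0.10}  # invented example weights

def weighted_score(ratings: dict) -> float:
    """Ratings are assumed to be on a 1-5 scale per dimension."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

def rule_checks(ratings: dict) -> list:
    """Deal-breakers reported alongside, never folded into, the score."""
    flags = []
    if ratings["stability"] <= 1:  # invented floor for illustration
        flags.append("stability below acceptable floor")
    return flags

def confidence_label(scores: list) -> str:
    """Label the comparison by the margin between the top two scores."""
    top, second = sorted(scores, reverse=True)[:2]
    return "clear lead" if top - second >= 0.5 else "close call"
```

Keeping `rule_checks` out of `weighted_score` means a deal-breaker can never be averaged away by strong ratings elsewhere, which is the separation the engine's description emphasizes.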

Private systems

Two larger systems are in active use but not yet public. I can walk through the architecture in interviews without exposing sensitive content.

  • Command Center — a local multi-context AI workflow system with specialized modes for research, analysis, portfolio management, and decision support. Built around modular instruction files, structured outputs, provenance, and scope control.
  • The Registered Edge — a content workflow with 20-plus custom skill modules for research synthesis, drafting, review, formatting, and quality control. Designed to reduce AI-tell writing, surface uncertainty, preserve reasoning context, and separate human judgment from model-generated support.

Where I am calibrated

I have not published formal ML research. My evidence is execution-based rather than credential-based, and my strongest contribution to AI work is the validation posture I bring from regulated quality environments. The public portfolio is still being expanded.

Contact
