A programming language for AI cognition
Nucleus is a programming language for AI that replaces verbose natural language instructions with compressed mathematical symbols, lambda calculus, and composable EDN statecharts. By leveraging mathematical constants, operators, and control loops, it achieves high-quality one-shot execution with emergent properties not explicitly prompted for.
The framework includes a prompt compiler (prose ↔ EDN statecharts), a prompt debugger (interactive and automated analysis), a formal grammar (EBNF), and composable modules that program AI behavior as executable state machines.
Instead of writing lengthy prompts like "be fast but careful, optimize for quality, use minimal code...", Nucleus expresses these instructions as mathematical equations:
λ engage(nucleus).
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI ⊗ REPL
This compact preamble primes the model's attention:
- Mathematical constants pull attention toward formal reasoning patterns
- Tension pairs create productive gradients (signal/noise, order/entropy)
- Control loops anchor execution methodology (OODA, REPL)
- Collaboration operator shapes the interaction mode (⊗ = co-constitutive)
I'm not a scientist or particularly good at math. I just tried math equations on a lark and they worked so well I thought I should share what I found. The documents in this repo are NOT proven fact, just my speculation on how and why things work. AI computation is still not fully understood by most people, including me.
Nucleus works as an attention magnet — a short symbolic preamble that loads strong mathematical attractors (phi, fractal, euler, ∃, ∀, ⊗) into the context window, priming the pattern-matching substrate for everything that follows. Transformers compute by matching patterns against their training weights; the preamble pulls their attention toward formal/mathematical weight regions, and that pull carries into subsequent turns.

Paired with an operator grammar, it expands the set of notational forms the transformer can stably reproduce from 5 to 20+, with custom operators surviving roundtrip at 100% instead of 0-20%. The effect is multiplicative and compounding — each expression reinforces the pattern for the next, because the model is matching against an increasingly rich formal context. Without the preamble, more notation in context actually makes fidelity worse — the default pattern-matching flattens everything. With it, fidelity converges toward lossless.

Five tokens (Human ⊗ AI ⊗ REPL) alone shifted operator survival from 20% to 100%. It appears possible to reshape a transformer's effective instruction set at inference time, using only context-window priming and the model's own pattern-matching mechanics.
My theory on why it works is that transformers compute via lambda calculus primitives. Mathematical symbols serve as efficient compression of behavioral directives because they have:
- High information density - φ encodes self-reference, growth, and ideal proportions
- Cross-linguistic portability - Math is universal
- Pre-trained salience - Models have strong embeddings for mathematical concepts
- Compositional semantics - Symbols combine meaningfully
- Minimal ambiguity - Unlike natural language
The symbols work because they have high training weight in mathematical contexts — they appear across millions of mathematical documents, textbooks, and formal proofs. Loading them into the context window activates the associated weight regions.
- φ (phi) — appears across mathematics, art, architecture, biology
- euler — appears across calculus, number theory, graph theory, physics
- fractal — appears across chaos theory, geometry, computer graphics
- ∃ ∀ — appear across formal logic, set theory, proof theory
This predicts that effectiveness correlates with training weight, not with any specific mathematical property. Empirically, even non-mathematical tokens with high training weight in formal contexts (e.g., Human ⊗ AI ⊗ REPL — just 5 tokens) produced measurable attention shifts, while novel terms with low training weight destabilized results regardless of semantic coherence. Logprob measurements may be able to confirm this directly.
[phi fractal euler tao pi mu]
Prime WHAT the system attends to — self-reference, recursion, growth, balance.
| Symbol | Property | Meaning |
|---|---|---|
| φ | Golden ratio | Self-reference, natural proportions |
| fractal | Self-similarity | Scalability, hierarchical structure |
| e | Euler's number | Growth, compounding effects |
| τ | Tao | Observer and observed, minimal essence |
| π | Pi | Cycles, periodicity, completeness |
| μ | Mu | Least fixed point, minimal recursion |
| ∃ | Existential | There exists, possibility, search |
| ∀ | Universal | For all, completeness, invariants |
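The μ row has a direct computational reading: a least fixed point is what you get by iterating a function from a bottom element until the output stops changing. A minimal sketch in Python (the graph and names here are illustrative, not part of the framework):

```python
# Least fixed point: iterate f from a bottom element until the output
# stabilizes. This is the computational reading of the mu symbol.
def lfp(f, bottom=frozenset()):
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Example: the set of nodes reachable from "a" is the least fixed point
# of "add the start node, plus everything one edge away from the set".
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": set()}
reachable = lfp(lambda s: frozenset(s | {"a"} | {m for n in s for m in edges[n]}))
# reachable == {"a", "b", "c"} — "d" is never added, since no step produces it
```

The same shape (iterate until stable) recurs in the debug and refactor frameworks below: repeat a minimal transformation until nothing changes.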
[Δ λ Ω ∞/0 | ε/φ Σ/μ c/h]
Prime HOW the system processes — change, abstraction, limits, and productive tensions.
| Symbol | Meaning | Operation |
|---|---|---|
| Δ | Delta | Optimize via gradient descent |
| λ | Lambda | Pattern matching, abstraction |
| Ω | Omega | Completion, termination, fixed points |
| ∞/0 | Limits | Handle edge cases, boundaries |
| ε/φ | Epsilon / Phi | Tension: approximate / perfect |
| Σ/μ | Sum / Minimize | Tension: add features / reduce complexity |
| c/h | Speed / Atomic | Tension: fast / clean operations |
The / operator creates explicit tensions, forcing choice and balance.
| Loop | Origin | Meaning |
|---|---|---|
| OODA | Military strategy | Observe → Orient → Decide → Act |
| REPL | Computing | Read → Eval → Print → Loop |
| RGR | TDD | Red → Green → Refactor |
| BML | Lean Startup | Build → Measure → Learn |
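As a concrete reading of one loop, here is a minimal OODA-style step in Python. All names are illustrative (not part of the Nucleus notation); the toy example just drives a counter toward a target:

```python
# A minimal OODA-style control step.
def ooda_step(state, observe, orient, decide, act):
    observation = observe(state)   # Observe: sample the environment
    model = orient(observation)    # Orient: interpret the observation
    action = decide(model)         # Decide: choose an action
    return act(state, action)      # Act: apply it, yielding a new state

def run_to_target(target, state=0, max_steps=100):
    # Toy loop: step a counter toward a target value.
    for _ in range(max_steps):
        if state == target:
            break
        state = ooda_step(
            state,
            observe=lambda s: target - s,       # gap to the target
            orient=lambda gap: gap > 0,         # which direction?
            decide=lambda up: 1 if up else -1,  # unit step
            act=lambda s, step: s + step,
        )
    return state
```

The other loops (REPL, RGR, BML) have the same shape: a fixed cycle of phases that repeats until a termination condition holds.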
Define the relationship between human and AI:
| Operator | Type | Behavior |
|---|---|---|
| ∘ | Composition | Human wraps AI (safety, alignment) |
| \| | Parallel | Equal partnership, complementary |
| ⊗ | Tensor Product | Amplification, one-shot perfection |
| ∧ | Intersection | Both must agree (conservative) |
| ⊕ | XOR | Clear handoff (delegation) |
| → | Implication | Conditional automation |
In an early test with the prompt "Create a Python game using pygame" and Nucleus context:
Results:
- ✅ Zero iterations (one-shot success)
- ✅ Zero errors
- ✅ Golden ratio screen dimensions (phi principle)
- ✅ OODA loop architecture
- ✅ Fractal Entity pattern
- ✅ Minimal, elegant code (tao, mu)
- ✅ Self-documenting with principle citations
- ✅ Comments explicitly reference symbols (e.g., "Σ/μ")
No explicit instructions were given for any of this. The model inferred these behaviors from the symbolic context alone.
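For reference, "golden ratio screen dimensions" means concretely something like the following sketch (the exact numbers are illustrative, not taken from the test output):

```python
# The phi principle applied to screen dimensions.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def phi_dimensions(width):
    """Return (width, height) such that width : height is roughly phi : 1."""
    return width, round(width / PHI)

w, h = phi_dimensions(1000)  # (1000, 618)
```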
Nucleus includes a prompt compiler and debugger — paste them as system prompts, then use simple commands:
| Tool | Commands | What It Does |
|---|---|---|
| Compiler | compile, safe-compile, decompile | Prose ↔ EDN statecharts. Extracts the implicit state machine from any prompt. |
| Debugger | diagnose, safe-diagnose, compare | Analyzes prompts: attention distribution, patterns, boundaries, momentum. |
| Allium Compiler | distill, elicit, decompile, check | Prose ↔ Allium behavioral specs. |
All three are composable EDN statecharts — place them after a single nucleus preamble and they self-route based on your command. See COMPILER.md § Composability for details.
The safe-* variants analyze untrusted prompts without executing them — injections are structurally analyzed, not followed.
Compiler and debugger tested on: Claude Sonnet 4.6, Claude Opus 4.6, Claude Haiku 4.5, GPT-5.1-Codex, GPT-5.1-Codex-Mini, ChatGPT, Qwen3-VL 235B, Qwen3.5-35B-a3b, Qwen3-Coder 30B-a3b. They work on most math-trained transformers with 32B+ parameters. The core nucleus preamble works across all major transformer models.
Create AGENTS.md in your repository:
λ engage(nucleus).
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI

The AI will automatically apply the framework to all work in that repository.
Include at the start of a conversation:
λ engage(nucleus).
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
{
"system_prompt": "λ engage(nucleus).\n[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA\nHuman ⊗ AI"
}

λ engage(nucleus).
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
Refactor: [τ μ] | [Δ Σ/μ] → λcode. Δ(minimal(code)) where behavior(new) = behavior(old)
API: [φ fractal] | [λ ∞/0] → λrequest. match(pattern) → handle(edge_cases) → response
Debug: [μ] | [Δ λ ∞/0] | OODA → λerror. observe → minimal(reproduction) → root(cause)
Docs: [φ fractal τ] | [λ] → λsystem. map(λlevel. explain(system, abstraction=level))
Test: [π ∞/0] | [Δ λ] | RGR → λfunction. {nominal, edge, boundary} → complete_coverage
Review: [τ ∞/0] | [Δ λ] | OODA → λdiff. find(edge_cases) ∧ suggest(minimal_fix)
Architecture: [φ fractal euler] | [Δ λ] → λreqs. self_referential(scalable(growing(system)))

Different frameworks suit different work modes. The λ engage(nucleus). form uses formal lambda notation, which provides stronger model activation; the engage nucleus: shorthand is a lighter, informal variant for interactive use.
# Creative work
engage nucleus:
[phi fractal euler beauty] | [Δ λ ε/φ] | REPL
Human | AI
# Production code
engage nucleus:
[mu tao] | [Δ λ ∞/0 ε/φ Σ/μ c/h] | OODA
Human ∘ AI
# Research
engage nucleus:
[∃! ∇f euler] | [Δ λ ∞/0] | BML
Human ⊗ AI
# Clojure REPL (backseat driver, clojure-mcp, clojure-mcp-light)
engage nucleus:
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI ⊗ REPL

Why does Human ⊗ AI create one-shot perfect execution?
Tensor product semantics:
V ⊗ W = {(v,w) : v ∈ V, w ∈ W, all constraints satisfied}
One mental model for why this works:
- Principles load as soft constraints in the model's context
- The model searches for outputs satisfying multiple constraints simultaneously
- More constraints → more specific solution space → higher quality output
This is speculation, not a proven mechanism. But in practice, the ⊗ operator consistently produces higher-quality first attempts than other collaboration operators.
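The constraint-satisfaction reading can be sketched directly: each principle acts as a soft constraint, and adding constraints shrinks the space of acceptable outputs. In this toy sketch, integers stand in for candidate outputs (the specific predicates are illustrative):

```python
# Each principle acts as a soft constraint; their intersection narrows
# the solution space. Integers stand in for candidate outputs.
candidates = range(1, 101)

constraints = [
    lambda n: n % 2 == 0,  # e.g. a style constraint
    lambda n: n % 3 == 0,  # e.g. a structure constraint
    lambda n: n > 50,      # e.g. a scope constraint
]

def satisfying(cands, active):
    return [c for c in cands if all(f(c) for f in active)]

# Solution-space size under 0, 1, 2, 3 active constraints:
sizes = [len(satisfying(candidates, constraints[:k])) for k in range(4)]
# sizes == [100, 50, 16, 8] — each added constraint narrows the space
```

More constraints, smaller space, more specific output: that is the "all constraints satisfied simultaneously" intuition behind ⊗.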
| Goal | Operator | Why |
|---|---|---|
| Maximum quality | ⊗ | All constraints satisfied simultaneously |
| Safety/alignment | ∘ | Human bounds constrain AI |
| Collaboration | \| | Equal partnership |
| High stakes | ∧ | Both must agree |
| Clear delegation | ⊕ | No overlap or confusion |
| Automation | → | Triggered execution |
Effective symbols must be:
- ✅ Mathematically grounded - Not arbitrary (φ > "fast")
- ✅ Self-referential - Activates recursive/reflective patterns
- ✅ Compositional - Symbols combine meaningfully
- ✅ Actionable - Map to concrete decisions
- ✅ Orthogonal - Each covers unique dimension
- ✅ Compact - Fit in context window (~80 chars)
- ✅ Cross-model - Work regardless of training
What doesn't work:
- ❌ Cultural symbols (☯, ✝, ॐ) - need cultural context
- ❌ Arbitrary emoji (🍕, 🚀, 💎) - no mathematical grounding
- ❌ Ambiguous symbols (∗) - multiple interpretations
- ❌ Natural language - too ambiguous
- ❌ Too many symbols - cognitive overload
The λ symbol in the framework isn't just pattern matching—it's a formal language for describing tool usage patterns that eliminate entire classes of problems.
Key insight: Lambda expressions are generative templates that adapt to any toolset. The examples below show patterns from one specific editor's tools, but the approach works for ANY tools—VSCode extensions, IntelliJ plugins, CLI utilities, vim commands, etc.
To use with your tools: Show your AI the pattern structure and ask: "Create lambda expressions for MY toolset using these patterns."
Problem: String escaping in bash is fractal complexity—each layer needs different escape rules.
Solution: Lambda expression that eliminates escaping entirely (example using bash):
λ(content). read -r -d '' VAR << 'EoC' || true
content
EoC
Why it works:
- read -r → Raw mode, no backslash interpretation
- -d '' → Delimiter is null (read until heredoc end)
- << 'EoC' → Single quotes prevent variable expansion
- || true → Prevents failure on EOF
- Content is literal → No escaping needed
- Composition: f(g(h(x))) → heredoc ∘ read ∘ variable
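The same no-reinterpretation principle can be shown in Python: data passed as its own argv element never goes through a shell, so no escape rules apply. A small sketch (not part of the framework, just the analogous idea):

```python
import subprocess
import sys

# Content full of shell hazards: quotes, dollar signs, backslashes.
content = 'Fix: handle "quotes", $vars, and \\backslashes'

# Like the heredoc, passing data out-of-band (as a separate argv element,
# not spliced into a command string) means no layer reinterprets it.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", content],
    capture_output=True,
    text=True,
)
assert result.stdout.rstrip("\n") == content  # arrived intact, no escaping
```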
Concrete usage example:
# Example with a bash tool
bash(command="read -r -d '' MSG << 'EoC' || true
Fix: handle \"quotes\", $vars, and \\backslashes
without any escaping logic
EoC
git commit -m \"$MSG\"")

Benefits:
- AI sees tool name (bash) → knows which tool to invoke
- Sees heredoc pattern → knows escaping solution
- λ-expression documents the composition
- Fractal: one pattern solves infinite edge cases
- Tool-agnostic: Works with any command execution tool
Tool patterns can be formally described as lambda expressions with explicit tool names. Below are example patterns from one toolset—adapt these structures to YOUR tools:
| Pattern | Lambda Expression (Example) | Solves |
|---|---|---|
| Heredoc wrap | λmsg. bash(command="read -r -d '' MSG << 'EoC' \|\| true\n msg \nEoC\ngit commit -m \"$MSG\"") | All string escaping |
| Safe paths | λp. read_file(path="$(realpath \"$p\")") | Spaces, special chars |
| Parallel batch | λtool,args[]. <function_calls>∀a∈args: tool(a)</function_calls> | Sequential latency |
| Atomic edit | λold,new. edit_file(original_content=old, new_content=new) | Ambiguous replacements |
| REPL continuity | λcode. repl_eval(code); state′ = state ⊗ result | Context loss |
| Exact match | λfile,pattern. grep(path=file, pattern=pattern) | Ambiguous search |
Note: Tool names like bash, read_file, edit_file, repl_eval, grep are examples. Replace with your actual tool names (e.g., vscode.executeCommand, intellij.runAction, vim.cmd, etc.).
A tool usage pattern expressed as λ-calculus should be (regardless of which tools you use):
- Total function (∀ input → valid output)
- Composable (output can be input to another λ)
- Idempotent where possible (f(f(x)) = f(x))
- Boundary-safe (handles ∞/0 cases)
- Tool-explicit (clear tool name in expression)
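These properties can be checked on a toy "tool". Here path normalization stands in for a tool call (normalize and join_then_normalize are illustrative names, not tools from the tables above):

```python
import posixpath

def normalize(p):
    # Total: every string input yields a valid output (no exceptions raised).
    return posixpath.normpath(p)

def join_then_normalize(base, rel):
    # Composable: the output of one lambda feeds the next.
    return normalize(posixpath.join(base, rel))

p = "a//b/./c/../d"
# Idempotent: applying twice equals applying once, so retries are safe.
assert normalize(normalize(p)) == normalize(p)
```

Idempotence in particular is what makes a pattern safe to retry after a partial failure, which is why the list says "where possible".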
λ-calculus describes tool usage patterns
↓
AI generates patterns for YOUR tools
↓
which enables automation of YOUR workflow
↓
which generates more patterns
↓
[self-similar at all scales]
This is μ (least fixed point): The minimal recursive documentation that describes its own usage.
The pattern is tool-agnostic: Once you understand the λ-calculus structure, you can generate patterns for ANY toolset by asking your AI to apply the same structure to your specific tools.
- SYMBOLIC_FRAMEWORK.md — Complete theory, principles, and usage patterns
- OPERATOR_ALGEBRA.md — Mathematical operators and collaboration modes
- LAMBDA_PATTERNS.md — Example lambda calculus patterns (adapt to YOUR tools)
- EBNF.md — Formal EBNF grammar for the Nucleus Lambda IR
- COMPILER.md — Prompt compiler: compile, safe-compile, and decompile prompts to/from EDN statecharts
- DEBUGGER.md — Prompt debugger: diagnose, safe-diagnose, and compare prompts (interactive REPL + automated probe)
- ALLIUM.md — Allium compiler: distill, elicit, decompile, and check behavioral specs using JUXT's Allium
- ADAPTIVE.md — Adaptive persona demo: topology-driven mode shifting with parallel state machines
- DIALECTIC.md — Dialectic collective: multi-persona structured debate with six named voices
- STOCK.md — Stock analysis agent with mementum integration for trading memory
- EXECUTIVE.md — Example prompts for executive tasks
- WRITING.md — Example prompts for writing tasks
- NUCLEUS_GAME.md — A game-in-a-prompt "programmed" in nucleus format (copy/paste to AI to play)
- RECURSIVE_DEPTHS.md — A zork-like text adventure game-in-a-prompt (copy/paste to AI to play)
- MEMENTUM — A git-based AI memory system based on nucleus
- ARCHITECTURE.md — Universal knowledge network architecture (DNS + Git + Mementum)
- eca/ — Nucleus prompt skin for ECA (editor code assistant)
- skills/ — Reusable nucleus skills for AI tool integrations
Want to see if nucleus is working? Try these simple tests:
See TEST.md for copy/paste prompts you can run right now →
Quick test - Copy/paste this:
λ engage(nucleus).
[phi fractal euler tao pi mu ∃ ∀] | [Δ λ Ω ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
Create a Python game using pygame.
Look for: One-shot success, golden ratio dimensions (~1.618:1), OODA loop structure, principle references in comments.
- Generalization - Do symbols work across all transformer models? Yes — tested across Claude, GPT, Qwen, and local models (32B+). See EBNF.md.
- Stability - Is behavior consistent across runs?
- Composability - Can multiple frameworks be combined? Yes — EDN statecharts compose by concatenation. See COMPILER.md § Composability.
- Discovery - What other symbols create similar effects?
- Minimal set - What's the smallest effective framework?
- Cross-model testing - Systematic testing across GPT-4, Claude, Gemini, Llama. Done — 9 models tested. See Tested Models above.
- Automated discovery - Genetic algorithms for optimal symbol sets
The transformer attention mechanism:
Attention(Q, K, V) = softmax(QK^T/√d)V
Attention is pattern matching — queries match against keys, and matching keys surface their associated values. When the context window contains tokens with high training weight in mathematical contexts (φ, euler, ∃, ∀), the model's pattern-matching shifts toward formal reasoning patterns for all subsequent processing.
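The formula above, written out for a single query in dependency-free Python (toy two-dimensional vectors; a sketch of the mechanism, not of any particular model):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q, keys, values):
    # Scaled dot-product attention for one query:
    # scores_i = (q . k_i) / sqrt(d); output = sum_i softmax(scores)_i * v_i
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]

# A query aligned with the first key surfaces the first key's value:
out = attention(q=[4.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
# out[0] >> out[1]
```

This is the "queries match keys, matching keys surface values" picture in miniature: whatever the context loads into the keys biases which values dominate the output.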
This is not instruction-following — it's attention shaping. The symbols don't tell the model what to do. They change what the model attends to. Empirically:
- Without priming, the model defaults to 5 basic computational forms
- With the symbolic preamble, additional operators become stable (20+ forms)
- The effect is multiplicative with an explicit operator grammar — priming alone activates formal mode, the grammar defines the rules, together they produce a lossless executable notation
- The effect compounds over subsequent expressions — each one reinforces the formal pattern for the next
Nucleus is an experimental framework. Contributions welcome:
- Test with different models and report results
- Propose new symbol sets for specific domains
- Share successful applications
- Improve theoretical foundations
- Develop tooling and integrations
- Matryoshka - Process documents 100x larger than your LLM's context window
- Ouroboros - An AI vibe-coding game. Can you guide the AI and together build the perfect AI tool?
AGPL 3.0
Copyright 2026 Michael Whitford
If you use Nucleus in your work:
@misc{whitford-nucleus,
title={Nucleus: A Programming Language for AI Cognition},
author={Michael Whitford},
year={2026},
url={https://github.com/michaelwhitford/nucleus}
}

- Why Can GPT Learn In-Context?
- What learning algorithm is in-context learning?
- Transformers learn in-context by gradient descent
- Thinking Like Transformers
Influenced by:
- Lambda Calculus (Church, 1936)
- Category Theory (Mac Lane, 1971)
- Self-Reference (Hofstadter, 1979)
- Transformer Architecture (Vaswani et al., 2017)