NRR-IME provides an interface-structure package for ambiguity-preserving inference under stateless LLM API constraints. The engineering objective is to reduce premature commitment in LLM decoding and limit downstream rework caused by semantic collapse. The control policy is defer vs commit under explicit conditions: maintain compatible alternatives while evidence is weak, then commit when action boundaries are clear. This repository includes the current manuscript snapshot and reproducibility assets for benchmarking which interface decomposition yields stable state transitions, with explicit protocol constraints and condition-bounded claims.
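The defer-vs-commit policy described above can be sketched as a minimal decision rule. This is a hedged illustration, not the repository's implementation: the function name, scoring interface, and threshold below are assumptions made for the example.

```python
# Minimal sketch of a defer-vs-commit control policy (illustrative only).
# Assumption: each alternative has an evidence score in [0, 1]; we commit
# only when the best score clears an explicit threshold, otherwise we defer
# and keep all compatible alternatives alive.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str          # "commit" or "defer"
    alternatives: list   # alternatives kept alive after the decision


def defer_or_commit(alternatives, scores, commit_threshold=0.8):
    """Commit to the top alternative only when evidence is strong;
    otherwise defer and preserve the full set of compatible options."""
    best = max(alternatives, key=lambda a: scores[a])
    if scores[best] >= commit_threshold:
        return Decision("commit", [best])
    return Decision("defer", list(alternatives))


# Weak evidence -> defer; strong evidence -> commit.
weak = defer_or_commit(["A", "B"], {"A": 0.55, "B": 0.45})
strong = defer_or_commit(["A", "B"], {"A": 0.90, "B": 0.10})
print(weak.action, strong.action)  # defer commit
```

The design point is that commitment is gated by an explicit condition rather than happening implicitly at decode time; everything below the threshold stays ambiguity-preserving.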
Quick links
- Manuscript snapshot: manuscript/current/paper4-nrr-ime-v67.tex, manuscript/current/paper4-nrr-ime-v67.pdf
- Positioning (NRR vs related approaches)
- Search Keywords and Weekly Rank Log
- EN/JA query terms:
  - early commitment = 早期確定
  - ambiguity-preserving inference = 曖昧性保持推論
Part of the Non-Resolution Reasoning (NRR) research program.
For the cross-paper map and current series links, start here:
- NRR is not an anti-LLM framework.
- NRR does not replace standard LLM use.
- NRR optimizes when to commit and when to defer, under explicit conditions.
Series numbering policy: paper3 is permanently skipped and never reused.
Current manuscript snapshot:
- manuscript/current/paper4-nrr-ime-v67.tex
- manuscript/current/paper4-nrr-ime-v67.pdf
- manuscript/current/fig1_design_space.png
- manuscript/current/fig2_bangbang.png
- manuscript/current/fig3_comparative.png
- manuscript/current/fig4_stability.png
- manuscript/current/checksums_sha256.txt
Posting timelines may vary by platform availability.
```
nrr-ime/
|-- README.md
|-- LICENSE
|-- requirements.txt
|-- reproducibility.md
|-- manuscript/
|   `-- current/
|       |-- paper4-nrr-ime-v67.tex
|       |-- paper4-nrr-ime-v67.pdf
|       |-- fig1_design_space.png
|       |-- fig2_bangbang.png
|       |-- fig3_comparative.png
|       |-- fig4_stability.png
|       `-- checksums_sha256.txt
|-- experiments/
|   |-- experimental_data.json
|   |-- generate_figures.py
|   |-- paper4_crossmodel_v5.ipynb
|   |-- phase_comparison.py
|   `-- scaling_validation.py
`-- .gitignore
```
```
pip install -r requirements.txt
bash scripts/generate_manuscript_figures.sh
python3 experiments/phase_comparison.py
python3 experiments/scaling_validation.py
```

The bundled experiments/experimental_data.json is the merged Paper 4 experiment dataset used for the current manuscript line. It includes the 135 run-level records used by the repository comparison scripts. Full provider-side infrastructure logs are not bundled in this snapshot.
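A minimal way to load the bundled dataset, assuming only that experimental_data.json decodes to a top-level JSON array of run-level records; the field names of individual records are not assumed here.

```python
# Sketch: load the merged run-level dataset (assumes a top-level JSON array).
import json
from pathlib import Path


def load_runs(path="experiments/experimental_data.json"):
    """Return the list of run-level records from the merged dataset."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    if not isinstance(data, list):
        raise ValueError("expected a top-level JSON array of run records")
    return data


# Usage (from the repository root); the manuscript line reports 135 records:
#   runs = load_runs()
#   print(len(runs))
```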
experiments/generate_figures.py regenerates the four manuscript figure PNGs from
the bundled merged dataset. The stable wrapper writes to a temp output directory by
default so the tracked review package remains unchanged.
The main experiment implementation used for this dataset is bundled as
experiments/paper4_crossmodel_v5.ipynb.
See reproducibility.md for fixed settings and artifact mapping.
Stable review-package entrypoints:
```
bash scripts/generate_manuscript_figures.sh
bash scripts/build_current_manuscript.sh
bash scripts/verify_current_package.sh
```
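If checksums_sha256.txt follows the standard `sha256sum` manifest format (an assumption; inspect the file to confirm), the manuscript package can also be verified by hand. The sketch below demonstrates the pattern on a temporary file rather than the tracked package:

```shell
# Verify a sha256 manifest with `sha256sum -c` (standard coreutils usage).
# In the repository, assuming the manifest lists bare filenames, this would be:
#   (cd manuscript/current && sha256sum -c checksums_sha256.txt)
tmpdir=$(mktemp -d)
echo "example artifact" > "$tmpdir/artifact.pdf"
(
  cd "$tmpdir"
  sha256sum artifact.pdf > checksums_sha256.txt   # write the manifest
  sha256sum -c checksums_sha256.txt               # prints "artifact.pdf: OK"
)
rm -rf "$tmpdir"
```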
- https://github.com/kei-saito-research/nrr-core
- https://github.com/kei-saito-research/nrr-phi
- https://github.com/kei-saito-research/nrr-transfer
I support written technical Q&A, concept clarification, and small-scale evaluation design.
Typical flow:
- you send questions and context,
- I return a structured technical response,
- if needed, I provide an English-ready version for external sharing.
Scope: research interpretation and evaluation planning.
Out of scope: production integration, implementation outsourcing, ongoing operations, and SLA/deadline commitments.
Contact: kei.saito.research@gmail.com
CC BY 4.0. See LICENSE.