Local-first AI writing assistant with story structure + chatbot + image prompt support.
- You are the author in the driver's seat: your story is your story, and the AI is a creative partner (from brainstorm buddy to ghostwriter-style assistant) that supports your voice and choices.
- Join the community: r/AugmentedQuill
- Clone repository and create Python environment
git clone https://github.com/StableLlamaAI/AugmentedQuill.git
cd AugmentedQuill
python -m venv venv && source venv/bin/activate
- Install dependencies
python -m pip install -e ".[dev]"
- Build frontend
cd src/frontend && npm install && npm run build
- Run backend
- Default:
augmentedquill --reload --host 127.0.0.1 --port 8000
- Run frontend dev server
cd src/frontend && npm run dev
- Default proxy target: backend on port 8000
- Override with environment variable (if you use a different backend port):
VITE_BACKEND_URL=http://127.0.0.1:<your-port> npm run dev
- Open
http://127.0.0.1:5173 (Vite dev server)
http://127.0.0.1:8000/ (production mode)
- Ensure your OpenAI-compatible API provider endpoint is running and reachable (a local llama.cpp/Ollama endpoint or a cloud OpenAI-compatible endpoint), and enter the key/URL in Settings before creating your first project.
- Talk to the Writing Partner (AI chat)
- Create a project - or let the Writing Partner do it for you
- Add sourcebook entries - or let the Writing Partner do it for you
- Add chapters / short story content - or let the Writing Partner do it for you
- (Optional) Open Images panel and use prompt generator to create images in external tools
The complete user guide is in docs/user_manual/:
- Getting started
- Projects and settings
- Writing interface
- Chapters and books
- Sourcebook
- Project images
- AI chat assistant
- Appearance and display
- First story tutorial
- Writing a story
- Troubleshooting
Tip: Start with 01_getting_started.md, then 03_writing_interface.md.
- Project-based story authoring (short story, novel, series)
- Multi-chapter and multi-book structure
- Live AI writing assistant and chat (local API key / OpenAI-compatible endpoints)
- Custom prompt pipelines (editor, writer, chat voices)
- Sourcebook (characters, scenes, lore, items, etc.)
- Image metadata + optimized image prompt generation
- Config-driven with JSON templates and env overrides
- Auto-captured project artifacts in data/projects
- Local-first app. No built-in auth. Do not expose to public internet without reverse proxy + access control.
- Security model: single-user local use.
- Browser-based LLM calls may require CORS-friendly endpoints, or use the internal proxy route /api/v1/openai/models.
- AugmentedQuill does not include an LLM server; you must point it at an OpenAI-compatible API endpoint (self-hosted or cloud). For local use, set up a compatible host such as llama.cpp, Ollama, or another OpenAI-API-compliant server.
- For easier setup, try the official Electron or Docker builds provided with each release instead of building from source.
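As a quick sanity check before entering the URL in Settings, the endpoint can be probed with a short script. This is a minimal sketch, assuming a standard OpenAI-compatible `/models` route; the `list_models` helper is illustrative and not part of AugmentedQuill:

```python
import json
import urllib.request
import urllib.error

def list_models(base_url: str, api_key: str = "", timeout: float = 5.0):
    """Return the model list from an OpenAI-compatible endpoint, or None if unreachable."""
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    req = urllib.request.Request(f"{base_url.rstrip('/')}/models", headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            # OpenAI-style responses wrap the model entries in a "data" array
            return json.load(resp).get("data", [])
    except (urllib.error.URLError, OSError):
        return None

# e.g. list_models("http://127.0.0.1:11434/v1") for a default local Ollama server
```

If this returns None, check that the server is running and the URL (including any `/v1` suffix) matches what the provider expects.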
- Backend: src/augmentedquill/
- Frontend: src/frontend/
- Integration artifacts: static/ and data/
- Tests: tests/unit/
- Config schemas: resources/schemas/
- Backend lint/test:
ruff check .
black --check .
python -m pytest
- Frontend:
cd src/frontend && npm run lint && npm run test && npm run build
- Accessibility validation (new):
cd src/frontend && npm run test:accessibility
- Quick run:
augmentedquill --reload --host 127.0.0.1 --port 28000
Runtime config:
data/config/machine.json
data/config/story.json
data/config/projects.json
Model endpoint variables:
OPENAI_API_KEY
OPENAI_BASE_URL
OPENAI_MODEL
OPENAI_TIMEOUT_S
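For example, these variables could be exported in the shell before launching the backend. All values below are placeholders (the base URL assumes a local Ollama server on its default port); substitute your own provider's details:

```shell
# Placeholder values -- adjust for your provider
export OPENAI_API_KEY="sk-placeholder"
export OPENAI_BASE_URL="http://127.0.0.1:11434/v1"
export OPENAI_MODEL="llama3"
export OPENAI_TIMEOUT_S="120"
# then launch the backend, e.g.: augmentedquill --reload --host 127.0.0.1 --port 8000
```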
- Run tools/enforce_code_hygiene.py . after code changes.
- Run tools/check_copyright.py .
- Keep data/projects/ and data/logs/ names safe by setting AUGQ_USER_DATA_DIR in test runs.
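An isolated test invocation might look like the sketch below, assuming the test suite honors AUGQ_USER_DATA_DIR as described above:

```shell
# Redirect user data to a throwaway directory so tests never touch data/projects/ or data/logs/
export AUGQ_USER_DATA_DIR="$(mktemp -d)"
echo "test data dir: $AUGQ_USER_DATA_DIR"
# then run the suite: python -m pytest
```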
docs/ARCHITECTURE.md
docs/ORGANIZATION.md
CONTRIBUTING.md
LICENSE (GPLv3)
- No multi-user access controls.
- No real-time external editor sync.
AugmentedQuill natively supports multiple languages for both the application interface (GUI) and your story formatting. Use Settings > General > GUI Language to set the application locale; story formatting then follows the configured language automatically.

