# 🚀 Astral-Drafter

<div align="center">

*A lean, local-first drafting tool for creative writers, powered by a custom GUI and a high-performance C++ inference engine.*

[License](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/blob/main/LICENSE)
[Stars](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/stargazers)
[Forks](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/network/members)
[Issues](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/issues)

[Getting Started](#-installation--setup) •
[Tech Stack](#-technology-stack--workflow) •
[Usage](#-usage) •
[Acknowledgements](#-acknowledgements)

</div>

## 📋 Overview

**Astral-Drafter** is a purpose-built, local-first application designed to accelerate the creative writing process. It combines a minimalist web-based GUI with a high-speed, locally run LLM server, providing a powerful and private environment for drafting prose.

This tool was created to overcome the limitations and overhead of generic AI tools, offering a streamlined workflow for writers who need maximum control and performance. The system is designed to handle very large contexts (64k+ tokens), so entire scenes and character notes can be processed for superior narrative consistency.

### Key Features (v0.1)

- **📝 Purpose-Built UI**: A clean, single-page web interface for pasting context, outlines, and character sheets.
- **🚀 High-Speed Generation**: Leverages `llama.cpp` for native performance and GPU acceleration.
- **💾 Auto-Save to File**: Generated prose is automatically saved to a user-specified file path.
- **🔒 100% Local & Private**: No data ever leaves your machine.
- **👆 One-Click Launch**: A simple batch script starts all necessary components.
- **💬 Conversational Editing**: After the initial draft, you can provide follow-up instructions to refine and rewrite the text.

## ✨ Screenshot (v0.1)

<img src="./assets/Astral_Drafter_GUI.png" alt="Screenshot of the Astral-Drafter GUI" width="200">

## ⚙️ Technology Stack & Workflow

This project is built on a lean, high-performance stack, ensuring maximum efficiency by avoiding heavy frameworks and communicating directly with a native inference engine.

- **Inference Engine: `llama.cpp` Server**
  - Runs quantized GGUF models (e.g., Mistral-Nemo at 64k context).
  - Provides near-native speed via C++ and GPU offloading (`--n-gpu-layers`).
  - Exposes an OpenAI-compatible API endpoint for easy integration.

- **Backend Bridge: Custom Python Server (`llama_cpp_server_bridge.py`)**
  - Built with Python 3's native `http.server` for zero external framework bloat.
  - Acts as middleware, receiving requests from the GUI and relaying them to the `llama.cpp` server.
  - Handles all file system operations (creating directories, writing/overwriting scene files).

- **Frontend GUI: Single-File Web App (`gui/astral_nexus_drafter.html`)**
  - Vanilla HTML, CSS, and JavaScript, so no build step is needed.
  - Styled with [Tailwind CSS](https://tailwindcss.com/) for a modern, responsive UI.
  - Communicates directly with the Python bridge server.

- **Launcher: Windows Batch Script (`launch_astral_drafter.bat`)**
  - Provides a "one-click" desktop experience.
  - Automates the startup of both the `llama.cpp` and Python bridge servers, then launches the GUI in the default browser.
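The backend bridge described above follows a simple pattern: a stdlib `http.server` handler that forwards the GUI's request to the `llama.cpp` OpenAI-compatible endpoint, then writes the generated prose to disk. The sketch below is illustrative only, **not** the actual contents of `llama_cpp_server_bridge.py` — the port numbers, JSON field names, and route are assumptions; only the general shape (stdlib HTTP server, OpenAI-style chat payload, file write) comes from the description above.

```python
# Illustrative sketch only -- NOT the real llama_cpp_server_bridge.py.
# Ports, route, and JSON field names here are assumptions for demonstration.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

LLAMA_URL = "http://127.0.0.1:8080/v1/chat/completions"  # assumed llama.cpp port


def build_payload(prompt, context):
    """Assemble an OpenAI-style chat request for the llama.cpp server."""
    return {
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }


def save_draft(path, text):
    """Create any missing parent directories, then overwrite the scene file."""
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(text, encoding="utf-8")


class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = build_payload(body["prompt"], body.get("context", ""))
        req = urllib.request.Request(
            LLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        text = result["choices"][0]["message"]["content"]
        save_draft(body["output_path"], text)  # auto-save to the user's file
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"text": text}).encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8123), BridgeHandler).serve_forever()  # assumed bridge port
```

Because the bridge speaks the OpenAI chat format, any OpenAI-compatible backend could be substituted for `llama.cpp` without changing the GUI.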

## 🚀 Installation & Setup

### Prerequisites

- Windows operating system
- Python 3.8+ installed
- Git for cloning the repository
- A pre-compiled version of `llama.cpp`'s `server.exe`
- A GGUF-formatted LLM file (e.g., Mistral-Nemo)

### Installation

1. **Clone the repository**
   ```bash
   # Replace YOUR_GITHUB_USERNAME with your actual GitHub username
   git clone https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter.git
   cd Astral-Drafter
   ```

2. **Install Python dependencies**
   ```bash
   pip install -r requirements.txt
   ```

3. **Configure the launcher**
   - Open `launch_astral_drafter.bat` in a text editor.
   - Update the placeholder paths at the top of the file to point to your `llama.cpp` directory, your model file, and this project's directory.
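Conceptually, the batch file does no more than start the two servers and open the GUI. The Python sketch below shows that sequence for readers unfamiliar with batch scripting — every path, port, and flag value is a placeholder, not the real script's contents (`--ctx-size` and `--n-gpu-layers` are standard `llama.cpp` server flags, but the exact values are your configuration choices).

```python
# Conceptual equivalent of launch_astral_drafter.bat -- every path and
# flag value below is a placeholder, not the real script's contents.
import subprocess
import webbrowser
from pathlib import Path

LLAMA_DIR = Path(r"C:\tools\llama.cpp")              # placeholder path
MODEL = Path(r"C:\models\your-model.gguf")           # placeholder path
PROJECT = Path(r"C:\projects\Astral-Drafter")        # placeholder path


def build_commands(llama_dir, model, project):
    """Return the two server commands the launcher starts, in order."""
    return [
        # 1) llama.cpp inference server: GPU offload plus a 64k context window
        [str(llama_dir / "server.exe"), "-m", str(model),
         "--ctx-size", "65536", "--n-gpu-layers", "99"],
        # 2) the Python bridge that the GUI talks to
        ["python", str(project / "llama_cpp_server_bridge.py")],
    ]


if __name__ == "__main__":
    for cmd in build_commands(LLAMA_DIR, MODEL, PROJECT):
        subprocess.Popen(cmd)  # fire-and-forget, like `start` in a batch file
    # open the single-file GUI in the default browser
    webbrowser.open((PROJECT / "gui" / "astral_nexus_drafter.html").as_uri())
```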

## 🖱️ Usage

1. Double-click the `launch_astral_drafter.bat` file (or a desktop shortcut pointing to it).
2. Two terminal windows will open for the servers, and the GUI will launch in your browser.
3. In the GUI, paste your context, outline, and character sheets into the text boxes on the left.
4. Specify an absolute file path for the output (e.g., `D:\Novels\scene_03.txt`).
5. Click **"Start Scene"** to generate the first draft.
6. Once generated, use the chat input at the bottom to provide editing instructions. Each new generation will overwrite the file.
7. When finished, click the red **Shutdown** button in the GUI to close both server windows cleanly.

## 🙏 Acknowledgements

This project was built on the foundation of the excellent [mcp-ollama_server](https://github.com/sethuram2003/mcp-ollama_server) by Sethuram. While this project has since been adapted to communicate directly with a `llama.cpp` server, the initial modular concept provided the inspiration.