Commit 160a6f9

Author: Aaron J Clifft
Commit message: hopefully fixed the markdown in readme
1 parent a2eb190 commit 160a6f9

File tree

1 file changed: +69 −64 lines

README.md

Lines changed: 69 additions & 64 deletions
@@ -1,96 +1,101 @@
-# **🚀 Astral-Drafter**
+# 🚀 Astral-Drafter
 
-\<div align="center"\>
+<div align="center">
 
 *A lean, local-first drafting tool for creative writers, powered by a custom GUI and a high-performance C++ inference engine.*
 
-Getting Started •
+[![GitHub license](https://img.shields.io/github/license/YOUR_GITHUB_USERNAME/Astral-Drafter)](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/blob/main/LICENSE)
+[![GitHub stars](https://img.shields.io/github/stars/YOUR_GITHUB_USERNAME/Astral-Drafter?style=social)](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/stargazers)
+[![GitHub forks](https://img.shields.io/github/forks/YOUR_GITHUB_USERNAME/Astral-Drafter?style=social)](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/network/members)
+[![GitHub issues](https://img.shields.io/github/issues/YOUR_GITHUB_USERNAME/Astral-Drafter)](https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter/issues)
 
-Tech Stack •
+[Getting Started](#-installation--setup)
+[Tech Stack](#-technology-stack--workflow)
+[Usage](#-usage)
+[Acknowledgements](#-acknowledgements)
 
-Usage •
+</div>
 
-Acknowledgements
-
-\</div\>
-
-## **📋 Overview**
+## 📋 Overview
 
 **Astral-Drafter** is a purpose-built, local-first application designed to accelerate the creative writing process. It combines a minimalist web-based GUI with a high-speed, locally-run LLM server, providing a powerful and private environment for drafting prose.
 
 This tool was created to overcome the limitations and overhead of generic AI tools, offering a streamlined workflow for writers who need maximum control and performance. The system is designed to handle very large contexts (64k+), allowing for entire scenes and character notes to be processed for superior narrative consistency.
 
-### **Key Features (v0.1)**
+### Key Features (v0.1)
 
-* **📝 Purpose-Built UI**: A clean, single-page web interface for pasting context, outlines, and character sheets.
-* **🚀 High-Speed Generation**: Leverages llama.cpp for native performance and GPU acceleration.
-* **💾 Auto-Save to File**: Generated prose is automatically saved to a user-specified file path.
-* **🔒 100% Local & Private**: No data ever leaves your machine.
-* **👆 One-Click Launch**: A simple batch script starts all necessary components.
-* **💬 Conversational Editing**: After the initial draft, you can provide follow-up instructions to refine and rewrite the text.
+- **📝 Purpose-Built UI**: A clean, single-page web interface for pasting context, outlines, and character sheets.
+- **🚀 High-Speed Generation**: Leverages `llama.cpp` for native performance and GPU acceleration.
+- **💾 Auto-Save to File**: Generated prose is automatically saved to a user-specified file path.
+- **🔒 100% Local & Private**: No data ever leaves your machine.
+- **👆 One-Click Launch**: A simple batch script starts all necessary components.
+- **💬 Conversational Editing**: After the initial draft, you can provide follow-up instructions to refine and rewrite the text.
 
-## **✨ Screenshot (v0.1)**
+## ✨ Screenshot (v0.1)
 
-<img src="./assets/Astral_Drafter_GUI.png" alt="Screen shot of GUI" width="200">
+![Astral Drafter GUI](./assets/Astral_Draft_GUI.png)
 
-## **⚙️ Technology Stack & Workflow**
+## ⚙️ Technology Stack & Workflow
 
 This project is built on a lean, high-performance stack, ensuring maximum efficiency by avoiding heavy frameworks and communicating directly with a native inference engine.
 
-* **Inference Engine: llama.cpp Server**
-  * Runs quantized GGUF models (e.g., Mistral-Nemo @ 64k context).
-  * Provides near-native speed via C++ and GPU offloading (\--n-gpu-layers).
-  * Exposes an OpenAI-compatible API endpoint for easy integration.
-* **Backend Bridge: Custom Python Server (llama\_cpp\_server\_bridge.py)**
-  * Built with Python 3's native http.server for zero external framework bloat.
-  * Acts as middleware, receiving requests from the GUI and communicating with the llama.cpp server.
-  * Handles all file system operations (creating directories, writing/overwriting scene files).
-* **Frontend GUI: Single-File Web App (astral\_nexus\_drafter.html)**
-  * Vanilla HTML, CSS, and JavaScript, ensuring no complex build steps are needed.
-  * Styled with [Tailwind CSS](https://tailwindcss.com/) for a modern, responsive UI.
-  * Communicates directly with the Python bridge server.
-* **Launcher: Windows Batch Script (launch\_astral\_drafter.bat)**
-  * Provides a "one-click" desktop experience.
-  * Automates the startup of both the llama.cpp and Python bridge servers, then launches the GUI in the default browser.
+- **Inference Engine: `llama.cpp` Server**
+  - Runs quantized GGUF models (e.g., Mistral-Nemo @ 64k context).
+  - Provides near-native speed via C++ and GPU offloading (`--n-gpu-layers`).
+  - Exposes an OpenAI-compatible API endpoint for easy integration.
+
+- **Backend Bridge: Custom Python Server (`llama_cpp_server_bridge.py`)**
+  - Built with Python 3's native `http.server` for zero external framework bloat.
+  - Acts as middleware, receiving requests from the GUI and communicating with the `llama.cpp` server.
+  - Handles all file system operations (creating directories, writing/overwriting scene files).
 
-## **🚀 Installation & Setup**
+- **Frontend GUI: Single-File Web App (`gui/astral_nexus_drafter.html`)**
+  - Vanilla HTML, CSS, and JavaScript, ensuring no complex build steps are needed.
+  - Styled with [Tailwind CSS](https://tailwindcss.com/) for a modern, responsive UI.
+  - Communicates directly with the Python bridge server.
 
-### **Prerequisites**
+- **Launcher: Windows Batch Script (`launch_astral_drafter.bat`)**
+  - Provides a "one-click" desktop experience.
+  - Automates the startup of both the `llama.cpp` and Python bridge servers, then launches the GUI in the default browser.
 
-* Windows Operating System
-* Python 3.8+ installed
-* Git for cloning the repository
-* A pre-compiled version of llama.cpp's server.exe.
-* A GGUF-formatted LLM file (e.g., Mistral-Nemo).
+## 🚀 Installation & Setup
 
-### **Installation**
+### Prerequisites
 
-1. **Clone your repository**
+- Windows Operating System
+- Python 3.8+ installed
+- Git for cloning the repository
+- A pre-compiled version of `llama.cpp`'s `server.exe`.
+- A GGUF-formatted LLM file (e.g., Mistral-Nemo).
 
-\# Replace YOUR\_GITHUB\_USERNAME with your actual GitHub username
-git clone \[https://github.com/YOUR\_GITHUB\_USERNAME/Astral-Drafter.git\](https://github.com/YOUR\_GITHUB\_USERNAME/Astral-Drafter.git)
-cd Astral-Drafter
+### Installation
 
-2.
-3. **Install Python Dependencies**
+1. **Clone your repository**
+   ```bash
+   # Replace YOUR_GITHUB_USERNAME with your actual GitHub username
+   git clone https://github.com/YOUR_GITHUB_USERNAME/Astral-Drafter.git
+   cd Astral-Drafter
+   ```
 
-pip install \-r requirements.txt
+2. **Install Python Dependencies**
+   ```bash
+   pip install -r requirements.txt
+   ```
 
-4.
-5. **Configure the Launcher**
-* Open launch\_astral\_drafter.bat in a text editor.
-* Update the placeholder paths at the top of the file to point to your llama.cpp directory, your model file, and this project's directory.
+3. **Configure the Launcher**
+   - Open `launch_astral_drafter.bat` in a text editor.
+   - Update the placeholder paths at the top of the file to point to your `llama.cpp` directory, your model file, and this project's directory.
 
-## **🖱️ Usage**
+## 🖱️ Usage
 
-1. Double-click the launch\_astral\_drafter.bat file (or a desktop shortcut pointing to it).
-2. Two terminal windows will open for the servers, and the GUI will launch in your browser.
-3. In the GUI, paste your context, outline, and character sheets into the text boxes on the left.
-4. Specify an absolute file path for the output (e.g., D:\\Novels\\scene\_03.txt).
-5. Click **"Start Scene"** to generate the first draft.
-6. Once generated, use the chat input at the bottom to provide editing instructions. Each new generation will overwrite the file.
-7. When finished, click the red **Shutdown** button in the GUI to close both server windows cleanly.
+1. Double-click the `launch_astral_drafter.bat` file (or a desktop shortcut pointing to it).
+2. Two terminal windows will open for the servers, and the GUI will launch in your browser.
+3. In the GUI, paste your context, outline, and character sheets into the text boxes on the left.
+4. Specify an absolute file path for the output (e.g., `D:\Novels\scene_03.txt`).
+5. Click **"Start Scene"** to generate the first draft.
+6. Once generated, use the chat input at the bottom to provide editing instructions. Each new generation will overwrite the file.
+7. When finished, click the red **Shutdown** button in the GUI to close both server windows cleanly.
 
-## **🙏 Acknowledgements**
+## 🙏 Acknowledgements
 
-This project was built on the foundation of the excellent [mcp-ollama\_server](https://www.google.com/search?q=https://github.com/sethuram2003/mcp-ollama_server) by Sethuram. While this project has since been adapted to communicate directly with a llama.cpp server, the initial modular concept provided the inspiration.
+This project was built on the foundation of the excellent [mcp-ollama_server](https://github.com/sethuram2003/mcp-ollama_server) by Sethuram. While this project has since been adapted to communicate directly with a `llama.cpp` server, the initial modular concept provided the inspiration.
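The bridge architecture the README describes (a stdlib-only `http.server` middleware that forwards GUI requests to `llama.cpp`'s OpenAI-compatible endpoint and writes the generated prose to disk) can be sketched roughly as below. This is not the repository's actual `llama_cpp_server_bridge.py`; the port, route, and JSON field names (`context`, `instruction`, `output_path`) are illustrative assumptions.

```python
# Minimal sketch of the bridge pattern: stdlib-only middleware between the GUI
# and a llama.cpp server. NOT the project's actual code; field names, the port,
# and the route are assumptions for illustration.
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# llama.cpp's OpenAI-compatible chat endpoint (default port assumed here)
LLAMA_URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_payload(context: str, instruction: str) -> dict:
    """Assemble an OpenAI-style chat payload from the GUI's text boxes."""
    return {
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": instruction},
        ],
        "stream": False,
    }

def save_scene(path: str, text: str) -> None:
    """Create parent directories and overwrite the scene file, as the README describes."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the GUI's JSON request and relay it to the llama.cpp server.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = build_payload(body["context"], body["instruction"])
        req = urllib.request.Request(
            LLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.loads(resp.read())
        prose = reply["choices"][0]["message"]["content"]
        save_scene(body["output_path"], prose)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"saved_to": body["output_path"]}).encode("utf-8"))

if __name__ == "__main__":
    # Requires a running llama.cpp server; bridge port is an assumption.
    # HTTPServer(("127.0.0.1", 8765), BridgeHandler).serve_forever()
    pass
```

Since the GUI runs in a browser, a real bridge would additionally need CORS headers and error handling around the upstream request; those details are omitted here.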
