Example of output for a sample character.
Base Image → Latent Upscale → Inpainting Steps (Face, Skin, Hair, Eyes) → Image Upscale → Remove Background
This project provides a Gradio-based interface for generating consistent character images using ComfyUI. It features a multi-step generation process with specialized controls for character attributes, style transfer, and detail enhancement.
- 🧑‍🎨 Character-focused generation workflow
- 🎨 Style transfer with reference images
- 🧬 Multi-step detail enhancement (face, skin, hair, eyes)
- 🔍 Florence2-based image captioning (for quick prompt generation)
- 🧑‍💼 Integrated character manager for organizing presets and traits
- 🖼️ Option to inpaint existing images instead of generating from base
- ⬆️ Multi-stage upscaling (latent and final)
- ⚙️ Customizable generation steps
- 💾 Persistent UI state across sessions
- 🖼️ Real-time preview during generation
- Better process controller
  - Allow users to build the process themselves rather than just toggling steps on and off.
- Queue generations
  - Set up queueing.
  - Build a UI for the queue.
  - Allow canceling queued workflows (difficult because this interferes with ComfyUI's own queueing).
- More flexible hair inpainting
  - Currently, hair inpainting keeps to the shape set by the base generation; instead, we want to be able to replace the hair completely.
  - The planned solution is to allow inpainting of hair + background, then repaint the parts that belong to the background or body.
- More steps in the process (?)
  - If anyone has suggestions, open an issue or PR!
- Python 3.8+
- ComfyUI installation
- Gradio
- ComfyScript (See https://github.com/Chaoses-Ib/ComfyScript for installation)
- ComfyUI Impact Pack
- ComfyUI Impact Subpack
- ComfyUI Inspire Pack
- ComfyUI InstantID
- ComfyUI Essentials
- ComfyUI LayerStyle
- ComfyUI LayerStyle Advanced
- Comfyroll Studio
- ComfyUI Neural Network Latent Upscale
- ComfyUI ReActor
- ComfyUI Detail Daemon
- ComfyUI Florence2
- Xinsir's ControlNet Union Promax in `ComfyUI/models/controlnet`
- InstantID model in `ComfyUI/models/instantid`
- InstantID ControlNet in `ComfyUI/models/controlnet`
- VIT-H SAM Model in `ComfyUI/models/sams`
- LCM Lora in `ComfyUI/models/loras`
- Any of the Turbo Loras in `ComfyUI/models/loras`
- DPO Turbo Lora in `ComfyUI/models/loras`
> [!NOTE]
> Make sure to place them in the proper folders. You can rename any of these files, but make sure to update the config with the correct names.
- Navigate to your ComfyUI folder and activate your ComfyUI virtual environment:

```bash
cd path/to/comfyui
source .venv/bin/activate
```

or

```bash
cd path/to/comfyui
.venv/Scripts/activate
```

- Clone this repository:

```bash
cd path/to/where/you/want/to/install
git clone https://github.com/Praecordi/comfy-character-app.git
cd comfy-character-app/
```

Note: You don't need to do this in ComfyUI's custom_nodes folder.
- Install dependencies:

```bash
pip install -r requirements.txt
```

- Create a `config.json`, or copy and rename the `config.example.json`:
```json
{
  "comfyui_installation": "/path/to/your/comfyui",
  "comfyui_server_url": "http://127.0.0.1:8188",
  "characters_config": "./characters.json",
  "app_constants": {
    "union_controlnet": "your_controlnet.safetensors",
    "instantid_model": "your_instantid.bin",
    "instantid_controlnet": "your_instantid_controlnet.safetensors",
    "lcm_lora": "your_lcm_lora.safetensors",
    "turbo_lora": "your_turbo_lora.safetensors",
    "dpo_turbo_lora": "your_dpo_turbo_lora.safetensors"
  }
}
```

> [!TIP]
> All the values in `app_constants` are the models' names as they appear in ComfyUI. If your controlnets are in a subdirectory of your controlnet folder, include that subdirectory in the value (e.g. `subdir/your_controlnet.safetensors`). For reference, look at how the name is written in a Load ControlNet Model node, or similarly for the other values.
- Create your character definitions in your `.json` file. See the `character.example.json` file for an example/template.
- Run the application:

```bash
python main.py
```

The JSON file defines all your characters. See `character.example.json` for a template. The required components for each character include:

- `base`: Defining characteristic, e.g. "warrior", "man", "woman", "eldritch monster"
- `face`: Description of the character's face
- `skin`: Description of the character's skin
- `hair`: Description of the character's hair
- `eyes`: Description of the character's eyes
- `face_reference`: Path (or paths) to reference face image(s)
Additional attributes can be added, which can then be used in the prompt template.
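For illustration, a character entry might look like the following sketch (the character name, field values, and exact top-level layout here are hypothetical; `character.example.json` is the authoritative template):

```json
{
  "aria": {
    "base": "woman, elven ranger",
    "face": "sharp features, light freckles",
    "skin": "fair skin with a faint scar on the left cheek",
    "hair": "long silver hair in a loose braid",
    "eyes": "bright green eyes",
    "face_reference": "./references/aria_face.png"
  }
}
```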
The current generation process (each of which can be toggled) includes:
- Base Generation
- Iterative Latent Upscale
- Face Detail Enhancement
- Skin Detail Enhancement
- Hair Detail Enhancement
- Eyes Detail Enhancement
- Image Upscale
- Background Removal
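A toggleable, fixed-order pipeline like the one above can be modeled as a simple ordered filter. This is a minimal sketch; the step names are illustrative placeholders, not the app's actual identifiers:

```python
# Fixed pipeline order; each step can be switched on or off independently.
# Step names are illustrative placeholders, not the app's actual identifiers.
STEPS = [
    "base_generation",
    "iterative_latent_upscale",
    "face_detail",
    "skin_detail",
    "hair_detail",
    "eyes_detail",
    "image_upscale",
    "background_removal",
]

def active_steps(enabled):
    """Return the steps to run, preserving the fixed pipeline order."""
    return [step for step in STEPS if enabled.get(step, True)]

# Example: run everything except eye detailing and background removal.
toggles = {"eyes_detail": False, "background_removal": False}
print(active_steps(toggles))
```

Keeping the order in a single list means toggling a step never reorders the remaining ones.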
> [!NOTE]
> The app currently only works with the SDXL architecture. It also assumes that SDXL checkpoints are placed in a folder called `sdxl/` and Pony checkpoints in a folder called `pony/` in your models folder.
>
> Furthermore, it detects whether the checkpoint is a Lightning, Hyper, or Turbo model from its name: if the checkpoint name contains "Lightning", "Hyper4S" (4-step Hyper models), "Hyper8S" (8-step Hyper models), or "Turbo", it will be handled differently.
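That name-based check can be sketched roughly as follows (the function name and return convention are assumptions for illustration, not the app's actual code):

```python
def detect_fast_variant(checkpoint_name):
    """Return the speed-up variant tag found in a checkpoint name, or None.

    Mirrors the substring-based detection described above; the app's
    actual logic may differ in details.
    """
    for tag in ("Lightning", "Hyper4S", "Hyper8S", "Turbo"):
        if tag in checkpoint_name:
            return tag
    return None

print(detect_fast_variant("sdxl/juggernaut_Lightning_v9.safetensors"))  # Lightning
print(detect_fast_variant("sdxl/plain_model.safetensors"))              # None
```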
> [!WARNING]
> I've only been able to test this on Ubuntu.
The Character Manager tab provides a UI-based editor for managing your character JSON configurations. From this tab, you can:
- View and switch between characters
- Edit prompts and attributes
- Assign or preview reference images
- Save/load configurations persistently
Changes made through the Character Manager are reflected in the current session and saved to the character config file. This dramatically speeds up iteration, especially when balancing multiple characters across a dataset.
Contributions are welcome! Please open an issue or pull request for any improvements!


