Praecordi/comfy-character-app

Character Generation Workflow for ComfyUI with ComfyScript

Output example: generated output for a sample character.

Base Image → Latent Upscale → Inpainting Steps (Face, Skin, Hair, Eyes) → Image Upscale → Remove Background

This project provides a Gradio-based interface for generating consistent character images using ComfyUI. It features a multi-step generation process with specialized controls for character attributes, style transfer, and detail enhancement.

Key Features

UI screenshot: the main UI with live preview pane
Options UI: available generation options
Character Manager UI: the built-in character manager
  • 🧑‍🎨 Character-focused generation workflow
  • 🎨 Style transfer with reference images
  • 🧬 Multi-step detail enhancement (face, skin, hair, eyes)
  • 🔍 Florence2-based image captioning (for quick prompt generation)
  • 🧑‍💼 Integrated character manager for organizing presets and traits
  • 🖼️ Option to inpaint existing images instead of generating from base
  • ⬆️ Multi-stage upscaling (latent and final)
  • ⚙️ Customizable generation steps
  • 💾 Persistent UI state across sessions
  • 🖼️ Real-time preview during generation

To-Do / Upcoming

  • Better process controller
    • Allow users to build the process themselves rather than just toggle steps on/off.
  • Queue generations
    • Set up queueing.
    • Build a UI for the queue.
    • Allow canceling of queued workflows (difficult, because this interferes with ComfyUI's own queueing).
  • More flexible hair inpainting
    • Currently, hair inpainting keeps to the shape set by the base generation; instead, we want to be able to replace the hair completely.
    • A possible solution is to inpaint the hair together with the background, then repaint the parts that belong to the background or body.
  • More steps in process (?)
    • If anyone has suggestions, open an issue or PR!

Requirements

Custom Nodes

Required Models

Note

Make sure to place them in the proper folder. You can rename any of these files, but make sure to update the config with the correct names.

Installation

  1. Navigate to your ComfyUI folder and activate your ComfyUI virtual environment:

```bash
cd path/to/comfyui
source .venv/bin/activate
```

or, on Windows:

```bash
cd path/to/comfyui
.venv/Scripts/activate
```

  2. Clone this repository:

```bash
cd path/to/where/you/want/to/install
git clone https://github.com/Praecordi/comfy-character-app.git
cd comfy-character-app/
```

Note: You don't need to do this in ComfyUI's custom_nodes folder.

  3. Install dependencies:

```bash
pip install -r requirements.txt
```
  4. Create a config.json, or copy and rename config.example.json:

```json
{
  "comfyui_installation": "/path/to/your/comfyui",
  "comfyui_server_url": "http://127.0.0.1:8188",
  "characters_config": "./characters.json",
  "app_constants": {
    "union_controlnet": "your_controlnet.safetensors",
    "instantid_model": "your_instantid.bin",
    "instantid_controlnet": "your_instantid_controlnet.safetensors",
    "lcm_lora": "your_lcm_lora.safetensors",
    "turbo_lora": "your_turbo_lora.safetensors",
    "dpo_turbo_lora": "your_dpo_turbo_lora.safetensors"
  }
}
```
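At startup the app resolves this file against your ComfyUI installation. A minimal sketch of loading and validating it might look like the following (the function name and checks are illustrative, not the app's actual code):

```python
import json
from pathlib import Path

def load_config(path="config.json"):
    """Load the app config and apply a couple of sanity checks.

    Illustrative sketch only: the real app may validate more fields.
    """
    config = json.loads(Path(path).read_text())

    # The ComfyUI installation path must point at an existing directory.
    comfy_root = Path(config["comfyui_installation"])
    if not comfy_root.exists():
        raise FileNotFoundError(f"ComfyUI installation not found: {comfy_root}")

    # characters_config may be a relative path; normalize it.
    config["characters_config"] = str(Path(config["characters_config"]).expanduser())
    return config
```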

Tip

All values in app_constants are the names ComfyUI uses for those files. If your ControlNets are in a subdirectory of your controlnet folder, include that subdirectory in the name. For reference, check how the name appears in a Load ControlNet Model node (or the equivalent loader node for the other values).

  5. Create your character definitions in your .json file. See the character.example.json file for an example/template.

Usage

Run the application

python main.py

Character Configuration

The JSON file defines all your characters. See character.example.json for a template. The required components for each character include:

  • base: The character's defining descriptor, e.g. "warrior", "man", "woman", "eldritch monster"
  • face: Description of character's face
  • skin: Description of character's skin
  • hair: Description of character's hair
  • eyes: Description of character's eyes
  • face_reference: Path (or list of paths) to the reference face image(s)

Additional attributes can be added and then referenced in the prompt template.
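Since any attribute can appear as a placeholder in the prompt template, the substitution can be sketched with plain `str.format_map` (the character entry and the `outfit` attribute below are hypothetical examples, not from character.example.json):

```python
# Hypothetical character entry with an extra custom "outfit" attribute.
character = {
    "base": "woman",
    "face": "sharp jawline, soft smile",
    "skin": "pale skin",
    "hair": "long silver hair",
    "eyes": "violet eyes",
    "outfit": "ornate silver armor",  # additional attribute
}

# A prompt template can reference any attribute by name;
# unused attributes (like "skin" here) are simply ignored.
template = "photo of a {base}, {face}, {hair}, {eyes}, wearing {outfit}"
prompt = template.format_map(character)
print(prompt)
```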

Workflow Steps

The generation process includes the following steps, each of which can be toggled:

  1. Base Generation
  2. Iterative Latent Upscale
  3. Face Detail Enhancement
  4. Skin Detail Enhancement
  5. Hair Detail Enhancement
  6. Eyes Detail Enhancement
  7. Image Upscale
  8. Background Removal
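Conceptually, the toggleable steps form a simple ordered pipeline. The sketch below is illustrative only (the real pipeline is driven by ComfyScript nodes, and these step names are placeholders):

```python
# Fixed step order, matching the list above.
STEPS = [
    "base_generation",
    "latent_upscale",
    "face_detail",
    "skin_detail",
    "hair_detail",
    "eyes_detail",
    "image_upscale",
    "background_removal",
]

def run_pipeline(image, enabled, steps_impl):
    """Run each enabled step in order, threading the image through.

    enabled: dict of step name -> bool toggle
    steps_impl: dict of step name -> callable taking and returning an image
    """
    for name in STEPS:
        if enabled.get(name, False):
            image = steps_impl[name](image)
    return image
```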

Note

The app currently works only with the SDXL architecture. It also assumes that SDXL checkpoints are placed in a folder called sdxl/ and Pony checkpoints in a folder called pony/ inside your models folder.

Furthermore, it detects whether a checkpoint is a Lightning, Hyper, or Turbo model from its filename: if the name contains "Lightning", "Hyper4S" (4-step Hyper models), "Hyper8S" (8-step Hyper models), or "Turbo", it is handled accordingly.
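The name-based detection described above can be sketched as follows (the returned labels are illustrative; the app's internal handling may differ):

```python
def detect_checkpoint_type(name: str) -> str:
    """Guess the sampling preset from the checkpoint filename,
    mirroring the substring matching described in the note above."""
    if "Lightning" in name:
        return "lightning"
    if "Hyper4S" in name:
        return "hyper-4step"
    if "Hyper8S" in name:
        return "hyper-8step"
    if "Turbo" in name:
        return "turbo"
    return "standard"
```

Note that the match is case-sensitive and purely substring-based, so checkpoint files must keep these markers in their names (or the config must be updated) for special handling to apply.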

Warning

I've only been able to test this on Ubuntu.

Character Manager

The Character Manager tab provides a UI-based editor for managing your character JSON configurations. From this tab, you can:

  • View and switch between characters
  • Edit prompts and attributes
  • Assign or preview reference images
  • Save/load configurations persistently

Changes made through the Character Manager are reflected in the current session and saved to the character config file. This dramatically speeds up iteration, especially when balancing multiple characters across a dataset.

Contributing

Contributions are welcome! Please open an issue or pull request for any improvements!

About

A ComfyUI and ComfyScript Gradio-based app for generating characters using a multi-step process.
