Z-Image

Z-Image is an image generation model trained and open-sourced by the Multimodal Interaction Team of Alibaba Tongyi Lab.

Installation

Before using this project for model inference and training, please install DiffSynth-Studio first.

git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .

For more information about installation, please refer to Install Dependencies.

Quick Start

Run the following code to quickly load the Tongyi-MAI/Z-Image-Turbo model and perform inference. FP8 quantization causes noticeable image-quality degradation, so we do not recommend enabling any quantization for the Z-Image-Turbo model; only CPU offload is recommended. A minimum of 8 GB of VRAM is required.

from diffsynth.pipelines.z_image import ZImagePipeline, ModelConfig
import torch

vram_config = {
    "offload_dtype": torch.bfloat16,
    "offload_device": "cpu",
    "onload_dtype": torch.bfloat16,
    "onload_device": "cpu",
    "preparing_dtype": torch.bfloat16,
    "preparing_device": "cuda",
    "computation_dtype": torch.bfloat16,
    "computation_device": "cuda",
}
pipe = ZImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Tongyi-MAI/Z-Image-Turbo", origin_file_pattern="transformer/*.safetensors", **vram_config),
        ModelConfig(model_id="Tongyi-MAI/Z-Image-Turbo", origin_file_pattern="text_encoder/*.safetensors", **vram_config),
        ModelConfig(model_id="Tongyi-MAI/Z-Image-Turbo", origin_file_pattern="vae/diffusion_pytorch_model.safetensors", **vram_config),
    ],
    tokenizer_config=ModelConfig(model_id="Tongyi-MAI/Z-Image-Turbo", origin_file_pattern="tokenizer/"),
    vram_limit=torch.cuda.mem_get_info("cuda")[1] / (1024 ** 3) - 0.5,
)
prompt = "Young Chinese woman in red Hanfu, intricate embroidery. Impeccable makeup, red floral forehead pattern. Elaborate high bun, golden phoenix headdress, red flowers, beads. Holds round folding fan with lady, trees, bird. Neon lightning-bolt lamp (⚡️), bright yellow glow, above extended left palm. Soft-lit outdoor night background, silhouetted tiered pagoda (西安大雁塔), blurred colorful distant lights."
image = pipe(prompt=prompt, seed=42, rand_device="cuda")
image.save("image.jpg")
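The `vram_limit` argument above is simply the device's total memory in GB minus 0.5 GB of headroom (`torch.cuda.mem_get_info` returns a `(free, total)` tuple in bytes). As a standalone sketch of that arithmetic:

```python
def vram_limit_gb(total_bytes: int, headroom_gb: float = 0.5) -> float:
    # Convert total device memory from bytes to GB and reserve some headroom,
    # mirroring the vram_limit expression in the snippet above.
    return total_bytes / (1024 ** 3) - headroom_gb

# e.g. on a 24 GB GPU:
vram_limit_gb(24 * 1024 ** 3)  # -> 23.5
```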

Model Overview

| Model ID | Inference | Low VRAM Inference | Full Training | Validation After Full Training | LoRA Training | Validation After LoRA Training |
|---|---|---|---|---|---|---|
| Tongyi-MAI/Z-Image | code | code | code | code | code | code |
| DiffSynth-Studio/Z-Image-i2L | code | code | - | - | - | - |
| Tongyi-MAI/Z-Image-Turbo | code | code | code | code | code | code |
| PAI/Z-Image-Turbo-Fun-Controlnet-Union-2.1 | code | code | code | code | code | code |
| PAI/Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps | code | code | code | code | code | code |
| PAI/Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps | code | code | code | code | code | code |

Special Training Scripts:

  • Differential LoRA Training: doc, code
  • Trajectory Imitation Distillation Training (Experimental Feature): code

Model Inference

Models are loaded via ZImagePipeline.from_pretrained, see Loading Models.

Input parameters for ZImagePipeline inference include:

  • prompt: Prompt describing the content appearing in the image.
  • negative_prompt: Negative prompt describing content that should not appear in the image, default value is "".
  • cfg_scale: Classifier-free guidance parameter, default value is 1.
  • input_image: Input image for image-to-image generation, used in conjunction with denoising_strength.
  • denoising_strength: Denoising strength, in the range 0 to 1, default 1. Values near 0 keep the generated image close to the input image; values near 1 allow it to differ more. Do not set this to a value other than 1 unless input_image is provided.
  • height: Image height, must be a multiple of 16.
  • width: Image width, must be a multiple of 16.
  • seed: Random seed. Default is None, meaning completely random.
  • rand_device: Device on which the random Gaussian noise tensor is generated, default "cpu". When set to "cuda", different GPU models may produce different generation results.
  • num_inference_steps: Number of inference steps, default value is 8.
  • controlnet_inputs: Inputs for ControlNet models.
  • edit_image: Edit images for image editing models, supporting multiple images.
  • positive_only_lora: LoRA weights used only in positive prompts.
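Since height and width must be multiples of 16, a small helper (an illustration, not part of the pipeline API) can snap a requested resolution before calling the pipeline:

```python
def snap_to_16(value: int) -> int:
    # Round to the nearest multiple of 16, with a floor of 16,
    # to satisfy the height/width requirement above.
    return max(16, round(value / 16) * 16)

height, width = snap_to_16(1080), snap_to_16(768)  # -> 1088, 768
```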

If VRAM is insufficient, please enable VRAM Management. We provide recommended low VRAM configurations for each model in the example code, see the table in the "Model Overview" section above.

Model Training

Z-Image series models are all trained through a unified script, examples/z_image/model_training/train.py. Its parameters include:

  • General Training Parameters
    • Dataset Basic Configuration
      • --dataset_base_path: Root directory of the dataset.
      • --dataset_metadata_path: Metadata file path of the dataset.
      • --dataset_repeat: Number of times the dataset is repeated in each epoch.
      • --dataset_num_workers: Number of processes for each DataLoader.
      • --data_file_keys: Field names to be loaded from metadata, usually image or video file paths, separated by ,.
    • Model Loading Configuration
      • --model_paths: Paths of models to be loaded. JSON format.
      • --model_id_with_origin_paths: Model IDs with original paths, e.g., "Tongyi-MAI/Z-Image-Turbo:transformer/*.safetensors". Separated by commas.
      • --extra_inputs: Extra input parameters required by the model Pipeline, e.g., extra parameters when training image editing models, separated by ,.
      • --fp8_models: Models loaded in FP8 format, consistent with --model_paths or --model_id_with_origin_paths format. Currently only supports models whose parameters are not updated by gradients (no gradient backpropagation, or gradients only update their LoRA).
    • Training Basic Configuration
      • --learning_rate: Learning rate.
      • --num_epochs: Number of epochs.
      • --trainable_models: Trainable models, e.g., dit, vae, text_encoder.
      • --find_unused_parameters: Whether to allow unused parameters during DDP training. Some models contain redundant parameters that do not participate in gradient computation; enable this setting to avoid errors in multi-GPU training.
      • --weight_decay: Weight decay size, see torch.optim.AdamW.
      • --task: Training task, default is sft. Some models support more training modes, please refer to the documentation of each specific model.
    • Output Configuration
      • --output_path: Model saving path.
      • --remove_prefix_in_ckpt: Prefix to remove from the keys of the saved model's state dict.
      • --save_steps: Interval of training steps to save the model. If this parameter is left blank, the model is saved once per epoch.
    • LoRA Configuration
      • --lora_base_model: Which model to add LoRA to.
      • --lora_target_modules: Which layers to add LoRA to.
      • --lora_rank: Rank of LoRA.
      • --lora_checkpoint: Path of the LoRA checkpoint. If this path is provided, LoRA will be loaded from this checkpoint.
      • --preset_lora_path: Preset LoRA checkpoint path. If this path is provided, this LoRA will be loaded in the form of being merged into the base model. This parameter is used for LoRA differential training.
      • --preset_lora_model: Model that the preset LoRA is merged into, e.g., dit.
    • Gradient Configuration
      • --use_gradient_checkpointing: Whether to enable gradient checkpointing.
      • --use_gradient_checkpointing_offload: Whether to offload gradient-checkpointing activations to CPU memory.
      • --gradient_accumulation_steps: Number of gradient accumulation steps.
    • Image Width/Height Configuration (Applicable to Image Generation and Video Generation Models)
      • --height: Height of image or video. Leave height and width blank to enable dynamic resolution.
      • --width: Width of image or video. Leave height and width blank to enable dynamic resolution.
      • --max_pixels: Maximum pixel area of image or video frames. When dynamic resolution is enabled, images with resolution larger than this value will be downscaled, and images with resolution smaller than this value will remain unchanged.
  • Z-Image Specific Parameters
    • --tokenizer_path: Path of the tokenizer, applicable to text-to-image models, leave blank to automatically download from remote.

We have built a sample image dataset for your testing. You can download this dataset with the following command:

modelscope download --dataset DiffSynth-Studio/example_image_dataset --local_dir ./data/example_image_dataset
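If you later build your own dataset in the same layout, the metadata file simply pairs each image path with a prompt. A minimal sketch, assuming a `metadata.csv` with `image` and `prompt` columns (check the downloaded example dataset for the exact field names):

```python
import csv
import os

# Hypothetical dataset layout: images live under the dataset root,
# and metadata.csv maps each image file to its caption.
os.makedirs("data/my_image_dataset", exist_ok=True)
rows = [
    {"image": "cat.jpg", "prompt": "a cat sitting on a windowsill"},
    {"image": "lake.jpg", "prompt": "a mountain lake at sunrise"},
]
with open("data/my_image_dataset/metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "prompt"])
    writer.writeheader()
    writer.writerows(rows)
```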

We have written recommended training scripts for each model, please refer to the table in the "Model Overview" section above. For how to write model training scripts, please refer to Model Training; for more advanced training algorithms, please refer to Training Framework Detailed Explanation.

Training Tips:

  • Tongyi-MAI/Z-Image-Turbo is a distillation-accelerated model, so training it directly will quickly destroy its acceleration capability: inference with the "acceleration configuration" (num_inference_steps=8, cfg_scale=1) degrades, while inference with the "no acceleration configuration" (num_inference_steps=30, cfg_scale=2) improves. The following training and inference schemes can be adopted:
    • Standard SFT Training (code) + No Acceleration Configuration Inference
    • Differential LoRA Training (code) + Acceleration Configuration Inference
    • Standard SFT Training (code) + Trajectory Imitation Distillation Training (code) + Acceleration Configuration Inference
    • Standard SFT Training (code) + Load Distillation Acceleration LoRA During Inference (model) + Acceleration Configuration Inference
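The two inference configurations referenced in these schemes can be written down as keyword sets and passed directly to the pipeline call from the Quick Start section (e.g. `pipe(prompt=prompt, seed=42, **NO_ACCELERATION)`):

```python
# Acceleration configuration: few steps, no classifier-free guidance.
ACCELERATION = {"num_inference_steps": 8, "cfg_scale": 1}
# No acceleration configuration: more steps with guidance, for models that
# have lost their acceleration capability after direct SFT training.
NO_ACCELERATION = {"num_inference_steps": 30, "cfg_scale": 2}
```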