Commit f6e4e92 (1 parent: 2857a43)

get vae in fp32 when using wan.

File tree

2 files changed: +10 −1 lines changed


README.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -14,6 +14,8 @@ But it's been growing now! Check out the rest of the README to know more 🤗
 
 **Updates**
 
+🔥 04/03/2025: Support for LTX-Video and Wan in [this PR](https://github.com/sayakpaul/tt-scale-flux/pull/18) 🎬 Check out this section for results and more info.
+
 🔥 01/03/2025: `OpenAIVerifier` was added in [this PR](https://github.com/sayakpaul/tt-scale-flux/pull/16). Specify "openai" in the `name` under `verifier_args`. Thanks to [zhuole1025](https://github.com/zhuole1025) for contributing this!
 
 🔥 27/02/2025: [MaximClouser](https://github.com/MaximClouser) implemented a ComfyUI node for inference-time
```

main.py

Lines changed: 8 additions & 1 deletion

```diff
@@ -221,7 +221,14 @@ def main():
 
     # === Set up the image-generation pipeline ===
     torch_dtype = TORCH_DTYPE_MAP[config.pop("torch_dtype")]
-    pipe = DiffusionPipeline.from_pretrained(pipeline_name, torch_dtype=torch_dtype)
+    fp_kwargs = {"pretrained_model_name_or_path": pipeline_name, "torch_dtype": torch_dtype}
+    if "Wan" in pipeline_name:
+        # As per recommendations from https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan.
+        from diffusers import AutoencoderKLWan
+
+        vae = AutoencoderKLWan.from_pretrained(pipeline_name, subfolder="vae", torch_dtype=torch.float32)
+        fp_kwargs.update({"vae": vae})
+    pipe = DiffusionPipeline.from_pretrained(**fp_kwargs)
     if not config.get("use_low_gpu_vram", False):
         pipe = pipe.to("cuda:0")
     pipe.set_progress_bar_config(disable=True)
```
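The branching above can be sketched as a small pure-Python helper. This is a hedged illustration, not the repo's code: the helper name `build_pipeline_kwargs` is hypothetical, and the actual fp32 VAE load (`AutoencoderKLWan.from_pretrained(..., torch_dtype=torch.float32)`) is stubbed with a placeholder value so the sketch runs without `diffusers` or `torch` installed.

```python
def build_pipeline_kwargs(pipeline_name: str, torch_dtype: str, wan_vae=None) -> dict:
    """Collect DiffusionPipeline.from_pretrained kwargs in a dict, as in the commit.

    For Wan pipelines, a separately loaded fp32 VAE is injected under the
    "vae" key; other pipelines load every component in the requested dtype.
    """
    kwargs = {"pretrained_model_name_or_path": pipeline_name, "torch_dtype": torch_dtype}
    if "Wan" in pipeline_name:
        # In main.py this value comes from:
        #   AutoencoderKLWan.from_pretrained(pipeline_name, subfolder="vae",
        #                                    torch_dtype=torch.float32)
        kwargs["vae"] = wan_vae
    return kwargs


# A Wan checkpoint gets the extra "vae" entry; a non-Wan checkpoint does not.
wan_kwargs = build_pipeline_kwargs("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", "bfloat16", wan_vae="fp32-vae-placeholder")
flux_kwargs = build_pipeline_kwargs("black-forest-labs/FLUX.1-dev", "bfloat16")
print("vae" in wan_kwargs, "vae" in flux_kwargs)  # → True False
```

Passing the dict via `DiffusionPipeline.from_pretrained(**kwargs)` keeps the call site unchanged whether or not the override is present, which is why the commit switches from positional arguments to a kwargs dict.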
