Inquiry about support/memory usage when LoRA fine-tuning Wan2.1-14B #1442
Hi, nice work! I want to use multiple GPUs or multiple nodes for Wan2.1-14B LoRA training, and I would like to confirm SimpleTuner's support for multi-GPU/multi-node LoRA training of Wan, as well as the expected memory consumption. This would be very helpful. Many thanks!

Replies: 1 comment

OK, we have FSDP2, DeepSpeed, musubi block swap, RamTorch, and Diffusers' group offload available. Some of these are compatible with torch inductor as well.
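For a sense of what the group offload option buys you, here is a minimal sketch of Diffusers' group offloading applied to the Wan transformer on its own; the Hub repo id and offload settings are assumptions, not SimpleTuner configuration, so check the diffusers docs for your installed version:

```python
# A minimal sketch (not SimpleTuner's internals) of Diffusers' group
# offloading applied directly to the Wan 2.1 transformer.
import torch
from diffusers import WanTransformer3DModel
from diffusers.hooks import apply_group_offloading

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers",  # assumed Diffusers-format repo id
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Keep only a small group of transformer blocks on the GPU at a time;
# the rest sit in CPU RAM and are streamed in on demand.
apply_group_offloading(
    transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=2,
    use_stream=True,  # overlap host-to-device copies with compute
)
```

This trades GPU VRAM for CPU RAM and PCIe bandwidth, which is why it pairs well with LoRA: only the small adapter weights need to stay resident for the optimizer.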
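For the multi-GPU side of the question, this is roughly the FSDP2 pattern that sharding frameworks apply under the hood. The loader helper and the `blocks` attribute are assumptions for illustration, and it assumes PyTorch >= 2.6; SimpleTuner wires this up for you:

```python
# A rough sketch of the FSDP2 (fully_shard) pattern that distributes the
# 14B transformer's parameters, gradients, and optimizer state across GPUs.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = load_wan_transformer()  # hypothetical helper returning the 14B model

# Shard each transformer block, then the root module, so each rank holds
# only its slice of the weights outside of forward/backward.
for block in model.blocks:
    fully_shard(block)
fully_shard(model)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

For multi-node runs, the same script is launched once per machine with a distributed launcher such as `torchrun`, which sets up the process group across nodes.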