Replies: 2 comments 1 reply
Additionally, I'm experiencing instability in the step loss during training. The loss fluctuates significantly, and newer checkpoints often perform worse than earlier ones, which makes it difficult to identify the best model without manually testing every version. Do you have any recommendations for stabilizing training and reducing these fluctuations? Thanks in advance.
You're likely resuming training and trying to change the learning rate, but the optimizer state resumes with the learning rate from the previous run. Nothing in the codebase actually prevents a lower learning rate from being used; I think it's just the resume that blocks it.
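To illustrate the mechanism: this is standard PyTorch behavior, not specific to SimpleTuner's code paths. `Optimizer.load_state_dict` restores the saved `param_groups`, including their `lr`, so a new value passed at construction time is overwritten by the checkpoint. A minimal sketch (the parameter and optimizer here are hypothetical stand-ins, and the manual override at the end is a generic workaround, not a documented SimpleTuner option):

```python
import torch

# A stand-in parameter and optimizer, mimicking a training run saved at lr=1e-4.
param = torch.nn.Parameter(torch.zeros(1))
original = torch.optim.AdamW([param], lr=1e-4)
checkpoint = original.state_dict()  # carries lr=1e-4 inside param_groups

# Simulate resuming with a new config value of 5e-5...
resumed = torch.optim.AdamW([param], lr=5e-5)
resumed.load_state_dict(checkpoint)  # ...which the checkpoint silently overwrites
print(resumed.param_groups[0]["lr"])  # 0.0001 — the old learning rate wins

# Generic workaround: override lr in each param group *after* loading the state.
for group in resumed.param_groups:
    group["lr"] = 5e-5
print(resumed.param_groups[0]["lr"])  # 5e-05
```

This also explains why the UI shows 1.00e-4 despite the config saying 5e-5: the displayed value comes from the restored optimizer state, not the config file.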
Hello,
I'm trying to train a LoRA with a lower learning rate to improve training quality, but SimpleTuner seems to ignore my configuration.
Even when I set a value like 5e-5, the backend still trains at 1e-4.
Everything looks correct in the intermediate configs, but in the UI the learning rate is clearly displayed as 1.00e-4.
Is something overriding the value, or am I missing a parameter?
Latest version on the 'Release' branch (v4.1.2), training a LoRA for Z Image.

