9 changes: 7 additions & 2 deletions paddlenlp/trainer/training_args.py
@@ -272,6 +272,7 @@
enable_stage1_allgather_overlap, overlap stage1 V2 allgather with the next step's forward computation. There are some constraints on the overlap: logging_step should be larger than 1 so that allgather can overlap with forward compute, and no other sync may be called during training.
disable_stage1_reduce_avg, replace reduce_avg with the original reduce_sum+scale in stage1, which can be used for accuracy verification.
enable_release_grads, reduce peak memory usage by releasing gradients after each iteration. The creation of gradients will be postponed until the backward pass of the next iteration.
enable_fuse_optimizer_states, fuse optimizer states into a single storage.
recompute (`bool`, *optional*, defaults to `False`):
Recompute the forward pass to calculate gradients. Used for saving memory.
Only supported for networks with transformer blocks.
@@ -1288,10 +1289,11 @@
"enable_stage1_broadcast_overlap",
"enable_stage1_allgather_overlap",
"enable_release_grads",
"enable_fuse_optimizer_states",
]:
raise ValueError(
f"Found unknown pipeline mode config {x}, "
f"accpet config is enable_stage1_tensor_fusion, enable_stage1_overlap, enable_stage2_overlap, split_param, disable_stage1_reduce_avg, enable_stage1_broadcast_overlap, enable_stage1_allgather_overlap."
f"Found unknown sharding mode config {x}, "
f"accpet config is enable_stage1_tensor_fusion, enable_stage1_overlap, enable_stage2_overlap, split_param, disable_stage1_reduce_avg, enable_stage1_broadcast_overlap, enable_stage1_allgather_overlap, enable_release_grads, enable_fuse_optimizer_states."
)
if "disable_stage1_reduce_avg" in sharding_parallel_config:
assert self.sharding == [
@@ -1316,6 +1318,9 @@
if "enable_release_grads" in sharding_parallel_config:
strategy.hybrid_configs["sharding_configs"].release_gradients = True

if "enable_fuse_optimizer_states" in sharding_parallel_config:
strategy.hybrid_configs["sharding_configs"].enable_fuse_optimizer_states = True

Codecov / codecov/patch: added lines #L1321 - L1322 in paddlenlp/trainer/training_args.py were not covered by tests.

if self.pipeline_parallel_degree == 1:
strategy.hybrid_configs["sharding_configs"].tensor_fusion = (
True if "enable_stage1_tensor_fusion" in sharding_parallel_config else False
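For context, a minimal usage sketch (not part of this diff, and hedged accordingly): the new enable_fuse_optimizer_states option is passed through the existing space-separated sharding_parallel_config string on TrainingArguments, the class this file defines. The output directory and the particular flag combination below are illustrative assumptions.

# Hedged sketch: enabling the new option alongside existing stage1 configs.
# The flag names come from the accepted-config list above; the output path
# and the pairing of flags are assumptions for illustration only.
from paddlenlp.trainer import TrainingArguments

args = TrainingArguments(
    output_dir="./checkpoints",  # illustrative output path
    sharding="stage1",           # sharding stage these options target (assumption)
    sharding_parallel_config=(
        "enable_stage1_tensor_fusion "
        "enable_release_grads "
        "enable_fuse_optimizer_states"
    ),
)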