[diffusion] fix: respect dist_timeout option #18386

Merged
mickqian merged 3 commits into main from diffusion-refactor on Feb 7, 2026

Conversation

@mickqian (Collaborator) commented Feb 7, 2026

fixes #15792

Motivation

Respect --dist-timeout and set the default timeout to 3600 seconds

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

mickqian requested a review from yhyang201 as a code owner on February 7, 2026 01:45
github-actions bot added the diffusion (SGLang Diffusion) label on Feb 7, 2026
@gemini-code-assist (Contributor)

Summary of Changes

Hello @mickqian, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness of distributed operations by allowing users to configure the timeout for torch.distributed processes. It integrates a new dist_timeout option across the distributed environment initialization, ensuring that the specified timeout is respected. A default timeout of one hour is now applied, which should help prevent premature disconnections in idle distributed setups, and the command-line interface documentation has been updated for clarity.

Highlights

  • Configurable Distributed Timeout: Introduced a new dist_timeout parameter to allow explicit configuration of the timeout for torch.distributed operations.
  • Default Timeout Value: Set a default dist_timeout of 3600 seconds (1 hour) in ServerArgs to mitigate 'Connection closed by peer' errors during periods of inactivity.
  • Improved Documentation: Updated the help message for the --dist-timeout CLI argument to provide clearer guidance on its purpose and when to adjust it.
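
For concreteness, the following is a minimal sketch of how the default value and CLI flag described in these highlights could fit together. The dist_timeout field, its 3600-second default, and the --dist-timeout flag come from this PR; the surrounding ServerArgs structure is simplified and is not the actual sglang code.

```python
import argparse
from dataclasses import dataclass
from typing import Optional


@dataclass
class ServerArgs:
    # Timeout (in seconds) for torch.distributed operations. Defaults to
    # 3600 s (1 hour) so an idle deployment does not hit
    # "Connection closed by peer" errors.
    dist_timeout: Optional[int] = 3600

    @staticmethod
    def add_cli_args(parser: argparse.ArgumentParser) -> None:
        parser.add_argument(
            "--dist-timeout",
            type=int,
            default=ServerArgs.dist_timeout,
            help="Set timeout for torch.distributed, in seconds. Increase it "
            "if the service can be idle for long periods; defaults to 3600 "
            "(1 hour).",
        )
```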

Changelog
  • python/sglang/multimodal_gen/runtime/distributed/parallel_state.py
    • Added a timeout parameter to init_distributed_environment to configure the torch.distributed.init_process_group timeout.
    • Modified maybe_init_distributed_environment_and_model_parallel to accept and pass a dist_timeout parameter to the underlying distributed initialization function.
    • Updated logging statements to include the timeout value for better debugging.
  • python/sglang/multimodal_gen/runtime/managers/gpu_worker.py
    • Modified init_device_and_model to pass the self.server_args.dist_timeout to the distributed environment initialization.
  • python/sglang/multimodal_gen/runtime/server_args.py
    • Changed the default value of dist_timeout from None to 3600 seconds (1 hour).
    • Updated the help message for the --dist-timeout CLI argument to explain its purpose in preventing 'Connection closed by peer' errors and its default value.
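
The changelog above can be summarized as a simplified sketch of the data flow, from the worker's server arguments down to torch.distributed.init_process_group. The names init_distributed_environment, timeout, and dist_timeout follow the changelog entries; the signatures and the commented call site are illustrative rather than copies of parallel_state.py or gpu_worker.py.

```python
import datetime
from typing import Optional

import torch.distributed as dist


def init_distributed_environment(
    world_size: int,
    rank: int,
    distributed_init_method: str,
    backend: str = "nccl",
    timeout: Optional[int] = None,
) -> None:
    # torch.distributed expects the timeout as a datetime.timedelta; only
    # pass it when a value is configured so torch's own default applies
    # otherwise.
    kwargs = {}
    if timeout is not None:
        kwargs["timeout"] = datetime.timedelta(seconds=timeout)
    dist.init_process_group(
        backend=backend,
        init_method=distributed_init_method,
        world_size=world_size,
        rank=rank,
        **kwargs,
    )


# Illustrative call site in the GPU worker: the value configured via
# --dist-timeout (server_args.dist_timeout, default 3600 s) would be
# forwarded to the initialization above, e.g.
# init_distributed_environment(world_size, rank, init_method,
#                              timeout=server_args.dist_timeout)
```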

@mickqian (Collaborator, Author) commented Feb 7, 2026

/tag-and-rerun-ci

github-actions bot added the run-ci label on Feb 7, 2026
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces a dist_timeout option to configure the timeout for torch.distributed operations. This is a useful feature to prevent 'Connection closed by peer' errors in services that might be idle for long periods. The changes are well-implemented, propagating the new parameter from server arguments down to the distributed initialization. I have one minor suggestion regarding code style to improve maintainability.

Comment on lines +257 to +258
import datetime

Severity: medium

According to PEP 8, imports should be at the top of the file. For better code style and to avoid repeated imports, it's recommended to move this import datetime statement to the top of the file with other imports. You can remove these lines and add import datetime at the top of the file.

References
  1. PEP 8 recommends that imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. This improves readability and makes it easier to track dependencies. (link)
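
For illustration, a minimal sketch of the suggested structure, with import datetime hoisted to the module level instead of being imported inside the function body (the helper name below is hypothetical and only demonstrates the placement):

```python
# Module-level import at the top of the file, alongside the other imports,
# rather than a local "import datetime" inside the function body.
import datetime
from typing import Optional


def _seconds_to_timedelta(timeout_s: Optional[int]) -> Optional[datetime.timedelta]:
    # Uses the module-level import; no repeated per-call import needed.
    return datetime.timedelta(seconds=timeout_s) if timeout_s is not None else None
```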

mickqian mentioned this pull request on Feb 7, 2026
mickqian merged commit 31d4cd2 into main on Feb 7, 2026
63 checks passed
mickqian deleted the diffusion-refactor branch on February 7, 2026 12:56
charlesHsuGG pushed a commit to charlesHsuGG/sglang that referenced this pull request Feb 9, 2026
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
1StepForever pushed a commit to 1StepForever/sglang that referenced this pull request Feb 26, 2026
* www/pr/ks: (265 commits)
  [BugFix][PD]Fix metadata_buffer_index leak when aborted in PD (sgl-project#17483)
  Refactoring Mooncake TE as a shared distributed component (sgl-project#17810)
  [ModelOPT] Support Qwen 3 Next Coder NVFP4 (sgl-project#18224)
  Update author information in pyproject.toml (sgl-project#18453)
  [Kimi-K2.5] Fix missing `quant_config` in `KimiK25` (sgl-project#18440)
  Add tensor parallelism support to LFM2 ShortConv layers (sgl-project#17777)
  [diffusion] chore: revise process title (sgl-project#18446)
  Fix TRT-LLM MLA backend applying k_scale to BF16 KV cache in BMM1 (sgl-project#18396)
  [diffusion] refactor: group component loaders under the component_loaders/ directory (sgl-project#18438)
  [ModelOpt] Fix broken Qwen3-235B-A22B-Instruct-2507-NVFP4 launch (sgl-project#18189)
  [diffusion] feat: support efficient sequence shard (sgl-project#18161)
  [CI] fix: notebook ci may not working (sgl-project#18417)
  fix: sync server_args.kv_cache_dtype when detecting FP8 KV cache (sgl-project#18394)
  [Fix] Fix backend selection after flashinfer version update (sgl-project#18364)
  [diffusion] platform: support WAN/FLUX/Qwen-Image/Qwen-Image-edit on Ascend (sgl-project#13662)
  fix: fix NVFP4 Kimi-K2.5 weight mapping and exclude list (sgl-project#18370)
  [diffusion] feat: support saving videos directly on the server to avoid the overhead of tensor transfer (sgl-project#18253)
  [diffusion] fix: respect dist_timeout option (sgl-project#18386)
  [Doc] Fix outdated `--fp4-gemm-backend` documentation (sgl-project#18350)
  [diffusion] fix: remove unnecessary norm_type argument from GLM-Image dits (sgl-project#18382)
  ...

Labels

diffusion (SGLang Diffusion), run-ci

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] Connection closed by peer

1 participant