
[diffusion][MUSA] fix: MUSA platform breakage caused by PR #13662 #18456

Merged
Kangyan-Zhou merged 2 commits into sgl-project:main from yeahdongcn:xd/fix
Feb 14, 2026

Conversation

@yeahdongcn
Collaborator

Motivation

PR #13662 introduced a regression that broke MUSA platform functionality. This PR fixes the issue and restores MUSA support.

Modifications

Align the MUSA implementation with the existing CUDA platform behavior: add a get_local_torch_device class method and resolve the device ID from the distributed rank in get_available_gpu_memory, as sketched below.
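For reviewers, a minimal sketch of the alignment, assuming MUSA mirrors the CUDA platform's LOCAL_RANK-based local-device selection; the import path and surrounding class body are illustrative, not the exact diff:

```python
# Hypothetical sketch: only get_local_torch_device and the LOCAL_RANK-based
# selection come from this PR; the import path and class body are assumed.
import torch

from sglang.multimodal_gen.runtime import envs  # assumed import path


class MusaPlatformBase:
    device_type = "musa"

    @classmethod
    def get_local_torch_device(cls) -> torch.device:
        # Mirror the CUDA platform: target the device that belongs to this
        # process's local rank instead of defaulting to device 0.
        return torch.device(f"{cls.device_type}:{envs.LOCAL_RANK}")
```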

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @yeahdongcn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request provides a critical fix for the MUSA platform, which experienced a regression due to a previous change. The primary goal is to re-establish MUSA functionality by ensuring its implementation correctly handles device identification and memory queries, particularly in distributed settings, thereby aligning its behavior with existing CUDA platform standards.

Highlights

  • MUSA Platform Fix: This pull request addresses and resolves a regression introduced by PR #13662 ([NPU][diffusion] model: support WAN/FLUX/Qwen-Image/Qwen-Image-edit on Ascend), which broke MUSA platform functionality. The changes restore full MUSA support.
  • Distributed Device Identification: A new class method, get_local_torch_device, has been added to MusaPlatformBase to correctly identify the local MUSA device using the LOCAL_RANK environment variable, aligning with distributed training practices.
  • Distributed GPU Memory Query Correction: The get_available_gpu_memory function has been updated to correctly determine the device_id in distributed environments by utilizing torch.distributed.get_rank() when torch.distributed is initialized, ensuring accurate memory reporting for MUSA devices.


Changelog
  • python/sglang/multimodal_gen/runtime/platforms/musa.py
    • Imported the envs module to access environment variables.
    • Added the get_local_torch_device class method to MusaPlatformBase to return a torch.device object for the local MUSA device based on envs.LOCAL_RANK.
    • Modified the get_available_gpu_memory function to set device_id to torch.distributed.get_rank() if torch.distributed is initialized, ensuring correct device targeting in distributed setups.
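A hedged sketch of how the second changelog item likely reads in musa.py; the distributed-rank branch follows the changelog, while the function signature and the torch.musa.mem_get_info call are assumptions (torch_musa generally mirrors torch.cuda's API):

```python
import torch


def get_available_gpu_memory(device_id: int = 0) -> float:
    """Return free memory (in GiB) on the MUSA device owned by this process."""
    if torch.distributed.is_initialized():
        # In distributed runs, query the device that belongs to this rank
        # rather than whatever ID the caller passed in.
        device_id = torch.distributed.get_rank()
    # Assumes torch_musa mirrors torch.cuda's mem_get_info signature.
    free_bytes, _total_bytes = torch.musa.mem_get_info(device_id)
    return free_bytes / (1024 ** 3)
```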

@yeahdongcn
Collaborator Author

@Makcum888e Please also be aware of this.

@gemini-code-assist
Contributor

Code Review

This pull request aims to fix a regression on the MUSA platform by aligning its implementation with the existing CUDA platform. The changes introduce get_local_torch_device and modify get_available_gpu_memory to correctly handle device IDs in a distributed context. The changes are logical and consistent with the goal of aligning with the CUDA implementation. I have provided one suggestion to improve the robustness of device selection in multi-node distributed environments.
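The suggestion presumably targets multi-node jobs: torch.distributed.get_rank() returns the global rank, which on any node past the first exceeds the per-node device count, whereas the local rank always maps to a real device. A sketch of the more robust pattern (the function name here is illustrative):

```python
import os

import torch


def resolve_local_device_id(fallback: int = 0) -> int:
    # On a 2-node, 8-GPU-per-node job, global ranks on node 1 are 8..15,
    # which are not valid device indices; LOCAL_RANK (0..7) always is.
    if torch.distributed.is_initialized():
        return int(os.environ.get("LOCAL_RANK", fallback))
    return fallback
```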

@mickqian
Collaborator

mickqian commented Feb 9, 2026

/tag-and-rerun-ci

github-actions bot added the run-ci label Feb 9, 2026
@Makcum888e
Contributor

> Please also be aware of this.

Please add CI tests for your platform to prevent similar problems in the future.

@yeahdongcn
Collaborator Author

> Please also be aware of this.
>
> Please add CI tests for your platform to prevent similar problems in the future.

Yes, @johnnycxm is working on that.

@yeahdongcn
Collaborator Author

It appears that all the failing AMD CI jobs are related to the human-eval installation. Given that, could @mickqian @Kangyan-Zhou please proceed with merging this PR? Thanks!

@mickqian
Collaborator

mickqian commented Feb 9, 2026

/rerun-failed-ci

Commit: …t#13662
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Kangyan-Zhou merged commit 45a4697 into sgl-project:main Feb 14, 2026
92 of 94 checks passed
