
Added cuda availability guard#18480

Merged
b8zhong merged 3 commits into sgl-project:main from mattteochen:fix-cpu-import-moe
Feb 13, 2026

Conversation

@mattteochen
Contributor

@mattteochen mattteochen commented Feb 9, 2026

Motivation

An sgl_kernel function is imported without checking CUDA availability, which breaks CPU-only runs.
Raised from: https://sgl-fru7574.slack.com/archives/C07PEP77X6F/p1770521834654589

Modifications

Added torch.cuda.is_available() guard to the import.
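The guard described above can be sketched as a conditional import. This is a hedged reconstruction, not the PR's exact diff: the real change guards with torch.cuda.is_available() inside deep_gemm.py, while this self-contained version substitutes importlib.util.find_spec() so the sketch runs even where torch and sgl_kernel are not installed.

```python
import importlib
import importlib.util


def guarded_import(module_name: str, attr: str):
    """Return module_name.attr if the module is importable, else None.

    Stand-in for the PR's guard: there, the availability check is
    torch.cuda.is_available(); here it is find_spec(), so the sketch
    runs on any machine.
    """
    if importlib.util.find_spec(module_name) is None:
        return None
    return getattr(importlib.import_module(module_name), attr)


# On a CPU-only box without sgl_kernel this stays None instead of raising
# ImportError at import time -- the crash the PR fixes.
silu_and_mul = guarded_import("sgl_kernel", "silu_and_mul")
print(silu_and_mul is None or callable(silu_and_mul))
```

Callers on the CPU path then check for None (or simply never reach the CUDA branch), rather than crashing at module import.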

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @mattteochen, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness of the system by preventing import errors on environments without CUDA. Previously, an unconditional import of a CUDA-dependent kernel would cause failures on CPU-only setups. The change introduces a conditional check, ensuring that the GPU-specific module is only loaded when the necessary hardware is detected, thereby improving overall compatibility and stability across different execution environments.

Highlights

  • Conditional Import: Introduced a torch.cuda.is_available() check to conditionally import sgl_kernel functions, preventing errors on CPU-only systems.
  • CPU Compatibility: Ensures that the sgl_kernel module, which relies on CUDA, is only imported when a CUDA-enabled GPU is present, resolving issues for users running on CPU.


Changelog
  • python/sglang/srt/layers/moe/moe_runner/deep_gemm.py
    • Added "and torch.cuda.is_available()" to the if condition that imports silu_and_mul from sgl_kernel.
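The changelog entry above amounts to tightening an existing boolean condition. A minimal stand-in (the first flag name is illustrative; only the added torch.cuda.is_available() clause is stated in the changelog) shows the short-circuit effect:

```python
def should_import_kernel(existing_condition: bool, cuda_available: bool) -> bool:
    # Before the PR the guard was just `existing_condition`; the PR ANDs in
    # the runtime CUDA check, so `and` short-circuits to False on CPU-only
    # machines and the CUDA-only import is never attempted.
    return existing_condition and cuda_available


print(should_import_kernel(True, False))  # CPU-only machine: import skipped
print(should_import_kernel(True, True))   # GPU present: import proceeds
```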
Activity
  • The pull request was opened by mattteochen to address an issue reported on Slack about sgl_kernel breaking CPU runs.
  • The author has provided a motivation and modifications, along with a standard checklist for contributions.
  • No further review comments or CI activity is indicated in the provided context.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly adds a torch.cuda.is_available() guard to prevent importing a CUDA-specific kernel on CPU-only environments, which resolves a potential crash. The change is effective and addresses the issue. I've included one suggestion to refactor the platform check for better code consistency with the rest of the project.

@b8zhong
Collaborator

b8zhong commented Feb 9, 2026

/tag-and-rerun-ci

@github-actions github-actions bot added the run-ci label Feb 9, 2026
@b8zhong b8zhong enabled auto-merge (squash) February 10, 2026 13:54
@mattteochen
Contributor Author

@b8zhong could you trigger the failed tests again? They seem unrelated to the changes.

@b8zhong b8zhong merged commit 98ad284 into sgl-project:main Feb 13, 2026
262 of 275 checks passed
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
3 participants