
fix lint #4375

Merged

lvhan028 merged 1 commit into InternLM:main from windreamer:fix_lint on Feb 27, 2026

Conversation

@windreamer
Collaborator

No description provided.

Copilot AI review requested due to automatic review settings February 27, 2026 10:53
@windreamer windreamer requested a review from lvhan028 February 27, 2026 10:54
Contributor

Copilot AI left a comment

Pull request overview

This PR addresses linting issues by reformatting C++ code across multiple files in the turbomind codebase. The changes are purely cosmetic, improving code alignment, breaking long lines, and ensuring consistent formatting without altering any functionality.

Changes:

  • Reformatted line breaks and alignment in parameter lists, function calls, and variable declarations
  • Aligned inline comments for better readability
  • Ensured compliance with line length limits in configuration and kernel files

Reviewed changes

Copilot reviewed 7 out of 9 changed files in this pull request and generated no comments.

Summary per file:

  • src/turbomind/turbomind.cc: Reformatted a long assignment to fit the line length limit
  • src/turbomind/models/llama/moe_ffn_layer.cc: Aligned function call parameters for invokeMoeGate_V2
  • src/turbomind/models/llama/llama_params.h: Aligned inline comments for weight type variables
  • src/turbomind/models/llama/LlamaDecoderLayerWeight.cc: Aligned variable declarations for is_moe_layer and ffn_wtype
  • src/turbomind/kernels/gemm/moe_utils_v2.cu: Aligned function parameters, kernel parameters, and variable declarations throughout the file
  • src/turbomind/kernels/attention/codegen/decoding_sm80_576_bf16_u8.cu: Reformatted template instantiations to single lines
  • src/turbomind/kernels/attention/codegen/decoding_sm80_576_bf16_u4.cu: Reformatted template instantiations to single lines
  • src/turbomind/kernels/attention/codegen/decoding_sm80_576_bf16_bf16.cu: Reformatted template instantiations with the return type on a separate line
  • src/turbomind/kernels/attention/codegen/attention_sm80_576_bf16.cu: Reformatted a template instantiation to a single line


@lvhan028 lvhan028 merged commit f3b074b into InternLM:main on Feb 27, 2026
13 checks passed


3 participants