Adds GEMM Profiling Guide to TE #2863
Conversation
Greptile Summary
This PR adds a GEMM profiling guide to the Transformer Engine documentation and a companion benchmark tool (benchmarks/gemm/benchmark_gemm.py).
Confidence Score: 4/5
Safe to merge with one P1 fix: the verify-dgrad plot discrepancy should be resolved before the tool is used for benchmarking guidance. One P1 logic bug (the plot ignores measured Dgrad timings when --verify-dgrad is passed) caps the score at 4. The FP8Block omission in shape mode was already flagged in a prior thread. No security or data-corruption concerns. The area needing attention is benchmarks/gemm/benchmark_gemm.py — specifically create_model_config_plot and its call site in run_model_config_benchmarks.
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[CLI: main] --> B{has_model_config?}
B -- Yes --> C[run_model_config_benchmarks]
B -- No --> D[run_benchmarks shape/profile mode]
C --> E[compute_gemm_shapes fprop/dgrad/wgrad]
E --> F[_benchmark_single_shape per shape x precision]
F --> G{pre_quantize?}
G -- Yes --> H[benchmark_*_prequantized tex.generic_gemm]
G -- No --> I[benchmark_* te.Linear autocast]
C --> M{verify_dgrad?}
M -- Yes --> N[benchmark dgrad_shapes use measured sums]
M -- No --> O[assume Dgrad = Fprop x 2]
C --> P[print per-layer / full-model summary]
C --> Q[create_model_config_plot ALWAYS uses Fprop x 2 for Fprop+Dgrad bars]
D --> R[run BF16 / MXFP8 / NVFP4 NOTE: FP8Block omitted]
D --> S[create_plot]
style Q fill:#ffcccc
style R fill:#ffcccc
```
Reviews (4): Last reviewed commit: "adds blog post"
```python
results: dict[str, list[float]] = {"BF16": [], "MXFP8": [], "NVFP4": []}
time_results: dict[str, list[float]] = {"BF16": [], "MXFP8": [], "NVFP4": []}

has_blackwell = is_blackwell_available()
run_fp8 = include_fp8 and TE_AVAILABLE
run_fp4 = include_fp4 and TE_AVAILABLE and has_blackwell
```
FP8Block silently omitted in shape mode
run_benchmarks() (used for both default square-shape benchmarks and explicit --shapes invocations) never calls benchmark_fp8_block / benchmark_fp8_block_prequantized. The results dict is initialized with only "BF16", "MXFP8", and "NVFP4", and the function has no include_fp8_block parameter — so the --no-fp8-block flag parsed in main() is only forwarded to run_model_config_benchmarks (line 1579) and has no effect here.
Users who run the tool in shape mode (no model-config flags) will silently receive BF16/MXFP8/NVFP4 data only, even though the module docstring advertises "BF16, FP8 Block, MXFP8, and NVFP4 precisions."
To fix, add include_fp8_block: bool = True to run_benchmarks, initialize results["FP8Block"] = [], select fp8_block_fn the same way model-config mode does, and forward the flag from main(), as sketched below.
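A rough sketch of that fix (helper names like benchmark_fp8_block and TE_AVAILABLE are taken from the excerpts above; the real run_benchmarks signature has more parameters, so treat this as shape, not a drop-in patch):

```python
def run_benchmarks(shapes, include_fp8=True, include_fp4=True,
                   include_fp8_block=True, pre_quantize=False):
    results: dict[str, list[float]] = {"BF16": [], "MXFP8": [], "NVFP4": []}
    run_fp8_block = include_fp8_block and TE_AVAILABLE
    if run_fp8_block:
        results["FP8Block"] = []
        # Select the kernel the same way model-config mode does.
        fp8_block_fn = (benchmark_fp8_block_prequantized if pre_quantize
                        else benchmark_fp8_block)
    for m, k, n in shapes:
        # ... existing BF16 / MXFP8 / NVFP4 benchmarking ...
        if run_fp8_block:
            results["FP8Block"].append(fp8_block_fn(m, k, n))
    return results

# In main(), forward the already-parsed flag (assuming argparse's
# default dest naming for --no-fp8-block):
#   run_benchmarks(..., include_fp8_block=not args.no_fp8_block)
```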
```python
            color=op_color,
            alpha=0.9,
            label=f"{op_label} (Fprop+Dgrad)" if i == 0 or True else "",
        )
        ax.bar(
            x,
            wgrad_ms,
            bar_width,
            bottom=all_fprop_total + total_wgrad_bottom,
            color=op_color,
            alpha=0.5,
            label=f"{op_label} (Wgrad)" if i == 0 or True else "",
        )
```
Dead condition: if i == 0 or True always evaluates to True
Both label= expressions use if i == 0 or True, which unconditionally takes the True branch — or True makes the condition tautological. The intent was likely either True (always label, which is fine here since op_label differs per series) or if i == 0 (label only the first series). Clean it up to express intent clearly:
Suggested change:

```diff
-            label=f"{op_label} (Fprop+Dgrad)" if i == 0 or True else "",
+            label=f"{op_label} (Fprop+Dgrad)",
```
and
```diff
-            label=f"{op_label} (Wgrad)" if i == 0 or True else "",
+            label=f"{op_label} (Wgrad)",
```
```
* **profiler** -- ``torch.profiler`` (CUPTI) kernel timestamps.
  Only the matched GEMM compute kernels (nvjet, xmma, cutlass, cublas)
  are summed, giving a kernel-only measurement.
```
Docstring lists "cublas" but the pattern tuple uses "gemm" instead
The module docstring (line 19) lists the matched kernel patterns as (nvjet, xmma, cutlass, cublas), but GEMM_KERNEL_PATTERNS at line 70 is ("gemm", "nvjet", "xmma", "cutlass") — "cublas" is absent and "gemm" was added in its place. In practice "gemm" does catch cuBLAS kernels (their names contain gemm), so the behaviour is correct, but the docstring is inaccurate and may confuse users auditing kernel coverage.
Suggested change:

```diff
 * **profiler** -- ``torch.profiler`` (CUPTI) kernel timestamps.
-  Only the matched GEMM compute kernels (nvjet, xmma, cutlass, cublas)
+  Only the matched GEMM compute kernels (gemm, nvjet, xmma, cutlass)
   are summed, giving a kernel-only measurement.
```
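The match is presumably a substring test over lowercased kernel names; a standalone illustration of why "gemm" covers cuBLAS (the kernel name below is an example, not taken from the tool):

```python
GEMM_KERNEL_PATTERNS = ("gemm", "nvjet", "xmma", "cutlass")

def is_gemm_kernel(name: str) -> bool:
    # Case-insensitive substring match against the pattern tuple.
    low = name.lower()
    return any(pattern in low for pattern in GEMM_KERNEL_PATTERNS)

# cuBLAS kernel names typically embed "gemm", so they match even though
# "cublas" itself is not a pattern:
print(is_gemm_kernel("ampere_bf16_s16816gemm_bf16_128x128_ldg8_f2f_nn"))  # True
```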
Hi @jomitchellnv, I see that this PR is open, but the "Documentation" job is failing. If you fix it, please ping me and I'll review it.
|
@pggPL they should be fixed now I hope |
|
/te-ci L1 pytorch |
Signed-off-by: Jonathan Mitchell <jomitchell@ipp1-1334.ipp1a1.colossus.nvidia.com>
```python
        loc="upper right",
        fontsize=8,
        ncol=2,
```
--verify-dgrad plot silently uses approximation instead of measured values
When --verify-dgrad is passed, run_model_config_benchmarks benchmarks and records actual Dgrad timings into dgrad_results, and the printed table correctly shows those measured values. However, create_model_config_plot is never given dgrad_results — the call site only passes fprop_results and wgrad_results. Inside the plot function, Fprop+Dgrad bar height is always computed as fp.avg_time_ms * 2 (the approximation), so the chart silently contradicts the table when --verify-dgrad is used.
Fix: add dgrad_results and verify_dgrad parameters to create_model_config_plot, and when verify_dgrad=True, use fprop_ms[j] + dgrad_ms[j] instead of fprop_ms[j] * 2 for each op bar, as sketched below.
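A minimal sketch of that change (attribute and parameter names such as avg_time_ms follow the wording above and are assumptions about the real code):

```python
def create_model_config_plot(fprop_results, wgrad_results,
                             dgrad_results=None, verify_dgrad=False):
    fprop_dgrad_ms = []
    for j, fp in enumerate(fprop_results):
        if verify_dgrad and dgrad_results is not None:
            # Use the Dgrad timing actually measured under --verify-dgrad.
            fprop_dgrad_ms.append(fp.avg_time_ms + dgrad_results[j].avg_time_ms)
        else:
            # Fall back to the documented Dgrad = Fprop x 2 approximation.
            fprop_dgrad_ms.append(fp.avg_time_ms * 2)
    # ... existing stacked-bar plotting then uses fprop_dgrad_ms ...
```

The call site in run_model_config_benchmarks would then pass dgrad_results and verify_dgrad through.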
Description
Adds a GEMM profiling guide to the Transformer Engine documentation and a companion benchmark tool. The guide
explains how to derive all 12 per-layer GEMM shapes (Fprop, Dgrad, Wgrad) from transformer model
hyperparameters, benchmark them across precisions (BF16, FP8 Block, MXFP8, NVFP4), and interpret the resulting
speedup estimates.
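As a rough illustration of that derivation (the four-layer decomposition and names below are assumptions, not necessarily the guide's exact convention): for a linear layer with input [M, K] and weight [N, K], Fprop is an (M, K, N) GEMM, Dgrad (M, N, K), and Wgrad (N, M, K); four linear layers per transformer block times three passes gives the 12 shapes.

```python
# Hedged sketch of per-layer GEMM shape derivation; the actual
# compute_gemm_shapes in benchmark_gemm.py may differ in layer
# names and argument order.
def derive_gemm_shapes(hidden_size, intermediate_size, seq_len, batch_size):
    m = seq_len * batch_size  # tokens processed per step
    layers = {                # (K, N) for each linear layer in a block
        "qkv_proj": (hidden_size, 3 * hidden_size),
        "attn_out": (hidden_size, hidden_size),
        "mlp_up":   (hidden_size, intermediate_size),
        "mlp_down": (intermediate_size, hidden_size),
    }
    return {
        name: {
            "fprop": (m, k, n),  # Y  = X  @ W^T
            "dgrad": (m, n, k),  # dX = dY @ W
            "wgrad": (n, m, k),  # dW = dY^T @ X
        }
        for name, (k, n) in layers.items()
    }  # 4 layers x 3 passes = 12 GEMM shapes
```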
The benchmark tool supports two modes: model config mode (derives shapes automatically from hidden_size,
intermediate_size, etc.) and manual shape mode (explicit MxKxN triplets). It measures both autocast performance
(realistic end-to-end with quantization overhead) and pre-quantized kernel-only throughput, using CUDA events
or torch.profiler timing backends.
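The CUDA-event backend presumably follows the standard PyTorch timing pattern; a generic, self-contained sketch (not the tool's actual implementation):

```python
import torch

def time_gemm_cuda_events(fn, warmup=10, iters=100):
    # Warm up so lazy initialization and autotuning don't skew the timing.
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()  # wait for all queued kernels to finish
    return start.elapsed_time(end) / iters  # average milliseconds per call
```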
Changes
- Add benchmarks/gemm/benchmark_gemm.py — standalone GEMM benchmark tool supporting BF16, FP8 Block, MXFP8, and NVFP4 precisions, with autocast and pre-quantized modes, CUDA event and torch.profiler timing, Nsight Systems integration, and bar-chart output
- Add docs/features/low_precision_training/gemm_profiling/gemm_profiling.rst — documentation covering GEMM shape derivation from model configs, forward/backward pass shape conventions, precision mapping per GEMM pass, speedup calculation methodology, and a worked example on B300
- Add benchmark result plots (img/model_config_speedup.png, img/model_config_speedup_prequant.png)
- Update the docs/features/low_precision_training/index.rst toctree to include the new guide