Summary
TensorRT-RTX 1.4.0.76 succeeds on a minimal Identity ONNX model but fails on both yolo26m.onnx and yolo26m-seg.onnx during CPU-only AOT generation on an RTX 2070 Max-Q (sm_75).
The failure is an internal Myelin error:
MyelinCheckException: hfc.cpp:120: CHECK(false) failed. target_sm '110' does not have arch kind assigned.
I also reproduced the same failure class with the CUDA 12.9 SDK variant, where the internal target was 101 instead of 110.
Environment
- OS: Ubuntu 24.04.4 LTS
- Kernel: 6.17.0-22-generic
- GPU: NVIDIA GeForce RTX 2070 with Max-Q Design
- Compute capability: 7.5 (sm_75)
- Driver version: 580.142
- Local CUDA toolkit: 13.2 (V13.2.78)
- TensorRT-RTX SDKs tested:
  - TensorRT-RTX-1.4.0.76-Linux-x86_64-cuda-12.9-Release-external.tar.gz
  - TensorRT-RTX-1.4.0.76-Linux-x86_64-cuda-13.2-Release-external.tar.gz
Repro Commands
Passing minimal model:
env PATH="/usr/local/cuda-13.2/bin:$PATH" \
LD_LIBRARY_PATH="/usr/local/cuda-13.2/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
/tmp/trtx_sdk_132/TensorRT-RTX-1.4.0.76/bin/tensorrt_rtx \
--onnx=/home/dave/Notes/TensorRT-RTX/models/trtx_identity.onnx \
--saveEngine=/tmp/trtx_identity_fp16_1x3x640x640.trt \
--cpuOnly \
--skipInference
Failing YOLO model:
env PATH="/usr/local/cuda-13.2/bin:$PATH" \
LD_LIBRARY_PATH="/usr/local/cuda-13.2/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
/tmp/trtx_sdk_132/TensorRT-RTX-1.4.0.76/bin/tensorrt_rtx \
--onnx=/home/dave/Notes/TensorRT-RTX/models/yolo26m.onnx \
--saveEngine=/tmp/yolo26m_fp16_1x3x640x640.trt \
--cpuOnly \
--skipInference
Failing YOLO segmentation model:
env PATH="/usr/local/cuda-13.2/bin:$PATH" \
LD_LIBRARY_PATH="/usr/local/cuda-13.2/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
/tmp/trtx_sdk_132/TensorRT-RTX-1.4.0.76/bin/tensorrt_rtx \
--onnx=/home/dave/Notes/TensorRT-RTX/models/yolo26m-seg.onnx \
--saveEngine=/tmp/yolo26m-seg_fp16_1x3x640x640.trt \
--cpuOnly \
--skipInference
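The three invocations above can be wrapped in a single script that also tees each build into a per-model log file matching the attached log names. This is a convenience sketch using only the paths and flags already shown in this report; it skips execution if the `tensorrt_rtx` binary is not present:

```shell
#!/bin/sh
# Run all three CPU-only AOT builds with the CUDA 13.2 SDK,
# capturing each build output to /tmp/<model>_trt_rtx_build.log.
TRTX=/tmp/trtx_sdk_132/TensorRT-RTX-1.4.0.76/bin/tensorrt_rtx
MODELS=/home/dave/Notes/TensorRT-RTX/models
export PATH="/usr/local/cuda-13.2/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-13.2/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

LOGS=""
for m in trtx_identity yolo26m yolo26m-seg; do
  log="/tmp/${m}_trt_rtx_build.log"
  if [ -x "$TRTX" ]; then
    "$TRTX" --onnx="$MODELS/$m.onnx" \
            --saveEngine="/tmp/${m}_fp16_1x3x640x640.trt" \
            --cpuOnly --skipInference 2>&1 | tee "$log"
  fi
  LOGS="$LOGS $log"
done
```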
Observed Behavior
Minimal Identity ONNX:
- build succeeds
- TensorRT-RTX reports `PASSED`

Both YOLO ONNX models:
- model parsing succeeds
- build fails in Myelin with:

[E] Error[1]: IBuilder::buildSerializedNetworkToStream: Error Code 1: Myelin ([myelin_graph.h:1250: attachExceptionMsgToGraph] MyelinCheckException: hfc.cpp:120: CHECK(false) failed. target_sm '110' does not have arch kind assigned. In compileGraph at /_src/optimizer/myelin/codeGenerator.cpp:1783)
[I] Created engine with size: 0 MiB
[E] Assertion failure: false && "Attempting to access an empty engine!"
Earlier CUDA 12.9 SDK result for yolo26m.onnx:
Internal Error: MyelinCheckException: hfc.cpp:120: CHECK(false) failed. target_sm '101' does not have arch kind assigned.
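For triage, the internal `target_sm` value can be scraped out of a build log to compare SDK variants (110 for the CUDA 13.2 SDK vs 101 for the CUDA 12.9 SDK). A minimal stdlib-only sketch; the function name is illustrative:

```python
# Sketch: extract the Myelin target_sm value from a tensorrt_rtx
# build log, to compare failures across SDK variants.
import re

def myelin_target_sm(log_text):
    """Return the target_sm value from a MyelinCheckException line, or None."""
    m = re.search(r"target_sm '(\d+)' does not have arch kind assigned", log_text)
    return m.group(1) if m else None

# Error lines quoted verbatim from this report:
line_cuda132 = ("MyelinCheckException: hfc.cpp:120: CHECK(false) failed. "
                "target_sm '110' does not have arch kind assigned.")
line_cuda129 = ("MyelinCheckException: hfc.cpp:120: CHECK(false) failed. "
                "target_sm '101' does not have arch kind assigned.")

print(myelin_target_sm(line_cuda132))  # 110
print(myelin_target_sm(line_cuda129))  # 101
```

Neither 110 nor 101 corresponds to the actual compute capability of the GPU here (7.5 / sm_75), which is why this looks like an internal target-selection issue rather than a genuine capability mismatch.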
Expected Behavior
If these models are unsupported, I would expect an explicit model-support or operator-support error.
Instead, TensorRT-RTX fails with an internal compiler assertion.
Request
Please clarify:
- Whether this is a known TensorRT-RTX 1.4 issue on Turing / sm_75.
- Whether these YOLO ONNX exports use an unsupported graph pattern.
- Whether there is a workaround, export change, or internal flag that avoids this failure.
- Whether a future release is expected to fix it.
Attached Logs
yolo26m_trt_rtx_build.log
trtx_identity_trt_rtx_build.log
yolo26m-seg_trt_rtx_build.log