
The ExecuTorch QNN backend runner can't run the .pte model built by following the tutorial. #8762

@TheBetterSolution

Description

🐛 Describe the bug

I followed the tutorial to build the QNN backend runner and run it:
https://pytorch.org/executorch/main/build-run-qualcomm-ai-engine-direct-backend.html

But at the run-model step:

adb shell "cd ${DEVICE_DIR} \
           && export LD_LIBRARY_PATH=${DEVICE_DIR} \
           && export ADSP_LIBRARY_PATH=${DEVICE_DIR} \
           && ./qnn_executor_runner --model_path ./dlv3_qnn.pte"

I got the following error and warning messages:

I 00:00:00.000852 executorch:qnn_executor_runner.cpp:160] Model file ./dl3_qnn_q8.pte is loaded.
I 00:00:00.000883 executorch:qnn_executor_runner.cpp:170] Using method forward
I 00:00:00.000896 executorch:qnn_executor_runner.cpp:217] Setting up planned buffer 0, size 9031680.
[INFO] [Qnn ExecuTorch]: Deserializing processed data using QnnContextCustomProtocol
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]:  <W> Initializing HtpProvider

[ERROR] [Qnn ExecuTorch]:  <E> Stub lib id mismatch: expected (v2.28.2.241116104011_103376), detected (v2.25.17.241017130936_18858)

[ERROR] [Qnn ExecuTorch]:  <E> Unable to load Remote symbols 1008

[ERROR] [Qnn ExecuTorch]:  <E> Unable to load Remote symbols 1008

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[INFO] [Qnn ExecuTorch]: QnnContextCustomProtocol expected magic number: 0x5678abcd but get: 0x2000000

I've verified that I only installed the (v2.28.2.241116104011_103376) HTP SDK, so do you know why the detected stub is v2.25.17.241017130936_18858?
[ERROR] [Qnn ExecuTorch]: <E> Stub lib id mismatch: expected (v2.28.2.241116104011_103376), detected (v2.25.17.241017130936_18858)

Please also note this message:
QnnContextCustomProtocol expected magic number: 0x5678abcd but get: 0x2000000
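For what it's worth, the two ids in the stub-mismatch error appear to decode into a QNN version plus a build timestamp and build number. A small sketch of that reading (my own parsing of the strings quoted above, not a QNN SDK API):

```python
import re

# The two stub-library ids copied from the error message above.
err = ("Stub lib id mismatch: expected (v2.28.2.241116104011_103376), "
       "detected (v2.25.17.241017130936_18858)")

# Each id seems to follow the pattern v<qnn-version>.<build-timestamp>_<build-number>.
ids = re.findall(r"v(\d+\.\d+\.\d+)\.(\d+)_(\d+)", err)
(exp_ver, exp_ts, _), (det_ver, det_ts, _) = ids

print(exp_ver, det_ver)  # 2.28.2 2.25.17
```

If that reading is right, the runner built against SDK 2.28.2 is picking up a stub library from a 2.25.17 install somewhere on the device's library search path.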

Thanks.

Versions
Collecting environment information...
PyTorch version: 2.6.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35

Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
Nvidia driver version: 560.94
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+791472d
[pip3] numpy==2.0.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.6.0+cpu
[pip3] torchao==0.8.0+git11333ba2
[pip3] torchaudio==2.6.0+cpu
[pip3] torchsr==1.0.4
[pip3] torchtune==0.6.0.dev20250131+cu124
[pip3] torchvision==0.21.0+cpu
[conda] Could not collect

cc @cccclai @winskuo-quic @shewu-quic @cbilgin @mergennachin @byjlw

Labels

module: qnn · module: user experience · partner: qualcomm · triaged