Trellis quants: faster CPU prompt processing (#482)
Merged
Conversation
added 5 commits on May 31, 2025 at 11:08
For iq4_kt this results in a massive PP improvement from PP512 = ~42 t/s to PP512 = 128 t/s.
iq2_kt: from PP512 = 57.3 t/s to PP512 = 135.0 t/s; iq3_kt: from PP512 = 43.8 t/s to PP512 = 131.4 t/s
iq2_kt: PP512 = 79 t/s from 42 t/s; iq3_kt: PP512 = 81 t/s from 35 t/s. Also found the reason why the f16 implementation for iq4_kt was not working: it overflows. It works after multiplying with the row scale before doing the multiply-adds.
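The overflow can be illustrated with a small numpy sketch (illustrative only, not the actual NEON kernel): fp16 tops out at 65504, so accumulating unscaled dequantized values over a long row blows up to infinity, while applying the per-row scale before the multiply-adds keeps the partial sums in range.

```python
import numpy as np

# Illustrative sketch (not the actual iq4_kt NEON kernel): dequantized
# values before the row scale is applied can be large, so an fp16 dot
# product over a long row overflows (fp16 max is 65504).
rng = np.random.default_rng(0)
n = 4096
row_scale = np.float16(0.01)  # hypothetical per-row scale

# "Unscaled" dequantized weights and activations, both O(10).
w = rng.uniform(5.0, 10.0, n).astype(np.float16)
x = rng.uniform(5.0, 10.0, n).astype(np.float16)

# Accumulating in fp16 without the scale: partial sums pass 65504 -> inf.
acc = np.float16(0.0)
for wi, xi in zip(w, x):
    acc = np.float16(acc + wi * xi)
overflowed = bool(np.isinf(acc))

# Applying the row scale *before* the multiply-adds keeps sums in range.
acc_scaled = np.float16(0.0)
for wi, xi in zip(w, x):
    acc_scaled = np.float16(acc_scaled + np.float16(wi * row_scale) * xi)
finite = bool(np.isfinite(acc_scaled))
```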
iq4_kt: PP512 = 86 t/s from 29 t/s
Nexesenex pushed a commit to Nexesenex/ik_llama.cpp.nxs that referenced this pull request on Jun 2, 2025:
* iq2_kt: Metal dequantize
* iq2_kt: Metal GEMV. Performance is actually quite decent: 52 t/s on my M2-Max for LlaMA-3.1-8B
* iq3_kt: Metal dequantize
* iq3_kt: Metal GEMV. Performance is not as good as iq2_kt: 40 t/s on my M2-Max for LlaMA-3.1-8B. Flipping signs is a costly affair.
* iq4_kt: Metal dequantize - getting NaNs
* iq4_kt: Metal GEMV - also not working
* iq4_kt: Metal still not working
* Disable iq4_kt on Metal for now

Trellis quants: faster CPU prompt processing (ikawrakow#482)

* Experimenting with dequant + f32 GEMM: for iq4_kt this results in a massive PP improvement from PP512 = ~42 t/s to PP512 = 128 t/s.
* Experimenting with dequant + f32 GEMM: iq2_kt from PP512 = 57.3 t/s to PP512 = 135.0 t/s; iq3_kt from PP512 = 43.8 t/s to PP512 = 131.4 t/s.
* Experimenting with dequant + f16 GEMM on NEON: iq2_kt PP512 = 79 t/s from 42 t/s; iq3_kt PP512 = 81 t/s from 35 t/s. Also found the reason why the f16 implementation for iq4_kt was not working: it overflows. It works after multiplying with the row scale before doing the multiply-adds.
* Experimenting with dequant + f16 GEMM on NEON: iq4_kt PP512 = 86 t/s from 29 t/s.
* Minor

Minor (~2%) iq2_ks TG performance improvement on CUDA (ikawrakow#468)

Direct conversion from fp16 to Q6_0
The trellis quants IQ2_KT, IQ3_KT, IQ4_KT are very slow on the CPU. On the main branch, using BLAS results in better prompt processing performance, but BLAS is slower for basically all other data types, so that's not a good idea.

This PR improves prompt processing speed of the trellis quants by adding "dequantizing GEMM": blocks of trellis-quantized weights are converted to fp32 (AVX2) or fp16 (ARM) on-the-fly, and then the fp32/fp16 GEMM kernels are used to multiply the block with the entire right matrix. This amortizes the very high dequantization cost much better than the standard kernel templates, which allow up to 8 right matrix columns.

On my Zen4/AVX2 CPUs this results in better PP performance than using BLAS (or Intel MKL). On the M2-Max, PP performance is about 80% of BLAS (which tells me that my ARM_NEON GEMM kernel for fp16 is not optimal). TG performance is not affected by this PR and is still very low.
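The dequantizing-GEMM idea can be sketched in a few lines of numpy (a schematic, not the actual AVX2/NEON kernels; `dequant_block` is a hypothetical stand-in for the real trellis decoder):

```python
import numpy as np

# Schematic of "dequantizing GEMM" (not the actual ik_llama.cpp kernels).
# Instead of decoding trellis weights once per small GEMM tile (up to 8
# right-matrix columns), each block of rows is decoded to fp32 once and
# multiplied with the *entire* right matrix, amortizing the costly decode.

def dequant_block(qblock, scales):
    """Hypothetical stand-in for the trellis decoder: scale int8 codes."""
    return qblock.astype(np.float32) * scales

def dequantizing_gemm(qweights, scales, right, rows_per_block=32):
    """Compute W @ right, decoding W one row-block at a time."""
    nrows = qweights.shape[0]
    out = np.empty((nrows, right.shape[1]), dtype=np.float32)
    for r0 in range(0, nrows, rows_per_block):
        r1 = min(r0 + rows_per_block, nrows)
        wblock = dequant_block(qweights[r0:r1], scales[r0:r1, None])
        out[r0:r1] = wblock @ right  # fp32 GEMM over all columns at once
    return out

rng = np.random.default_rng(1)
q = rng.integers(-8, 8, size=(64, 128), dtype=np.int8)
s = rng.uniform(0.5, 1.5, 64).astype(np.float32)
x = rng.standard_normal((128, 512)).astype(np.float32)

ref = (q.astype(np.float32) * s[:, None]) @ x
got = dequantizing_gemm(q, s, x)
```

The decode cost is paid once per row-block rather than once per 8-column tile, which is where the roughly 2-3x PP speedup in the numbers above comes from.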
Here is a PP-512 performance comparison between the main branch (without BLAS) and this PR for LlaMA-3.1-8B on a Ryzen-7950X CPU