[feat] parameterize hardcoded MLA dimensions to support GLM5-W8A8 (#6902)
Derive MLA dimension constants (q_lora_rank, qk_nope_head_dim, etc.)
from tensor shapes at runtime instead of hardcoding DeepSeek V3 values.
This enables the mla_preprocess fused op to work with both DeepSeek V3
and GLM5 models without Python API changes.
- Add 9 dimension fields to MlaTilingData with DeepSeek V3 defaults
- Add OpParam fields and make all host-side tiling functions compute from runtime dimensions
- Derive dimensions from wuk, gamma1, kv_cache_rope tensor shapes
- Replace 310+ hardcoded constants across 4 kernel .hpp files
- Remove unused MMSIZE1/MMSIZE2 constants
### How was this patch tested?
- vLLM version: v0.16.0
- vLLM main: 15d76f74e2
---------
Signed-off-by: liuchenbing <chenliumail@163.com>
Co-authored-by: liuchenbing <chenliumail@163.com>
@@ -60,9 +60,6 @@ constexpr uint32_t SPLIT_RMSNRORM_SIZE_TWO = 64;
 constexpr uint32_t ROPE_SPLIT_SIZE_ONE = 64;
 constexpr uint32_t ROPE_SPLIT_SIZE_TWO = 128;
 
-constexpr uint32_t MMSIZE1 = 128 * 192; // 24576
-constexpr uint32_t MMSIZE2 = 64 * 128; // 8192
-
 constexpr uint64_t L0_PINGPONG_BUFFER_LEN = 32768; // 32 KB
 constexpr uint64_t L1_PINGPONG_BUFFER_LEN = 262144; // 256 KB
 constexpr uint64_t BLOCK_SIZE_16 = 16;