### What this PR does / why we need it?
`mxfp_compat` only provides dtype/symbol compatibility helpers for
different `torch_npu` versions, but it was placed under
`vllm_ascend.quantization`. Importing it from device/ops paths could
trigger `quantization/__init__.py` and pull in heavy quantization method
dependencies, increasing startup coupling and causing import-cycle risk
(especially on 310P paths).
### Does this PR introduce _any_ user-facing change?
No functional behavior change intended.
### How was this patch tested?
CI passed.
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
### What this PR does / why we need it?
Add a muls_add Triton kernel with a related fusion pass. In addition, this
PR refactors `AscendCompilationConfig` and deletes `NpugraphExConfig`.
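For illustration only (this is not the kernel added by this PR), a generic elementwise muls_add Triton kernel computing `out = x * scale + y` could look like the following sketch; the name, signature and block size are assumptions:
```python
import triton
import triton.language as tl

@triton.jit
def muls_add_kernel(x_ptr, y_ptr, out_ptr, scale, n_elements,
                    BLOCK: tl.constexpr):
    # Fused multiply-by-scalar and add: out = x * scale + y in one kernel.
    pid = tl.program_id(0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x * scale + y, mask=mask)
```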
### Does this PR introduce _any_ user-facing change?
None
### How was this patch tested?
CI passed with the newly added test.
- vLLM version: v0.13.0
- vLLM main:
45c1ca1ca1
---------
Signed-off-by: whx-sjtu <2952154980@qq.com>
### What this PR does / why we need it?
PR #5632 introduced a bug by replacing some branches gated by enable_sp
with enable_flash_comm_v1. As a result, when enable_shared_expert_dp is
enabled alone (i.e., VLLM_ASCEND_ENABLE_FLASHCOMM1=0 and
VLLM_ASCEND_ENABLE_FLASHCOMM=0), the behavior becomes inconsistent with
the previous logic and leads to accuracy issues. This PR restores the
original enable_sp-based branching to recover expected behavior and
accuracy.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
#### 1. start server
``` bash
vllm serve /home/weights/DeepSeek-V2-Lite-W8A8/ \
--port 8001 \
--served-model-name auto \
--max-model-len 1024 \
--enforce-eager \
--tensor-parallel-size 2 \
--data-parallel-size 2 \
--gpu-memory-utilization 0.9 \
--enable-expert-parallel \
--additional-config '{"enable_shared_expert_dp": true}'
```
#### 2. curl
```bash
curl -s http://localhost:8001/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"messages": [
{"role": "user", "content": "Hello. I have a question. Who are you?"}
],
"max_tokens": 10,
"temperature": 0.0,
"ignore_eos_token": true
}'
```
- vLLM version: v0.16.0
- vLLM main:
15d76f74e2
Signed-off-by: realliujiaxu <realliujiaxu@163.com>
### What this PR does / why we need it?
Currently, the performance of multi-modal encoding (i.e.,
`AscendMMEncoderAttention` forward) is considerably bounded by the heavy
host pre-process operations.
We can see from the profiling results below, before the real computation
of Attention, there are long free time in the device, which will lead to
extremely low NPU utilization.
<img width="2264" height="1398" alt="iShot_2026-01-23_16 26 39"
src="https://github.com/user-attachments/assets/37f21d06-e526-4f28-82fe-005746cf13bd"
/>
---
**To optimize this, this PR proposes four changes:**
1. Use a `seq_lens` CPU cache to avoid frequent d2h copies (see the sketch
after this list). Before this PR, `AscendMMEncoderAttention` copied
`cu_seqlens` from NPU to CPU on every forward, since the op
`_npu_flash_attention_unpad()` requires a CPU `cu_seqlens` (otherwise it
crashes). We now cache this tensor in `seq_lens_cpu_cache`, since it is
shared by all layers but may change between forward steps. When the current
`layer_index` is `0` we update the cache; otherwise we reuse it, avoiding the
costly per-layer `diff` and `copy` operations.
2. Pre-compute the scale value instead of recalculating it on every forward.
3. Move the `enable_pad` judgment from forward to the `__init__` method.
4. Revert https://github.com/vllm-project/vllm-ascend/pull/6204.
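A minimal sketch of the caching idea in change 1 (class, attribute and method names here are illustrative, not the exact implementation):
```python
from typing import Optional
import torch

class MMEncoderAttentionCacheSketch:
    # Shared across all layers; refreshed once per forward step.
    _seq_lens_cpu_cache: Optional[torch.Tensor] = None

    @classmethod
    def get_seq_lens_cpu(cls, cu_seqlens: torch.Tensor,
                         layer_index: int) -> torch.Tensor:
        # Only layer 0 pays the diff + device-to-host copy; later layers
        # reuse the cached CPU tensor within the same forward step.
        if layer_index == 0 or cls._seq_lens_cpu_cache is None:
            cls._seq_lens_cpu_cache = torch.diff(cu_seqlens).cpu()
        return cls._seq_lens_cpu_cache
```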
**Performance after these optimizations:**
- **TTFT** has been reduced by **7.43%** ⬇️.
- **Throughput** has been increased by **1.23%** ⬆️.
---
> [!NOTE]
> This PR requires https://github.com/vllm-project/vllm/pull/33674 to be merged.
---
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Launch the server:
```bash
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--dtype bfloat16 \
--limit-mm-per-prompt '{"image": 1}' \
--max-model-len 16384 \
--max-num-batched-tokens 16384 \
--no-async-scheduling
```
Run benchmark:
```bash
vllm bench serve \
--model /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--backend openai-chat \
--endpoint /v1/chat/completions \
--dataset-name hf \
--hf-split train \
--dataset-path lmarena-ai/vision-arena-bench-v0.1 \
--num-prompts 500 \
--request-rate 10 \
--burstiness 5 \
--no-stream
```
Before this PR:
```
============ Serving Benchmark Result ============
Successful requests: 500
Failed requests: 0
Request rate configured (RPS): 10.00
Benchmark duration (s): 82.23
Total input tokens: 33418
Total generated tokens: 61543
Request throughput (req/s): 6.08
Output token throughput (tok/s): 748.45
Peak output token throughput (tok/s): 3203.00
Peak concurrent requests: 402.00
Total token throughput (tok/s): 1154.86
---------------Time to First Token----------------
Mean TTFT (ms): 10275.37
Median TTFT (ms): 6297.88
P99 TTFT (ms): 22918.26
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 263.02
Median TPOT (ms): 277.61
P99 TPOT (ms): 483.56
---------------Inter-token Latency----------------
Mean ITL (ms): 257.31
Median ITL (ms): 94.83
P99 ITL (ms): 1773.90
==================================================
```
After this PR:
```
============ Serving Benchmark Result ============
Successful requests: 500
Failed requests: 0
Request rate configured (RPS): 10.00
Benchmark duration (s): 81.20
Total input tokens: 33418
Total generated tokens: 61509
Request throughput (req/s): 6.16
Output token throughput (tok/s): 757.54
Peak output token throughput (tok/s): 2562.00
Peak concurrent requests: 395.00
Total token throughput (tok/s): 1169.11
---------------Time to First Token----------------
Mean TTFT (ms): 9511.91
Median TTFT (ms): 5479.78
P99 TTFT (ms): 21427.21
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 261.12
Median TPOT (ms): 276.03
P99 TPOT (ms): 446.99
---------------Inter-token Latency----------------
Mean ITL (ms): 254.04
Median ITL (ms): 97.71
P99 ITL (ms): 1516.67
==================================================
```
- vLLM version: v0.15.0
- vLLM main:
dc917cceb8
Signed-off-by: shen-shanshan <467638484@qq.com>
### What this PR does / why we need it?
1. Fix a vec error caused by unaligned UB access in DispatchFFNCombine;
2. Fix the `expert_token_nums` tensor being defined on the host instead of
the NPU in moe_comm_method.py (see the sketch after this list);
3. Fix the multi-core copy issue of `expert_token_nums` in the
DispatchFFNCombine op (a single AIV copy is sufficient).
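For fix 2, the change amounts to allocating the tensor on the NPU instead of letting it default to the host; a hedged sketch (dtype and names are illustrative):
```python
import torch

local_expert_num = 8  # illustrative value

# Before: the tensor was created on the host (CPU) by default, even though a
# device-side op consumes it later.
# expert_token_nums = torch.zeros(local_expert_num, dtype=torch.int32)

# After: allocate it directly on the NPU.
expert_token_nums = torch.zeros(local_expert_num, dtype=torch.int32,
                                device="npu")
```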
### Does this PR introduce _any_ user-facing change?
No, this PR does not introduce any user-facing changes. The fix only
addresses internal memory access logic and does not modify any public
APIs, interfaces, or user-visible behaviors.
### How was this patch tested?
Tested with `export VLLM_ASCEND_ENABLE_FUSED_MC2=1`.
- vLLM version: v0.15.0
- vLLM main:
9562912cea
Signed-off-by: xulei_ict <xulei292@huawei.com>
Co-authored-by: xulei_ict <xulei292@huawei.com>
### What this PR does / why we need it?
This PR extends the original `rope_triton_forward` and
`split_qkv_rmsnorm_rope` to accept `cos_sin_cache` and `positions` as
inputs, fully aligning with the vLLM RoPE API. Compared with the earlier
RoPE implementation, the benefits are:
1. avoiding the pre-computation of `cos`/`sin` before model execution, which
removes redundant code.
2. allowing the eagle3 draft model to use different RoPE parameters from the
main model (see #6612), which recovers the accept rate and accuracy in that
case.
In addition, this kernel change introduces only a very small performance
degradation: the former `index_select` and `chunk` operations become simple
memory accesses inside the Triton kernel (for example,
https://github.com/vllm-project/vllm-ascend/pull/5450/changes#diff-a4c2d3071530df193b98f9bf38553874bc4d47571336711f116c26d019cfbb6aR77-R81).
**Highlights**
- **RoPE Cache Unification**: Replaced separate _sin and _cos global
tensors with a unified cos_sin_cache and explicit positions tensor for
Rotary Positional Embeddings (RoPE), streamlining data handling.
- **Triton Kernel Integration**: Updated Triton kernels
(split_qkv_rmsnorm_rope_kernel, _triton_rope) to directly consume the
cos_sin_cache and positions for more efficient and integrated RoPE
calculations.
- **Custom Operation Registration**: Registered `rope_forward_oot` as a
new custom operation, allowing its use in fused compilation passes and
providing a dedicated entry point for the new RoPE implementation.
- **Refactored RoPE Forward Pass**: Modified the rope_forward_oot
function to accept the new cos_sin_cache and positions arguments,
enabling a more flexible and integrated RoPE application within the
system.
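Conceptually, the kernels now gather cos/sin from the cache per token instead of relying on pre-computed `cos`/`sin` tensors. A host-side sketch of the equivalent gather, following vLLM's `cos_sin_cache` layout of concatenated cos/sin halves (the real work happens inside the Triton kernels):
```python
import torch


def gather_cos_sin(cos_sin_cache: torch.Tensor, positions: torch.Tensor):
    """cos_sin_cache: [max_position, rotary_dim], cos and sin halves
    concatenated on the last dim; positions: [num_tokens]."""
    cos_sin = cos_sin_cache.index_select(0, positions.flatten())
    cos, sin = cos_sin.chunk(2, dim=-1)  # each [num_tokens, rotary_dim // 2]
    return cos, sin
```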
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main:
5326c89803
Additional test on Qwen3-235b accuracy:
| Aime2024 | GSM8K | Livecodebench |
| -------- | -------- | -------- |
| 83.33 | 96.26 | 70.23 |
---------
Signed-off-by: Angazenn <supperccell@163.com>
### What this PR does / why we need it?
To prevent confusion between different QuantType classes, we remove**
QuantType in prepare_finalize.py
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
### What this PR does / why we need it?
This PR fixes an `AttributeError: 'Parameter' object has no attribute
'data'` that occurs when MLAPO is enabled with vLLM v0.15.0.
The error is caused by a monkey-patch on
`MLAAttention.process_weights_after_loading` which is incompatible with
changes in vLLM v0.15.0. This is likely related to PyTorch's deprecation
of the `.data` attribute on `torch.nn.Parameter` objects.
This change makes the monkey-patch conditional, so it is not applied for
vLLM v0.15.0 and newer versions, resolving the crash.
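The gating is conceptually along these lines (a sketch; `vllm_version_is` is assumed to be the existing version helper in `vllm_ascend.utils`, and the patch hook name is hypothetical):
```python
from vllm_ascend.utils import vllm_version_is  # assumed version helper


def maybe_patch_mla_weight_loading():
    # The monkey-patch on MLAAttention.process_weights_after_loading is only
    # valid for older vLLM; on v0.15.0+ it crashes with
    # "'Parameter' object has no attribute 'data'", so it is skipped there.
    if vllm_version_is("0.14.1"):
        _apply_mlapo_process_weights_patch()  # hypothetical patch hook
```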
- vLLM version: v0.15.0
- vLLM main:
d7e17aaacd
Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
### What this PR does / why we need it?
Issue: if a model such as Ling-1T adopts partial rotary position embedding
(partial RoPE) but its config.json uses the `rotary_dim` parameter instead
of `partial_rotary_factor`, it triggers a `RuntimeError: The expanded size
of the tensor (128) must match the existing size (64) at non-singleton
dimension 3`.
<img width="1681" height="472" alt="image"
src="https://github.com/user-attachments/assets/ba03d7df-ecba-4d6f-9ec1-4dc55f59799e"
/>
This PR addresses an issue where models using partial rotary position
embedding (partial RoPE) with the `rotary_dim` parameter in
`config.json` (instead of `partial_rotary_factor`) would encounter a
`RuntimeError`.
This change adds support for the `rotary_dim` parameter in
`vllm_ascend/ops/rotary_embedding.py` to correctly calculate the
`rope_dim`, resolving the tensor size mismatch error.
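Conceptually, the dimension resolution now tolerates either field; a hedged sketch (function and attribute names are illustrative):
```python
def resolve_rope_dim(config, head_size: int) -> int:
    # Prefer an explicit rotary_dim (as in Ling-1T's config.json); otherwise
    # fall back to partial_rotary_factor, defaulting to full-head RoPE.
    rotary_dim = getattr(config, "rotary_dim", None)
    if rotary_dim is not None:
        return int(rotary_dim)
    return int(head_size * getattr(config, "partial_rotary_factor", 1.0))
```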
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
The patch was tested successfully with the Ling-1T model, which
previously triggered the error.
- vLLM version: v0.15.0
- vLLM main:
d7e17aaacd
Signed-off-by: GoCHug <93277779+GoCHug@users.noreply.github.com>
### What this PR does / why we need it?
This PR removes the custom `rotary_embedding` operator and its
associated C++ kernel implementation, PyTorch bindings, and tests.
The codebase now falls back to using the native
`torch_npu._npu_rotary_embedding` implementation. This change simplifies
the codebase by removing custom, platform-specific kernel code and
relying on the standard NPU library implementation, which is presumably
more optimized and easier to maintain.
### Does this PR introduce _any_ user-facing change?
No. This is an internal refactoring and does not introduce any
user-facing changes.
### How was this patch tested?
The tests for the custom `rotary_embedding` operator have been removed
along with the operator itself. The correctness of the fallback to the
native `torch_npu` implementation is verified by existing CI tests for
attention layers and models that use rotary embeddings.
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
This pull request integrates comprehensive support for Mixture of
Experts (MoE) models on the Ascend 310P device within the vllm-ascend
framework. It achieves this by introducing specialized modules for
expert selection, fused MoE layers, and optimized all-gather
communication. The changes also refine existing NPU operations, making
them more consistent and efficient for 310P, ultimately enhancing the
performance and compatibility of MoE models on this hardware.
**Highlights**
- **310P MoE Support**: Introduces dedicated implementations for Mixture of
Experts (MoE) models on Ascend 310P devices, including new modules for
expert selection, fused MoE layers, and communication.
- **All-Gather Communication**: Enforces the use of ALLGATHER communication
for MoE operations on 310P, optimizing data transfer and leveraging
NPU-specific token dispatching.
- **Simplified NPU Operations**: Removes conditional type casting for
npu_swiglu and enables custom rotary embedding kernels unconditionally,
suggesting improved native support for 310P.
- **New MoE Classes Registered**: Registers AscendFusedMoE310 and
AscendSharedFusedMoE310 to integrate 310P-specific MoE layers into the
system's custom operation registry.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Offline test and server test with Qwen3-30B-A3B, TP/EP 4, on 310P.
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0
---------
Signed-off-by: pu-zhe <zpuaa@outlook.com>
### What this PR does / why we need it?
**Optimization:** Replaces fixed block sizes (128x128x128) in
`linear_persistent_kernel` with adaptive selection logic that considers:
- Matrix dimensions (M, N, K)
- Device NPU vector core count
- Data type (float32 vs others)
**Why:** Fixed block sizes lead to suboptimal hardware utilization
across different matrix shapes. Adaptive sizing maximizes occupancy and
memory efficiency for varied workload patterns, improving throughput for
batch-invariant linear operations in LLM inference.
**Details** (see the sketch after this list):
- Small matrices (M < 256): Size-proportional allocation
- Medium matrices (256 ≤ M < 1024): Balanced distribution based on grid
capacity
- Large matrices (M ≥ 1024): Optimized for dominant dimension
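A rough sketch of the tiered selection described above (thresholds taken from this description; the actual heuristics and names in the kernel launcher differ):
```python
def _next_pow2(x: int) -> int:
    return 1 << max(0, x - 1).bit_length()


def select_block_sizes(M: int, N: int, K: int, num_vector_cores: int,
                       is_fp32: bool):
    """Pick (BLOCK_M, BLOCK_N, BLOCK_K) instead of a fixed 128x128x128."""
    cap = 64 if is_fp32 else 128  # keep fp32 tiles smaller
    if M < 256:
        # Small M: size-proportional allocation.
        block_m = min(cap, _next_pow2(M))
    elif M < 1024:
        # Medium M: balance tiles against the available vector cores.
        block_m = min(cap, max(32, _next_pow2(M // max(1, num_vector_cores))))
    else:
        # Large M: let the dominant dimension take the full tile.
        block_m = cap
    block_n = min(cap, _next_pow2(N))
    block_k = min(cap, _next_pow2(K))
    return block_m, block_n, block_k
```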
### Does this PR introduce _any_ user-facing change?
No. This is a performance optimization. The API and numerical results
remain unchanged; only kernel execution efficiency improves.
### How was this patch tested?
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0
Signed-off-by: DDCHY <843049740@qq.com>
Signed-off-by: zjchenn <zjchenn@gmail.com>
Co-authored-by: DDCHY <843049740@qq.com>
### What this PR does / why we need it?
Refactor MLP weight prefetch to consistency with MoE Model's prefetching
in terms of code and usage.
Environments VLLM_ASCEND_ENABLE_PREFETCH_MLP,
VLLM_ASCEND_MLP_DOWN_PREFETCH_SIZE and
VLLM_ASCEND_MLP_GATE_UP_PREFETCH_SIZE is removed, usage as following:
--additional-config '{"weight_prefetch_config": { "enabled": true,
"prefetch_ratio": {"mlp": { "gate_up": 1.0, "down": 1.0} }}}'
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
---------
Signed-off-by: leo-pony <nengjunma@outlook.com>
### What this PR does / why we need it?
Add a new output for the expert token count.
An additional output tensor `expert_token_nums` is added to both operators
to meet the requirement of tracking token distribution among experts:
- Tensor name: expert_token_nums
- Dimension: 1D tensor
- Shape: (local_expert_num,)
- Data type: int32
- Semantics: the number of tokens actually received by each expert on the
current card.
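A small, purely illustrative sanity check of the new output's contract:
```python
import torch

local_expert_num = 16
# expert_token_nums is returned by the operators as a 1-D int32 tensor of
# length local_expert_num; element i is the number of tokens routed to local
# expert i on this card.
expert_token_nums = torch.zeros(local_expert_num, dtype=torch.int32)

assert expert_token_nums.dtype == torch.int32
assert expert_token_nums.shape == (local_expert_num,)
# The per-expert counts sum to the tokens actually received on this card.
tokens_received = int(expert_token_nums.sum())
```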
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
---------
Signed-off-by: guanguan0308 <1546542263@qq.com>
Signed-off-by: guanguan0308 <162653673+guanguan0308@users.noreply.github.com>
### What this PR does / why we need it?
Improve Triton stability on Ascend for large grids: set
`TRITON_ALL_BLOCKS_PARALLEL=1` when the grid size exceeds 65535.
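Conceptually, the guard looks like this sketch (the actual hook point in the Triton launch path may differ):
```python
import os


def maybe_enable_all_blocks_parallel(grid_size: int) -> None:
    # Work around instability on Ascend when the launch grid is very large.
    if grid_size > 65535:
        os.environ["TRITON_ALL_BLOCKS_PARALLEL"] = "1"
```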
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
### What this PR does / why we need it?
This PR upgrades the vLLM dependency from `v0.14.1` to `v0.15.0`. This
involves:
- Updating the `VLLM_TAG` in all Dockerfiles.
- Updating the vLLM version in `docs/source/conf.py`.
- Removing conditional code paths specific to `v0.14.1` across the
codebase, which simplifies maintenance.
- Fix `TypeError: MMEncoderAttention.__init__() got an unexpected
keyword argument 'multimodal_config'` due to
https://github.com/vllm-project/vllm/pull/31972.
- Fix `_shared_experts: 'NoneType' object is not callable` due to
https://github.com/vllm-project/vllm/pull/32082 by
https://github.com/vllm-project/vllm-ascend/pull/6335.
- Fix `ReshapeAndCacheOperation setup failed!` due to
https://github.com/vllm-project/vllm/pull/25954 by overriding attention
metadata slots.
This upgrade is necessary to keep the project aligned with the latest
features, bug fixes, and API changes in the vLLM project.
### Does this PR introduce _any_ user-facing change?
No, this is an internal dependency update and does not introduce any
user-facing changes.
### How was this patch tested?
CI is expected to pass with these changes, ensuring that all existing
tests are successful with the new vLLM version.
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
co-authored-by: shen-shanshan <467638484@qq.com>
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
### What this PR does / why we need it?
Change `self.head_size` to `self.rotary_dim`: only the rotary part is
processed here, so the dimension should be `rotary_dim`.
Fixes bug #6060
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Only a small section of code was modified to adjust the parameters, and
all standard tests were passed.
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
Signed-off-by: fengshi666 <fengshi666@adsl-99-12-210-25.dsl.hstntx.sbcglobal.net>
Co-authored-by: fengshi666 <fengshi666@adsl-99-12-210-25.dsl.hstntx.sbcglobal.net>
### What this PR does / why we need it?
PR https://github.com/vllm-project/vllm/pull/32082 in vLLM makes
Qwen3-MoE models also go through `SharedFusedMoE`, while the current
implementation of our `AscendSharedFusedMoE` assumes shared experts always
exist. This PR adds checks to `multistream_overlap_shared_expert` and
`multistream_overlap_gate` so that these features are only enabled when
shared experts exist.
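The added check is essentially of the following form (a sketch; the real code reads the flags from the Ascend config and the layer itself):
```python
def enable_multistream_overlap(shared_experts, overlap_shared_expert: bool,
                               overlap_gate: bool):
    # Qwen3-MoE now also routes through SharedFusedMoE, but without shared
    # experts, so both overlap features must be gated on their presence.
    has_shared = shared_experts is not None
    return overlap_shared_expert and has_shared, overlap_gate and has_shared
```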
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
All ci passed
- vLLM version: v0.14.1
- vLLM main:
dc917cceb8
Signed-off-by: whx-sjtu <2952154980@qq.com>
This PR fixes a numerical error in moe_load accumulation under ACL graph
mode on NPU: using `+=` on NPU tensors in graph mode does not raise an error
but produces incorrect values, so we replace it with the in-place `add_()`
method to ensure correct accumulation.
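In short, the accumulation changes along these lines (tensor names are illustrative):
```python
import torch

moe_load = torch.zeros(64, dtype=torch.int64)    # per-expert load counter
step_counts = torch.ones(64, dtype=torch.int64)  # counts from this step

# Under ACL graph mode, accumulating with `+=` was observed to yield
# incorrect values without raising an error, so the explicit in-place
# add_() is used instead:
moe_load.add_(step_counts)
```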
Signed-off-by: Mercykid-bash <ruanche0218@gmail.com>
### What this PR does / why we need it?
Currently, we pad the last dim of qkv to 128 before flash attention (in
`AscendMMEncoderAttention`) to get better performance on Ascend NPU.
However, the qkv padding is executed serially, which adds overhead from
launching `aclnnConstantPadNd` three times.
Since the three pads are mutually independent, we now stack q, k and v first
and pad them in a single kernel launch. With this optimization, **TTFT** has
been reduced by **3.15%** and **peak throughput** has been increased by
**4.20%**.
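The change amounts to something like the following sketch (shapes simplified; the real code lives in the MM encoder attention path):
```python
import torch
import torch.nn.functional as F


def pad_qkv_once(q, k, v, target_last_dim=128):
    """Pad the last dim of q/k/v to target_last_dim with one kernel launch
    instead of three separate pad calls."""
    pad = target_last_dim - q.shape[-1]
    qkv = torch.stack([q, k, v])  # one contiguous tensor
    qkv = F.pad(qkv, (0, pad))    # single pad launch
    return qkv.unbind(0)          # padded q, k, v


q = k = v = torch.randn(4, 8, 96)
q, k, v = pad_qkv_once(q, k, v)
assert q.shape[-1] == 128
```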
---
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Launch the server:
```bash
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--dtype bfloat16 \
--limit-mm-per-prompt '{"image": 1}' \
--max-model-len 16384 \
--max-num-batched-tokens 16384
```
Run benchmark:
```bash
vllm bench serve \
--model /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--backend openai-chat \
--endpoint /v1/chat/completions \
--dataset-name hf \
--hf-split train \
--dataset-path lmarena-ai/vision-arena-bench-v0.1 \
--num-prompts 1000 \
--no-stream
```
Before this PR:
```
============ Serving Benchmark Result ============
Successful requests: 1000
Failed requests: 0
Benchmark duration (s): 122.33
Total input tokens: 66638
Total generated tokens: 122845
Request throughput (req/s): 8.17
Output token throughput (tok/s): 1004.18
Peak output token throughput (tok/s): 3073.00
Peak concurrent requests: 1000.00
Total token throughput (tok/s): 1548.90
---------------Time to First Token----------------
Mean TTFT (ms): 51757.16
Median TTFT (ms): 44853.42
P99 TTFT (ms): 110700.14
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 226.06
Median TPOT (ms): 206.85
P99 TPOT (ms): 935.31
---------------Inter-token Latency----------------
Mean ITL (ms): 208.82
Median ITL (ms): 96.37
P99 ITL (ms): 2183.13
==================================================
```
After this PR:
```
============ Serving Benchmark Result ============
Successful requests: 1000
Failed requests: 0
Benchmark duration (s): 121.47
Total input tokens: 66638
Total generated tokens: 122860
Request throughput (req/s): 8.23
Output token throughput (tok/s): 1011.47
Peak output token throughput (tok/s): 3202.00
Peak concurrent requests: 1000.00
Total token throughput (tok/s): 1560.08
---------------Time to First Token----------------
Mean TTFT (ms): 50125.08
Median TTFT (ms): 46270.85
P99 TTFT (ms): 108107.12
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 227.11
Median TPOT (ms): 205.13
P99 TPOT (ms): 816.08
---------------Inter-token Latency----------------
Mean ITL (ms): 204.60
Median ITL (ms): 92.66
P99 ITL (ms): 2219.02
==================================================
```
- vLLM version: v0.14.0
- vLLM main:
d68209402d
Signed-off-by: shen-shanshan <467638484@qq.com>
### What this PR does / why we need it?
* Refactor the LayerNorm and activation operator classes to decouple the
310P device implementation from the main branch.
* Refactor `mm_encoder_attention` on 310P to use the
`torch_npu._npu_flash_attention_unpad` operator.
* Refactor the QKV inputs in the prefill stage of `attention_v1` on 310P
so they are no longer padded to 16× alignment.
* Refactor `model_runner` on 310P to align the KV-cache initialization
logic with the mainline implementation.
### Does this PR introduce _any_ user-facing change?
NO
### How was this patch tested?
use the e2e tests.
- vLLM version: v0.13.0
- vLLM main:
d68209402d
---------
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
### What this PR does / why we need it?
This PR replaces addRmsNorm and Add with addRmsNormBias, which leads to a
more efficient result.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Full Test Pass
- vLLM version: v0.13.0
- vLLM main:
2f4e6548ef
Signed-off-by: Chen_HaoWen <chenhaowen12@huawei.com>
Co-authored-by: Chen_HaoWen <chenhaowen12@huawei.com>
### What this PR does / why we need it?
Drop vLLM 0.13.0 support, upgrade to 0.14.0
- vLLM version: v0.13.0
- vLLM main:
d68209402d
---------
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
### What this PR does / why we need it?
When running the Qwen2.5-Omni-7B model on Ascend NPU, the engine fails
during the profiling/warmup stage with the following error:
`AclNN_Runtime_Error(EZ9903): rtKernelLaunchWithHandleV2 failed: 507035.
The vector core execution is abnormal.`
error log:
https://github.com/vllm-project/vllm-ascend/actions/runs/21144534911/job/60806765393#step:17:6412
This error is specifically triggered by the `triton_mrope` kernel when
handling the unique `mrope_section` configurations of the Omni model.
Other multimodal models with standard sections (e.g., [16, 24, 24]) or
standard LLMs work correctly with Triton.
Modified vllm_ascend/ops/rotary_embedding.py to add a conditional check
before calling forward_triton (see the sketch below):
1. For standard LLMs (`mrope_interleaved = True`), it continues to use
Triton for acceleration.
2. For complex configurations (like Qwen2.5-Omni, `mrope_interleaved =
False`), it now falls back to the native `super().forward_oot()` path,
which uses the stable torch_npu or PyTorch implementation.
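A minimal sketch of the guard (class name and import path are assumptions; the real code may differ):
```python
from vllm.model_executor.layers.rotary_embedding import MRotaryEmbedding


class AscendMRotaryEmbeddingSketch(MRotaryEmbedding):  # illustrative subclass
    def forward_oot(self, positions, query, key):
        # Standard configs (mrope_interleaved=True) keep the fast Triton
        # path; Qwen2.5-Omni-style configs (mrope_interleaved=False) fall
        # back to the stable torch_npu / PyTorch implementation.
        if self.mrope_interleaved:
            return self.forward_triton(positions, query, key)
        return super().forward_oot(positions, query, key)
```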
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main:
d68209402d
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: zzhx1 <zzh_201018@outlook.com>
### What this PR does / why we need it?
> Extracted from PR #5513
Based on the Sharded-CP feature PR:#4702;
RFC:https://github.com/vllm-project/vllm/issues/30055
### Support FULL_DECODE_ONLY Mode under PD-Mixed Scenario:
Extends DSA-CP to handle the FULL_DECODE_ONLY execution mode when
running in a prefill-decode mixed (PD-mixed) serving environment,
improving throughput and resource utilization for decode-intensive
workloads.
**In pure prefill nodes:**
- Both q_proj and o_proj are sharded across world ranks, using
**broadcast** for weights distribution.
**In PD-mixed nodes (supporting both prefill and decode):**
- q_proj is fully replicated (not sharded) to avoid communication
overhead during decoding.
- o_proj uses the original TP `RowParallelLinear` method to store its
weights.
**During prefill execution:**
- o_proj forwards through all_gather to collect weights, reconstructing
the complete o_proj weights on each card.
**During decode (graph replay phase):**
- Additional all_to_all (before o_proj) and reduce_scatter (after
o_proj) are introduced to enable sequence-parallel output aggregation
while maintaining correctness under SFA CP.
### benchmark:
- TTFT increased by **527%**
- TPOT increased by **180%**
<img width="1550" height="938" alt="image"
src="https://github.com/user-attachments/assets/9b7a03d8-a3db-4a99-8923-6e5bfcfecf72"
/>
### Does this PR introduce _any_ user-facing change?
None
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main:
2f4e6548ef
---------
Signed-off-by: zzhx1 <zzh_201018@outlook.com>
Signed-off-by: zzhxx <zhangzihang23@mails.ucas.ac.cn>
Co-authored-by: clrs97 <524936896@qq.com>
### What this PR does / why we need it?
1. Implement a **high-performance Triton custom kernel** for the rotary
position embedding (RoPE) operator on the **Ascend NPU** platform.
2. Fix critical bugs in the Triton RoPE kernel registration and invocation
process, including incorrect fake-impl function name matching, a wrong torch
ops namespace for the kernel call, a missing self parameter when fetching
cos/sin slices, and syntax errors in function type annotations.
3. Achieve a **major performance optimization** of the core RoPE operator:
single-inference latency is reduced from **57.1 μs** to **9 μs**, a
**6.34x** improvement and an **84.24%** latency reduction.
4. The RoPE operator is a **hot path** executed in every transformer layer
during LLM inference, so this optimization directly reduces overall
inference latency and improves the throughput of LLM serving on Ascend NPU.
5. Keep full backward compatibility: the Triton kernel is enabled only when
`HAS_TRITON=True` and automatically falls back to the original Ascend NPU
native implementation if Triton is not available, with no functional
regression.
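The availability gate in item 5 boils down to something like the following sketch (the dispatch function and implementation callables are illustrative, not the actual registration code):
```python
try:
    import triton  # noqa: F401
    HAS_TRITON = True
except ImportError:
    HAS_TRITON = False


def rope_forward(positions, query, key, *, triton_impl, native_impl):
    # Prefer the registered Triton kernel when Triton is importable;
    # otherwise fall back to the original Ascend NPU native path unchanged.
    impl = triton_impl if HAS_TRITON else native_impl
    return impl(positions, query, key)
```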
### Does this PR introduce _any_ user-facing change?
**NO**
- No changes to any public APIs, interfaces or inference behaviors of
vLLM.
- No impact on the text generation quality and correctness of the large
model.
- The optimization is transparent to end users, only the inference speed
(latency/throughput) is improved without any functional change.
### How was this patch tested?
1. **Environment Validation**: Tested on Ascend NPU platform with
vLLM-Ascend framework, Triton library installed and enabled
(`HAS_TRITON=True`).
2. **Kernel Registration Test**: Verified the Triton RoPE kernel
(`rope_forward_triton`) is successfully registered to
`torch.ops._C_ascend` namespace without any
`ValueError/NameError/SyntaxError`.
3. **Functional Correctness Test**: Run large model (GLM4/MoE) inference
on the Ascend NPU platform, the generated text content is **completely
correct** (no garbled text, no logical errors), consistent with the
original implementation.
4. **Performance Benchmark Test**: Measure the single execution latency
of the RoPE operator before/after optimization, confirm the latency is
stably reduced from 57.1 μs to 9 μs, the performance gain is valid and
stable.
5. **Fallback Mechanism Test**: Manually disable Triton
(`HAS_TRITON=False`), verify the code correctly falls back to the
original Ascend NPU native RoPE implementation, no service crash and
normal inference.
6. **Compatibility Test**: Test with different tensor shapes/sizes of
query/key, all cases work correctly with the Triton kernel, no shape
mismatch error.
- Operator supplied by Hexiang Wang
- vLLM version: v0.13.0
- vLLM main:
11b6af5280
---------
Signed-off-by: ZCG12345 <2097562023@qq.com>
### What this PR does / why we need it?
Update causal_conv1d_update ops for better perf.
- vLLM version: v0.13.0
- vLLM main:
2c24bc6996
---------
Signed-off-by: SunnyLee219 <3294305115@qq.com>
### What this PR does / why we need it?
In [PR 5040](https://github.com/vllm-project/vllm-ascend/pull/5040), the
`dispatch_gmm_combine_decode` operator was configured with an incorrect
`global_bs` parameter. This PR fixes that bug.
The `global_bs` provided as input should have the same meaning as in the
`moe_distributed_dispatch` operator, specifically: (the maximum batch size
across all cards) * (expert parallel world size).
However, the implementation incorrectly used the variable `max_num_tokens`,
which does not account for tensor parallelism. This error likely resulted in
an unnecessarily large (overestimated) value.
More info about this operator, please refer to RFC: issue
https://github.com/vllm-project/vllm-ascend/issues/5476
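As a concrete illustration of the intended value (variable names and numbers are assumed for the example):
```python
# global_bs must match the semantics of moe_distributed_dispatch:
#   (max batch size on any single card) * (expert-parallel world size)
max_bs_per_card = 32   # example: largest per-card batch in this step
ep_world_size = 16     # expert-parallel world size (e.g. a single A3 node)

global_bs = max_bs_per_card * ep_world_size  # 512

# The bug used max_num_tokens, which ignores tensor parallelism and
# therefore overestimates global_bs.
```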
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Accuracy test: Qwen3-235B with EPLB on a single A3 node (EP16), with
dispatch_gmm_combine_decode enabled.
| dataset | version | metric | mode | vllm-api-stream-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 80.00 |
- vLLM version: v0.13.0
- vLLM main:
11b6af5280
Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
### What this PR does / why we need it?
Add a layernormFn Triton op for the Qwen3-Next model for better performance.
<img width="248" height="526" alt="image"
src="https://github.com/user-attachments/assets/27b47157-5df5-4db1-aa88-1dae799b2bf6"
/>
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main:
2f4e6548ef
---------
Signed-off-by: SunnyLee219 <3294305115@qq.com>
### What this PR does / why we need it?
Add an additional-config parameter to control whether the gmmswigluquant
fusion operator is enabled; it defaults to True. When running with a small
number of cards, the gmmswigluquant fused operator can cause some
performance degradation.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main:
2c24bc6996
#### Perf
Test model: GLM 4.6 (w8a8)
- single A3 node (ep16, tp16), async-scheduling, mtp, FULL_DECODE_ONLY
- bs=1, input_lens=32000, output_lens=1024
Without this PR: TPOT 32.22 ms
With this PR: TPOT 30.23 ms
---------
Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>