### What this PR does / why we need it?
PR #8618 renamed `VLLM_NIXL_ABORT_REQUEST_TIMEOUT` to
`VLLM_MOONCAKE_ABORT_REQUEST_TIMEOUT` and simultaneously reduced the
timeout value from 300000 to 480 seconds in the nightly test configs.
The 480s value is far too short for heavy multi-node workloads (DeepSeek
V3/R1 under W8A8 + EP), causing [spurious abort-request
timeouts](https://github.com/vllm-project/vllm-ascend/actions/runs/25067539406/job/73441223206)
in CI.
This PR restores the timeout value to the original 300000 to fix the
nightly test failures introduced by #8618.
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
### What this PR does / why we need it?
Fix a bug in the custom op CI `conftest.py`: some test cases fail or are
skipped, leading to incorrect time retrieval.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
nightly
Signed-off-by: ZT-AIA <1028681969@qq.com>
### What this PR does / why we need it?
This PR fixes various documentation issues and improves code examples
throughout the project.
Signed-off-by: MrZ20 <2609716663@qq.com>
### What this PR does / why we need it?
This PR introduces a mechanism to track test duration in `conftest.py`
and skip subsequent tests in a file if a certain number of tests exceed
a timeout threshold. This is intended to prevent CI hangs or
long-running nightly tests. Additionally, it reduces the parameter space
for `test_fused_qkvzba_split_reshape_cat.py` to further optimize CI
runtime.
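A minimal sketch of how such a guard can be wired up in `conftest.py` (the threshold values and counter names below are illustrative, not the actual implementation):

```python
# conftest.py -- illustrative sketch of duration tracking + skip-on-slow-tests.
import time
from collections import defaultdict

import pytest

TIMEOUT_S = 120        # assumed per-test "slow" threshold
MAX_SLOW_TESTS = 3     # assumed number of slow tests tolerated per file

_slow_counts: dict = defaultdict(int)


def pytest_runtest_setup(item):
    # Skip the remaining tests in a file once too many of its tests ran long.
    if _slow_counts[str(item.fspath)] >= MAX_SLOW_TESTS:
        pytest.skip("too many slow tests in this file; skipping the rest")


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    start = time.monotonic()
    yield
    if time.monotonic() - start > TIMEOUT_S:
        _slow_counts[str(item.fspath)] += 1
```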
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
nightly
Signed-off-by: ZT-AIA <1028681969@qq.com>
Cherry-pick https://github.com/vllm-project/vllm-ascend/pull/8683
### What this PR does / why we need it?
This PR relaxes the TTFT threshold from `0.4` to `0.5` to improve
robustness under Data Parallel (DP) load imbalance.
#### Background
The current assertion enforces `prefix75 < prefix0 * 0.4`.
#### ❌ Nightly Failure Cases (Observed)
| prefix0 | threshold (0.4x) | prefix75 | delta |
|--------|------------------|----------|--------|
| 4696.24 | 1878.50 | 1883.99 | +5.49 |
| 4696.20 | 1878.48 | 1896.01 | +17.53 |
| 4636.73 | 1854.69 | 1902.48 | +47.79 |
| 4655.17 | 1862.07 | 1913.54 | +51.47 |
| 4685.35 | 1874.14 | 1919.36 | +45.22 |
| 4660.33 | 1864.13 | 1915.41 | +51.28 |
| 4648.30 | 1859.32 | 1950.50 | +91.18 |
| 4655.30 | 1862.12 | 1962.32 | +100.20 |
---
#### ✅ Nightly Passing Cases (Observed)
| prefix0 | threshold (0.4x) | prefix75 | margin |
|--------|------------------|----------|---------|
| 4685.64 | 1874.26 | 1864.46 | -9.80 |
| 5520.28 | 2208.11 | 1928.97 | -279.14 |
| 4639.23 | 1855.69 | 1846.86 | -8.83 |
| 4651.64 | 1860.66 | 1854.30 | -6.36 |
| 4640.39 | 1856.15 | 1840.32 | -15.83 |
| 4677.20 | 1870.88 | 1848.35 | -22.53 |
---
#### Key Observations
- Failures exceed the threshold by only **~5 ms to ~100 ms (~0.3%–5%)**
- Passing cases often have **very tight margins (~5–10 ms)**
- There is clear **overlap between pass and fail boundaries**
- Many failures are **borderline violations**, not real regressions
---
#### Root Cause
The instability is caused by **Data Parallel (DP) load imbalance**,
which introduces systematic variance:
- Uneven request distribution across workers
- Queueing delays
- Increased TTFT variance (especially for `prefix75`)
---
#### Conclusion
- The current threshold (`0.4x`) is **too strict**
- Observed natural fluctuation:
- Absolute: up to ~100 ms
- Relative: up to ~5% over threshold
- Pass/fail boundary is currently **too sensitive to runtime jitter**
---
#### Change
We relax the threshold: **0.4 → 0.5**
This adjustment:
- Accounts for expected runtime variance
- Reduces false negatives
- Maintains a meaningful performance constraint
Even with `0.5`, the requirement remains strict (`prefix75 < 50% of
prefix0`) and does not mask real regressions.
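For reference, the relaxed assertion has roughly this shape (variable names are illustrative, not the actual test code):

```python
TTFT_RATIO = 0.5  # relaxed from 0.4


def check_prefix_cache_ttft(prefix0_ttft_ms: float, prefix75_ttft_ms: float) -> None:
    # TTFT with a 75% prefix-cache hit rate must stay well below the no-hit baseline.
    assert prefix75_ttft_ms < prefix0_ttft_ms * TTFT_RATIO, (
        f"prefix75 TTFT {prefix75_ttft_ms:.2f} ms exceeds "
        f"{TTFT_RATIO} x prefix0 TTFT {prefix0_ttft_ms:.2f} ms")
```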
---
### Does this PR introduce _any_ user-facing change?
No.
This change only affects internal test assertions and does not impact
user-facing behavior or model performance.
---
### How was this patch tested?
- Verified against existing TTFT test cases:
- Previously failing cases (due to small variance) now pass
- No regressions observed in other scenarios
- Confirmed that failures were due to DP load imbalance rather than
actual performance degradation
- Ensured the updated threshold still enforces a meaningful constraint
on TTFT
Signed-off-by: underfituu <hzhucong@163.com>
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
<!--
- Please clarify what changes you are proposing. The purpose of this
section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster
reviews in your PR.
- Please clarify why the changes are needed. For instance, the use case
and bug description.
- Fixes #
-->
#### Fixed:
1. The function name in `test_moe_init_routing_custom.py` is incorrect: it
does not start with `test`, so it is not collected as a test case.
2. In the nightly singlecard_ops tests, add timestamp printing for test
cases, making it easier to locate issues quickly after a timeout occurs.
#### To be repaired:
1. The `test_penality.py` test case partially fails and takes one hour.
The owner has been notified to fix the case after the May 1 holiday.
——Yang Cheng
2. The `csrc/copy_and_expand_eagle_inputs` operator invoked by
`test_copy_and_expand_eagle_inputs.py` supports only 910B. ——HF001
3. The `test_causal_conv1d.py` test case is incorrect. The Triton operator
`causal_conv1d_fn` invoked by the test uses `get_forward_context`, but the
test case does not call `set_forward_context` (which is normally set up in
the model). ——Zeng Tian
4. The `test_causal_conv1d.py` case is also incorrect because a UB
overflow occurs when the Triton operator is invoked. ——Zeng Tian
### Does this PR introduce _any_ user-facing change?
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
no
### How was this patch tested?
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
nightly
Signed-off-by: ZT-AIA <1028681969@qq.com>
### What this PR does / why we need it?
Skip `test_copy_and_expand_eagle_inputs` for now; it can be restored after
the owner completes the subsequent fixes.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
nightly
Signed-off-by: ZT-AIA <1028681969@qq.com>
### What this PR does / why we need it?
This PR introduces a caching mechanism for CPU-based `torch.Generator`
objects in the `_random_sample_310p` function to optimize sampling
performance. It includes unit tests for cache persistence and state
recovery. Feedback highlights a critical bug where keying the cache by
batch index instead of generator ID can break RNG reproducibility during
request re-scheduling, and notes a potential memory leak in the global
cache.
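A minimal sketch of the kind of generator cache this describes, keyed by seed rather than batch index as the review feedback recommends (names are illustrative; the real cache lives in the 310P sampler code):

```python
import torch

_generator_cache: dict = {}  # seed -> torch.Generator (CPU)


def get_cpu_generator(seed: int) -> torch.Generator:
    """Return a cached CPU torch.Generator for this seed, creating it on first use."""
    gen = _generator_cache.get(seed)
    if gen is None:
        gen = torch.Generator(device="cpu")
        gen.manual_seed(seed)
        _generator_cache[seed] = gen
    return gen
```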
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Tested via new unit tests in `tests/ut/_310p/sample/test_sampler_310.py`
verifying cache logic and error handling.
---------
Signed-off-by: csoulnd <daidaicurry@foxmail.com>
### What this PR does / why we need it?
Clean backport of #8582 (validate PD-mode feature gates; no fused MC2) to
v0.18.0.
---------
Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
### What this PR does / why we need it?
Fix documentation errors and non-standard descriptions in the
releases/v0.18.0 branch.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Documentation check.
---------
Signed-off-by: linfeng-yuan <1102311262@qq.com>
### What this PR does / why we need it?
This backports the forced-tool-choice `content=None` guard to the
`releases/v0.18.0` compatibility layer.
Upstream vLLM still has forced named tool-choice branches that assert
`content is not None` after reasoning extraction. Some reasoning parsers
can legally consume the full output and return `(reasoning, None)`,
which makes the assert reachable and can surface as a server-side
failure.
This PR follows the same compatibility-patch pattern used by:
- `7314bbe2` fix(platform): reimplement MiniMax usage accounting patch
(#7835)
- `f83cb0e6` [Bugfix][Platform] Fix GLM47 tool-call finish backfill
(#7710)
The patch is intentionally narrow:
- normalize `content=None` to `""` only for forced named tool choice
- patch both chat-completions and responses parser entry points
- keep the rest of upstream behavior unchanged
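For illustration, the normalization in the first bullet can be pictured like this (a hypothetical helper, not the actual patch code):

```python
from typing import Optional, Tuple


def normalize_forced_tool_content(
        reasoning: Optional[str], content: Optional[str],
        forced_named_tool_choice: bool) -> Tuple[Optional[str], Optional[str]]:
    # Some reasoning parsers legally return (reasoning, None); upstream asserts
    # `content is not None` in the forced named tool-choice branch, so map None -> "".
    if forced_named_tool_choice and content is None:
        content = ""
    return reasoning, content
```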
Upstream tracking:
- issue: vllm-project/vllm#40147
- PR: vllm-project/vllm#40148
### Does this PR introduce _any_ user-facing change?
Yes.
Forced named tool choice becomes robust when the reasoning parser
returns no post-reasoning content, avoiding an internal assertion
failure and emitting an empty-argument function call instead.
### How was this patch tested?
Unit tests:
```bash
pytest -sv tests/ut/patch/platform/test_patch_tool_choice_none_content.py \
tests/ut/patch/platform/test_patch_glm_tool_call_parser.py \
tests/ut/patch/platform/test_patch_minimax_usage_accounting.py
```
Result: 22 passed.
---------
Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Co-authored-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
### What this PR does / why we need it?
This PR renames the environment variable VLLM_NIXL_ABORT_REQUEST_TIMEOUT
to VLLM_MOONCAKE_ABORT_REQUEST_TIMEOUT to align with the Mooncake
connector naming convention. It also updates the documentation and test
configurations to reflect this change and adjusts the suggested timeout
value in the documentation to 480 seconds for consistency.
### Does this PR introduce _any_ user-facing change?
Yes. The environment variable for configuring the abort request timeout
has been renamed. Users should update their environment settings from
VLLM_NIXL_ABORT_REQUEST_TIMEOUT to VLLM_MOONCAKE_ABORT_REQUEST_TIMEOUT.
### How was this patch tested?
The changes were verified by updating the corresponding test
configuration files and ensuring consistency across the documentation.
---------
Signed-off-by: herizhen <1270637059@qq.com>
Signed-off-by: herizhen <59841270+herizhen@users.noreply.github.com>
### What this PR does / why we need it?
This PR introduces stricter Ascend `additional_config.layer_sharding`
validation on the 0.18 release branch so it is only accepted on
PD-disaggregated P nodes with `kv_role="kv_producer"`.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E test
---------
Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
Backport of #7882 to releases/v0.18.0. Adds aime2025 benchmark test for
DeepSeek-V3.2-W8A8 EP with disaggregated prefill on A3 (4-node, 16 NPUs
per node, accuracy benchmark baseline 66.67%).
Signed-off-by: guozr <guozr1997@hotmail.com>
Co-authored-by: guozr <guozr1997@hotmail.com>
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
Update CI for GLM-5 configuration on vllm-ascend/releases/v0.18.0 branch
Test GLM5-W4A8 on the 0.18.0 release.
### Does this PR introduce _any_ user-facing change?
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
### How was this patch tested?
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
---------
Signed-off-by: yangjiuhua <y00845194@china.huawei.com>
Co-authored-by: yangjiuhua <y00845194@china.huawei.com>
### What this PR does / why we need it?
The env `VLLM_ASCEND_ENABLE_FUSED_MC2` should only be enabled on the
decode node in the Prefill-Decode disaggregation scenario.
---------
Signed-off-by: wangli <wangli858794774@gmail.com>
### What this PR does / why we need it?
Switch Ascend conv3d `forward_oot` to use `forward_native` and add unit tests.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
By CI
---------
Signed-off-by: zouyizhou <zouyizhou@huawei.com>
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
- Enforce recompute scheduler only in PD-disaggregated mode.
- Enforce balance scheduling only in PD-mixed mode.
- Enforce fused MC2 only on PD-disaggregated D-side (kv_consumer).
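A rough sketch of the kind of gating being enforced (function and flag names are illustrative, not the actual config keys):

```python
from typing import Optional


def validate_pd_feature_gates(kv_role: Optional[str], use_recompute_scheduler: bool,
                              use_balance_scheduling: bool, enable_fused_mc2: bool) -> None:
    pd_disaggregated = kv_role in ("kv_producer", "kv_consumer")
    pd_mixed = kv_role == "kv_both"
    if use_recompute_scheduler and not pd_disaggregated:
        raise ValueError("recompute scheduler requires PD-disaggregated mode")
    if use_balance_scheduling and not pd_mixed:
        raise ValueError("balance scheduling requires PD-mixed mode")
    if enable_fused_mc2 and kv_role != "kv_consumer":
        raise ValueError("fused MC2 requires the PD-disaggregated D-side (kv_consumer)")
```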
<!--
- Please clarify what changes you are proposing. The purpose of this
section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster
reviews in your PR.
- Please clarify why the changes are needed. For instance, the use case
and bug description.
- Fixes #
-->
### Does this PR introduce _any_ user-facing change?
No
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
### How was this patch tested?
By ci
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
---------
Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
### What this PR does / why we need it?
This PR backports the DSA-CP PD role gating fix to `releases/v0.18.0`.
The existing helper logic on the release branch does not handle the PD
mixed-role case correctly when deciding whether layer sharding or TP
`o_proj` handling should be enabled. Layer sharding should only run on
the P-side instance, while TP `o_proj` handling should stay enabled for
normal non-PD deployments and for the PD mixed-role (`kv_both`)
instance. This patch makes those conditions explicit and adds unit
coverage for the allowed and disallowed combinations, including the
DSA-CP-disabled path.
This wrong condition led to **vllm serve failures** in the **FC1 +
PD-colocated KV pooling + no layer_sharding** case, specifically causing:
1. Insufficient available KV cache memory
2. An `o_proj` shape error in the `sfa_v1` attention module
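The intended gating can be pictured roughly as follows (helper and field names are illustrative, not the release-branch code):

```python
from typing import Optional


def should_enable_layer_sharding(kv_role: Optional[str], layer_sharding_requested: bool) -> bool:
    # Layer sharding only runs on the PD-disaggregated P-side instance.
    return layer_sharding_requested and kv_role == "kv_producer"


def should_handle_tp_o_proj(kv_role: Optional[str]) -> bool:
    # TP o_proj handling stays enabled for normal non-PD deployments
    # and for the PD mixed-role (kv_both) instance.
    return kv_role is None or kv_role == "kv_both"
```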
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E test with dsv32 + FC1 + FULL_DECODE_ONLY +
kv_transfer_config(kv_both) + no layer_sharding
---------
Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
### What this PR does / why we need it?
This PR improves the readability of the documentation by fixing typos,
correcting command extensions, and fixing broken links in the Chinese
README.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Documentation changes only.
---------
Signed-off-by: sunshine202600 <sunshine202600@163.com>
### What this PR does / why we need it?
This PR implements the `AscendW8A8DynamicLinearMethod310` quantization
scheme specifically for 310P hardware. It includes the logic for weight
retrieval, per-channel parameter generation, and the application of
dynamic quantization using NPU-specific kernels. Additionally, it
updates `ShardedStateLoader310` to handle quantization configurations
more robustly when generating parameter type maps.
Feedback from the review identified two critical issues in the
implementation:
1. The tensor squeezing logic in the `apply` method incorrectly handles
2D inputs, which may lead to shape mismatches in subsequent layers.
2. The weight tensor in `process_weights_after_loading` is transposed
after being converted to the private NZ format; the transpose operation
should be performed on the ND tensor before conversion to ensure correct
physical layout.
Cherry-picked from #7546 and #7725.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
New unit tests were added in
`tests/ut/_310p/quantization/test_w8a8_dynamic_310.py` to verify the
quantization method, and
`tests/ut/_310p/test_sharded_state_loader_310p.py` was updated to test
the state loader changes.
---------
Signed-off-by: csoulnd <daidaicurry@foxmail.com>
### What this PR does / why we need it?
This PR backports the CPU binding locale normalization fix from #7274 to
`releases/v0.18.0`, including the follow-up review fixes already applied
on `main`.
The change forces `LC_ALL`, `LANG`, and `LC_MESSAGES` to `C` before
spawning subprocesses in `vllm_ascend.cpu_binding.execute_command()`, so
parser-dependent command output stays stable on localized systems. It
also handles `subprocess.TimeoutExpired` by killing the child process
before collecting output, and updates the existing unit tests to keep
command-argument coverage while adding timeout-path coverage.
Fixes #6992
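The core of the change can be sketched as follows (simplified; the real helper is `vllm_ascend.cpu_binding.execute_command()`):

```python
import os
import subprocess


def execute_command(args, timeout: float = 10.0) -> str:
    """Run a command with a C locale and clean up the child on timeout (sketch)."""
    env = dict(os.environ)
    # Force an English/C locale so parser-dependent output stays stable
    # on localized systems.
    env.update({"LC_ALL": "C", "LANG": "C", "LC_MESSAGES": "C"})
    proc = subprocess.Popen(args, env=env, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Kill the child before collecting whatever output it produced.
        proc.kill()
        out, _ = proc.communicate()
    return out
```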
### Does this PR introduce _any_ user-facing change?
Yes.
Users running CPU binding on non-English OS environments should now get
consistent English subprocess output for parser-dependent commands,
avoiding failures caused by inherited locale settings.
### How was this patch tested?
- Updated the existing unit tests in
`tests/ut/device_allocator/test_cpu_binding.py` to assert the locale
environment, retain command argument coverage, and cover the timeout
cleanup path.
- Attempted to run targeted pytest cases locally, but the pytest
invocation did not complete normally in this environment, so I could not
record a clean passing run here.
Attribution:
- Co-authored-by: stdjhs <1601599324@qq.com>
- Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: stdjhs <1601599324@qq.com>
### What this PR does / why we need it?
Fix a bug in the GLM tool call parser where the `function.name` field
was incorrectly included in the final (non-first) chunks of streaming
tool calls.
Per OpenAI streaming semantics, `id`, `type`, and `function.name` must
only appear in the **first** chunk for a given tool call index. When
`_create_remaining_args_delta` was called for continuing/finishing
chunks, it was incorrectly reading the function name from
`delta_message.tool_calls` and re-emitting it, causing clients to see a
duplicate/extra function name in the final chunk.
**Root cause**: The original code always looked up the tool call in
`delta_message.tool_calls` to get the name, id, and type — even when
this was not the first chunk being streamed. This caused the function
name to appear again in the final argument-completion chunk.
**Fix**:
- Track whether arguments have already been streamed
(`already_streamed_args`) for each tool call index.
- Only populate `fallback_tool_call_id`, `fallback_tool_call_type`, and
`fallback_tool_call_name` when `already_streamed_args` is empty (i.e.,
this is genuinely the first chunk).
- Refactored `_create_remaining_args_delta` to omit header fields
entirely when all fallback values are `None`, which is the correct
behavior for continuing/finishing chunks.
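Conceptually, the header fields are attached only on the first chunk for a given tool-call index, roughly like this (a simplified sketch, not the actual parser code):

```python
already_streamed_args: dict = {}  # tool-call index -> argument text streamed so far


def build_tool_call_chunk(index: int, call_id: str, name: str, args_delta: str) -> dict:
    chunk = {"index": index, "function": {"arguments": args_delta}}
    if not already_streamed_args.get(index):
        # Only the very first chunk for this index carries id/type/function.name.
        chunk["id"] = call_id
        chunk["type"] = "function"
        chunk["function"]["name"] = name
    already_streamed_args[index] = already_streamed_args.get(index, "") + args_delta
    return chunk
```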
### Does this PR introduce _any_ user-facing change?
Yes. Clients consuming the streaming tool call response will no longer
receive a duplicate `function.name` in the final chunk. This fixes
incorrect behavior visible in the OpenAI-compatible streaming API output
for GLM models using tool calls.
### How was this patch tested?
- Code review and logic analysis of the streaming tool call path in
`patch_glm_tool_call_parser.py`.
- Existing unit tests in
`tests/ut/platform/test_patch_glm_tool_call_parser.py`.
---------
Signed-off-by: chen-weipeng12 <chen-weipeng12@noreply.gitcode.com>
Signed-off-by: chenweiqiang11 <chenweiqiang11@noreply.github.com>
Co-authored-by: chen-weipeng12 <chen-weipeng12@noreply.gitcode.com>
### What this PR does / why we need it?
Adds a `check_rank0_process_count` validation step to the
DeepSeek-R1-W8A8-HBM nightly single-node test.
The check verifies that after the server starts, there is **exactly 1**
`vllm serve` process running on rank0. This guards against the
regression fixed in #8041 (extra NPU context leaking on device 0),
ensuring it does not silently reappear in future releases.
#### Changes
- **`tests/e2e/nightly/single_node/models/scripts/test_single_node.py`**:
Add a `run_check_rank0_process_count` async handler. It calls `npu-smi
info` for diagnostics, then uses `psutil` to assert that exactly one `vllm
serve` process exists on rank0 (see the sketch below).
- **`tests/e2e/nightly/single_node/models/configs/DeepSeek-R1-W8A8-HBM.yaml`**:
Register `check_rank0_process_count` in the `test_content` list for the
HBM test case.
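A minimal sketch of such a check, assuming `psutil` is available on the runner (the helper name and matching logic are illustrative, not the exact test code):

```python
import psutil


def check_rank0_process_count(expected: int = 1) -> None:
    """Assert that exactly one `vllm serve` process is running on this node."""
    count = 0
    for proc in psutil.process_iter(attrs=["cmdline"]):
        cmd = " ".join(proc.info.get("cmdline") or [])
        if "vllm serve" in cmd:
            count += 1
    assert count == expected, (
        f"expected {expected} `vllm serve` process(es) on rank0, found {count}")
```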
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
### What this PR does / why we need it?
Introduce a check to avoid asynchronous communication in the
`enable_dsa_cp_with_layer_shard` branch when in capturing mode. This
change prevents potential stream and event issues when operating in
graph/capturing mode, ensuring safer communication practices.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E test with dsv32 + FC1 + FULL_DECODE_ONLY +
kv_transfer_config(kv_both)
---------
Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
### What this PR does / why we need it?
Fix the nightly pip binary install doc test failure.
### Does this PR introduce _any_ user-facing change?
NA
### How was this patch tested?
Nightly doc test
Signed-off-by: leo-pony <nengjunma@outlook.com>
### What this PR does / why we need it?
This PR fixes and simplifies the CI configuration for Qwen3 32B.
The main changes are:
- Remove the redundant `Qwen3-32B-Int8-A3-Feature-Stack3.yaml` config
and consolidate the CI setup into `Qwen3-32B-Int8.yaml`.
- Improve runtime stability by adding
`PYTORCH_NPU_ALLOC_CONF=expandable_segments:True` and setting
`--max-num-seqs 80`.
- Update the accuracy benchmark from `aime2024` to `gsm8k-lite`, and
adjust the related dataset config, output length, baseline, and
threshold accordingly.
These changes make the Qwen3 32B CI easier to maintain and more stable
in nightly validation.
---------
Signed-off-by: ZYang6263 <zy626375@gmail.com>
This pull request reverts the previous switch to FIA and instead
implements npu_ring_mla for MLA prefill operations (#5704). The change
streamlines the attention mechanism by removing unnecessary metadata
tracking and updating the underlying NPU operations to use the
ring-based MLA kernel. This adjustment ensures better compatibility and
performance for MLA prefill tasks within the vLLM Ascend backend.
Highlights
- Migration to npu_ring_mla: Replaced the usage of
npu_fused_infer_attention_score (FIA) with npu_ring_mla for MLA prefill
operations across the codebase to improve performance and alignment with
the intended architecture.
- Cleanup of redundant metadata: Removed
chunk_actual_seq_lengths_kv_list and actual_seq_lengths_q from various
metadata structures as they are no longer required for the updated
attention implementation.
- Test suite updates: Updated unit tests in test_mla_cp.py and
test_mla_v1.py to mock npu_ring_mla instead of the deprecated FIA
functions and adjusted test assertions to reflect the new implementation
details.
Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
<!--
- Please clarify what changes you are proposing. The purpose of this
section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster
reviews in your PR.
- Please clarify why the changes are needed. For instance, the use case
and bug description.
- Fixes #
-->
Cherry-pick from https://github.com/vllm-project/vllm-ascend/pull/7468
- Fix TTFT ratio threshold from 0.8 to 0.4 for prefix cache benchmarks
- Fix max_out_len values for warm_up and benchmark configs
- Applied to both DeepSeek-R1-0528-W8A8 and Qwen3-32B-Int8 configs
### Does this PR introduce _any_ user-facing change?
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
### How was this patch tested?
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
Signed-off-by: underfituu <hzhucong@163.com>
backport of #7474
This PR adds C8 (INT8) KV cache quantization support for standard GQA
attention models (e.g., Qwen3-32B W8A8C8). C8 uses static per-channel
quantization scales to store KV cache in INT8, reducing KV cache memory
by ~50% compared to BF16, enabling higher batch concurrency and longer
context lengths on the same hardware.
**Key changes:**
1. **`attention_v1.py`** — New `AscendC8AttentionBackendImpl` subclass
of `AscendAttentionBackendImpl`:
- `_prepare_c8_scales`: Shards per-channel scales/offsets to the current
TP rank and pre-computes BF16 BNSD-shaped antiquant tensors (one-time
per layer).
- `_quantize_kv_to_int8`: Quantizes BF16 K/V to INT8 before
`reshape_and_cache`, using pre-cached inverse scales.
- `_forward_c8_decode`: FIA V1 BNSD paged attention with native INT8 KV
and `perchannel` antiquant mode.
- `_forward_c8_chunked_prefill`: Splits decode (FIA V1 BNSD paged INT8)
and prefill (FIA V1 TND float) into two kernel calls.
- `_forward_c8_fused_infer_attention`: Handles `PrefillNoCache` and
`PrefillCacheHit` states.
2. **`quantization/methods/kv_c8.py`** — New
`AscendC8KVCacheAttentionMethod` scheme:
- Creates `k/v_cache_scale/offset` parameters via
`_c8_kv_scale_weight_loader`, which handles per-channel scale shapes and
lazy resizing.
- Sets `layer.kv_cache_torch_dtype = torch.int8` so
`get_kv_cache_spec()` returns INT8 dtype automatically.
- Upgrades `layer.impl` to `AscendC8AttentionBackendImpl` via class
surgery.
3. **`quantization/modelslim_config.py`** — C8 branch in
`get_quant_method()` activates when `kv_cache_type == "C8"` in
`quant_model_description.json`.
4. **`patch/worker/patch_qwen3_c8.py`** — Intercepts per-channel C8
scale/offset weights before `AutoWeightsLoader` discards them, routing
them to the parameters created by `AscendC8KVCacheAttentionMethod`.
5. **`tests/ut/quantization/test_kv_c8.py`** — Unit tests covering
`_c8_kv_scale_weight_loader`, `AscendC8KVCacheAttentionMethod`, and
`AscendC8AttentionBackendImpl` scale helpers.
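The per-channel quantization step performed by `_quantize_kv_to_int8` can be pictured with this simplified torch sketch (a reference illustration, not the NPU kernel path):

```python
import torch


def quantize_kv_to_int8(kv: torch.Tensor, scale: torch.Tensor,
                        offset: torch.Tensor) -> torch.Tensor:
    """Per-channel INT8 quantization of a BF16 K or V tensor (reference sketch).

    kv:     [num_tokens, num_kv_heads * head_dim], bfloat16
    scale:  per-channel scale, broadcastable to kv's last dimension
    offset: per-channel zero point, same shape as scale
    """
    q = torch.round(kv.float() / scale + offset)
    return q.clamp(-128, 127).to(torch.int8)
```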
### Does this PR introduce _any_ user-facing change?
Yes. Users can now serve Qwen3-32B W8A8C8 quantized models with INT8 KV
cache on Ascend NPU. The model checkpoint must contain a
`quant_model_description.json` with `"kv_cache_type": "C8"` and
per-channel scale/offset tensors in safetensors.
No changes to the serving CLI — the feature activates automatically when
the quantization config is detected.
### How was this patch tested?
Benchmarked with `vllm serve` (TP=8, `max_num_seqs=256`,
`max_model_len=131072`, `enable_chunked_prefill=true`) + `random_bench`
(input_len=10240, output_len=2048, 960 prompts, max_concurrency=192):
```
============ Serving Benchmark Result ============
Successful requests: 960
Failed requests: 0
Maximum request concurrency: 192
Benchmark duration (s): 1359.81
Total input tokens: 9830400
Total generated tokens: 1966080
Request throughput (req/s): 0.71
Output token throughput (tok/s): 1445.85
Peak output token throughput (tok/s): 2304.00
Total token throughput (tok/s): 8675.12
---------------Time to First Token----------------
Mean TTFT (ms): 24598.51
Median TTFT (ms): 23167.02
P50 TTFT (ms): 23167.02
P90 TTFT (ms): 47717.08
P99 TTFT (ms): 84402.61
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 120.76
Median TPOT (ms): 121.50
P50 TPOT (ms): 121.50
P90 TPOT (ms): 127.05
P99 TPOT (ms): 130.13
---------------Inter-token Latency----------------
Mean ITL (ms): 120.70
Median ITL (ms): 90.34
P50 ITL (ms): 90.34
P90 ITL (ms): 93.79
P99 ITL (ms): 101.80
==================================================
```
All attention states verified: `PrefillNoCache`, `PrefillCacheHit`,
`ChunkedPrefill`, `DecodeOnly`.
- vLLM version: v0.17.0
- vLLM main:
8b6325758c
Signed-off-by: lico67373 <918688502@qq.com>
Co-authored-by: LICO67373 <110013619+LICO1314@users.noreply.github.com>
### What this PR does / why we need it?
This PR adds 3 accuracy/performance cases on A3 (Qwen3.5-27B,
MiniMax-M2.5-w8a8, and Qwen3.5-397B-w8a8-mtp); we need to test them daily.
- vLLM version: v0.18.0
- vLLM main:
35141a7eed
Signed-off-by: guxin108 <1252896542@qq.com>
### What this PR does / why we need it?
Fix the Qwen3-Next nightly CI config in 0.18.0.
Backport of #7679.
Signed-off-by: Your Name <you@example.com>
Co-authored-by: Your Name <you@example.com>
### What this PR does / why we need it?
Add a nightly CI test for Qwen3-VL.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
### What this PR does / why we need it?
Fixed the bug of incorrect reshape usage.
For example, given `ori_tensor = [[1, 2, 3], [4, 5, 6]]`:
- after reshape: `[[1, 2], [3, 4], [5, 6]]`
- after permute: `[[1, 4], [2, 5], [3, 6]]`
Now we directly use `squeeze` instead, which is more intuitive.
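The difference is easy to reproduce in plain torch:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(t.reshape(3, 2))   # [[1, 2], [3, 4], [5, 6]] -- reorders elements row-major
print(t.permute(1, 0))   # [[1, 4], [2, 5], [3, 6]] -- a true transpose
```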
PR for main: #7887
### Does this PR introduce _any_ user-facing change?
The actual peak-to-average ratio has successfully decreased.
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
### What this PR does / why we need it?
Pick from: https://github.com/vllm-project/vllm-ascend/pull/7452
### Problem
Embedding models produce inconsistent outputs when prefix caching is
enabled vs disabled.
### Root Cause
The attention router condition was too broad:
- All `model_runner_type == "pooling"` → `_forward_encoder_attention()`
→ uses `npu_fusion_attention`
- **But `npu_fusion_attention` does NOT support prefix caching**
- Result: Numerical mismatch when KV cache is managed by prefix caching
### Solution
Refine the router condition to check causality:
**Before**:
```
if attn_metadata.model_runner_type == "pooling":
→ npu_fusion_attention (no prefix caching support)
```
**After**:
```
if attn_metadata.model_runner_type == "pooling" and not attn_metadata.causal:
→ npu_fusion_attention (for true encoders)
else:
→ npu_fused_infer_attention_score (prefix caching support)
```
### Changes Made
1. **Fixed router condition** (`vllm_ascend/attention/attention_v1.py`
L968)
- Added `and not attn_metadata.causal` check
- Effect: Non-causal embeddings now use correct operator
2. **Simplified encoder attention**
(`vllm_ascend/attention/attention_v1.py` L864-877)
- Removed redundant causal branch (encoders never use causal mask)
- Reduced from 34 lines to 14 lines
3. **Added test** (`tests/e2e/singlecard/pooling/test_embedding.py`)
- Validates embedding outputs with/without prefix caching are consistent
### Does this PR introduce _any_ user-facing change?
### Functional Changes
✅ **Yes** - Bug fix: Embedding models now produce consistent outputs
with prefix caching
### API Changes
❌ **No** - All public APIs unchanged
### Configuration Changes
❌ **No** - No new configuration required
### Backward Compatibility
✅ **Fully compatible** - Only fixes incorrect behavior
### How was this patch tested?
### New Test
Added `test_embed_models_using_prefix_caching_correctness()`:
- Tests: `Qwen3-Embedding-0.6B`
- Validates numerical consistency between runs with/without prefix
caching
- Uses long sequences to activate prefix caching
- Tolerance: 1e-2
- vLLM version: v0.18.0
Signed-off-by: underfituu <hzhucong@163.com>
### What this PR does / why we need it?
Fix the OOM (Out-of-Memory) error in the single-node-deepseek-v3-2-w8a8
nightly test of vllm-ascend:
- Reduced the value of HCCL_BUFFSIZE
- Lowered the gpu-memory-utilization
Optimize service-side performance:
Updated service-oriented configuration parameters (e.g., max-num-seqs,
cudagraph_capture_sizes, batch_size) to improve inference performance, so
that it is closer to the optimal performance of the current mainline.
Align performance baseline with main branch:
Updated the performance baseline according to the latest performance
data
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
The test has passed.
https://github.com/vllm-project/vllm-ascend/actions/runs/23734079080/job/69134387320?pr=7793
---------
Signed-off-by: wyh145 <1987244901@qq.com>
### What this PR does / why we need it?
Due to the current DCP solution of all-gathering the KV cache, performance
deteriorates significantly and the CI may get stuck. This PR temporarily
removes the performance and accuracy benchmarks for DeepSeek-V3.2-W8A8-cp
to prevent CI hangs until the optimization is complete.
Pick from: https://github.com/vllm-project/vllm-ascend/pull/7842
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Verified that the configuration file remains valid and that the CI no
longer attempts to run the problematic benchmarks.
pick-from: https://github.com/vllm-project/vllm-ascend/pull/7842
---------
Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
Previously, when putting to the KV pool, we found the first non-existing
key and put all the blocks starting from that index. However, if the
prefix cache blocks come from another request and some of them have been
evicted due to LRU, we end up putting blocks that still exist in the pool,
causing MooncakeStore to print unnecessary logs in the master service.
What this PR does:
- Look up all the keys and only put the ones that are missing (see the sketch below).
- Fix `lookup_scheduler` in `pool_worker` so it handles GQA correctly.
- Fix a few existing typos.
- Add unit tests (written by Codex).
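Conceptually, the put path changes from "put everything after the first missing key" to "put only the missing keys", roughly as below (illustrative helper; the real logic lives in the Mooncake connector code):

```python
def select_blocks_to_put(keys, blocks, exists):
    """Return only the (key, block) pairs whose key is missing from the KV pool.

    `exists` is any callable that checks key presence in the pool,
    e.g. a batched lookup against MooncakeStore.
    """
    return [(key, block) for key, block in zip(keys, blocks) if not exists(key)]
```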
<!--
- Please clarify what changes you are proposing. The purpose of this
section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster
reviews in your PR.
- Please clarify why the changes are needed. For instance, the use case
and bug description.
- Fixes #
-->
### Does this PR introduce _any_ user-facing change?
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
### How was this patch tested?
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
---------
Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>
### What this PR does / why we need it?
Implement `get_token_bin_counts_and_mask` and `apply_penalties` with
Triton-Ascend kernels. This significantly reduces the latency of the
sampling process when repetition/frequency/presence penalties are
enabled.
Cherry-pick from main PR #7569
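For reference, the penalty math these kernels accelerate is the standard sampler logic, roughly equivalent to this torch-native version (a reference sketch, not the Triton-Ascend kernels themselves):

```python
import torch


def apply_penalties_reference(logits: torch.Tensor, output_bin_counts: torch.Tensor,
                              output_mask: torch.Tensor, repetition: torch.Tensor,
                              frequency: torch.Tensor, presence: torch.Tensor) -> torch.Tensor:
    """Reference (non-Triton) semantics of repetition/frequency/presence penalties."""
    # Repetition penalty: divide positive logits and multiply negative logits
    # for tokens that have already been generated.
    rep = repetition.unsqueeze(1)
    penalized = torch.where(logits > 0, logits / rep, logits * rep)
    logits = torch.where(output_mask, penalized, logits)
    # Frequency penalty scales with how often a token was generated;
    # presence penalty applies once per generated token.
    logits = logits - frequency.unsqueeze(1) * output_bin_counts
    logits = logits - presence.unsqueeze(1) * output_mask.float()
    return logits
```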
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CI passed.
Signed-off-by: linfeng-yuan <1102311262@qq.com>
Co-authored-by: realliujiaxu <realliujiaxu@163.com>
## Summary
- replace the MiniMax usage accounting monkey patch with a runtime
wrapper implementation instead of source-text rewriting
- preserve MiniMax reasoning-token semantics when `</think>` is missing
by counting the emitted output as reasoning tokens
- add unit coverage for usage tracking helpers and MiniMax
reasoning-token counting
## Why
The previous implementation rewrote `OpenAIServingChat` by matching
exact source blocks. That was brittle against `vllm` source drift and
could crash during early plugin initialization with:
`RuntimeError: Failed to locate expected block while patching
OpenAIServingChat usage accounting.`
This change keeps the usage-accounting backport, but applies it by
wrapping the original stream/full generators and tracking output token
ids at runtime.
For MiniMax reasoning counting, a missing `</think>` should not be
treated as zero reasoning tokens. It can mean the whole output is still
in thinking mode, or that generation stopped before the closing token
was produced. In that case, the emitted output should still be counted
as reasoning.
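The wrapper approach can be pictured roughly like this (simplified; the real patch wraps both the streaming and full-response generators of `OpenAIServingChat`, and `usage_tracker` is an illustrative stand-in):

```python
def wrap_stream_generator(original_generator, usage_tracker):
    """Wrap a chat stream generator and track emitted output token ids at runtime."""
    async def wrapped(*args, **kwargs):
        async for chunk in original_generator(*args, **kwargs):
            # Record usage from the emitted chunk instead of rewriting
            # OpenAIServingChat source text.
            usage_tracker.record(chunk)
            yield chunk
    return wrapped
```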
## Validation
- `pytest -q
tests/ut/patch/platform/test_patch_minimax_usage_accounting.py`
- `vllm serve --help`
Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Co-authored-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
### What this PR does / why we need it?
This PR backports the changes from #7673 ([Bugfix] support FlashComm1 &
DCP for Qwen) to the releases/v0.18.0 branch.
--------
Signed-off-by: Yang Yuxi <907276627@qq.com>
### What this PR does / why we need it?
cherry-pick from https://github.com/vllm-project/vllm-ascend/pull/7736
**Error information**
When the quantized weights in CompressedTensors format of the kimi-k2
model are used, the following error is reported:
`AttributeError: 'AscendCompressedTensorsConfig' object has no attribute
'enabling_fa_quant'`
**Error Cause**
Currently, FA3 quantization supports only weights quantized with
modelslim. The added methods are not defined in
`AscendCompressedTensorsConfig`.
**Solution**
Before invoking related methods, check whether the FA3 feature is
enabled.
Additionally, the unused `get_scaled_act_names` method and its
corresponding unit test have been removed.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing unit tests were updated by removing a deprecated test case, and
the refactored logic was reviewed for correctness.
Signed-off-by: Wang Kunpeng <1289706727@qq.com>
### What this PR does / why we need it?
This rebases the GLM47 tool-call parser fix onto `releases/v0.18.0`
after the MiniMax usage-accounting patch merged upstream on March 27,
2026.
It fixes OpenAI chat tool-call streaming for GLM47 by:
- draining terminal parser chunks that contain both the final argument
text and the closing `</tool_call>` suffix
- computing finish backfill from the tool argument bytes actually
emitted to the client, instead of trusting parser-internal buffered
state
- adding focused regression tests for finish backfill and terminal chunk
handling
### Does this PR introduce _any_ user-facing change?
Yes. GLM47 OpenAI-compatible streaming tool-call responses now emit
correct final chunks and argument payloads on `releases/v0.18.0`.
### How was this patch tested?
- `pytest -q tests/ut/patch/platform/test_patch_glm_tool_call_parser.py
tests/ut/patch/platform/test_patch_minimax_usage_accounting.py`
- `python -m pre_commit run --files
vllm_ascend/patch/platform/patch_glm_tool_call_parser.py
tests/ut/patch/platform/test_patch_glm_tool_call_parser.py
vllm_ascend/patch/platform/__init__.py vllm_ascend/patch/__init__.py`
---------
Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
Co-authored-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
### What this PR does / why we need it?
Fixed the issue where DCP overlaps with the MTP scenario in ds3.2.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
cherry-pick from: https://github.com/vllm-project/vllm-ascend/pull/7617
Signed-off-by: weiguihua2 <weiguihua2@huawei.com>