#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# Adapted from vllm/model_executor/models/deepseek_mtp.py
# Copyright 2023 The vLLM team.
#
# This file is a part of the vllm-ascend project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional

import torch
import torch.nn as nn
from transformers import PretrainedConfig
from vllm.attention.backends.abstract import AttentionMetadata
from vllm.config import CacheConfig, ModelConfig, VllmConfig
from vllm.model_executor.layers.layernorm import RMSNorm
from vllm.model_executor.layers.logits_processor import LogitsProcessor
from vllm.model_executor.layers.quantization import QuantizationConfig
from vllm.model_executor.layers.sampler import get_sampler
from vllm.model_executor.layers.vocab_parallel_embedding import (
    ParallelLMHead, VocabParallelEmbedding)
from vllm.model_executor.models.deepseek_mtp import (
    DeepSeekMTP, DeepSeekMultiTokenPredictor, DeepSeekMultiTokenPredictorLayer,
    SharedHead)
from vllm.model_executor.models.utils import maybe_prefix
from vllm.model_executor.sampling_metadata import SamplingMetadata
from vllm.sequence import IntermediateTensors

from .deepseek_v2 import CustomDeepseekV2DecoderLayer

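# Rebuilds SharedHead so the MTP LM head receives the quant_config and an
# explicit weight prefix, allowing quantized checkpoints to map their
# shared_head.head weights onto this module.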
class CustomDeepSeekShareHead(SharedHead):

    def __init__(self,
                 config: PretrainedConfig,
                 quant_config: Optional[QuantizationConfig] = None,
                 prefix: str = "") -> None:
        nn.Module.__init__(self)
        self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.head = ParallelLMHead(config.vocab_size,
                                   config.hidden_size,
                                   quant_config=quant_config,
                                   prefix=maybe_prefix(prefix, "head"))


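# Mirrors DeepSeekMultiTokenPredictorLayer, but builds the decoder block from
# CustomDeepseekV2DecoderLayer so the Ascend graph-mode (torchair) changes in
# the custom DeepseekV2 modeling also apply to the MTP block.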
class CustomDeepSeekMultiTokenPredictorLayer(DeepSeekMultiTokenPredictorLayer):

    def __init__(
        self,
        config: PretrainedConfig,
        prefix: str,
        model_config: ModelConfig,
        cache_config: Optional[CacheConfig] = None,
        quant_config: Optional[QuantizationConfig] = None,
    ) -> None:
        nn.Module.__init__(self)

        self.enorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.hnorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.eh_proj = nn.Linear(config.hidden_size * 2,
                                 config.hidden_size,
                                 bias=False)
        self.shared_head = CustomDeepSeekShareHead(config=config,
                                                   quant_config=quant_config,
                                                   prefix=maybe_prefix(
                                                       prefix, "shared_head"))
        self.mtp_block = CustomDeepseekV2DecoderLayer(config, prefix,
                                                      model_config,
                                                      cache_config,
                                                      quant_config)

    def forward(
        self,
        input_ids: torch.Tensor,
        positions: torch.Tensor,
        kv_cache: torch.Tensor,
        attn_metadata: AttentionMetadata,
        previous_hidden_states: torch.Tensor,
        inputs_embeds: Optional[torch.Tensor] = None,
        spec_step_index: int = 0,
    ) -> torch.Tensor:
        assert inputs_embeds is not None
        # masking inputs at position 0, as not needed by MTP
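        # (the first position has no valid shifted input token for the MTP
        # step, so its embedding is zeroed rather than carrying an undefined
        # value into eh_proj)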
        inputs_embeds = torch.where((positions == 0).unsqueeze(-1),
                                    torch.zeros_like(inputs_embeds),
                                    inputs_embeds)
        inputs_embeds = self.enorm(inputs_embeds)
        previous_hidden_states = self.hnorm(previous_hidden_states)

        hidden_states = self.eh_proj(
            torch.cat([inputs_embeds, previous_hidden_states], dim=-1))

        hidden_states, residual = self.mtp_block(positions=positions,
                                                 hidden_states=hidden_states,
                                                 kv_cache=kv_cache,
                                                 attn_metadata=attn_metadata,
                                                 residual=None)
        hidden_states = residual + hidden_states
        return hidden_states


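# The predictor stores one MTP layer per speculative step, keyed by absolute
# layer index so checkpoint weights map directly; with num_speculative_tokens
# (k) > 1, step i is served by layer (i % num_nextn_predict_layers).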
class CustomDeepSeekMultiTokenPredictor(DeepSeekMultiTokenPredictor):

    def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
        nn.Module.__init__(self)
        config = vllm_config.model_config.hf_config
        self.mtp_start_layer_idx = config.num_hidden_layers
        self.num_mtp_layers = config.num_nextn_predict_layers
        # to map the exact layer index from weights
        self.layers = torch.nn.ModuleDict({
            str(idx):
            CustomDeepSeekMultiTokenPredictorLayer(
                config,
                f"{prefix}.layers.{idx}",
                model_config=vllm_config.model_config,
                cache_config=vllm_config.cache_config,
                quant_config=vllm_config.quant_config,
            )
            for idx in range(self.mtp_start_layer_idx,
                             self.mtp_start_layer_idx + self.num_mtp_layers)
        })
        self.embed_tokens = VocabParallelEmbedding(
            config.vocab_size,
            config.hidden_size,
        )

        # Note: torch._dynamo cannot trace the builtin str() call needed to
        # index the ModuleDict (torch._dynamo.exc.Unsupported: builtin: str),
        # so precompute a plain list for graph-mode lookups.
        self.layers_list = [
            self.layers[str(idx)]
            for idx in range(self.mtp_start_layer_idx,
                             self.mtp_start_layer_idx + self.num_mtp_layers)
        ]
        self.logits_processor = LogitsProcessor(config.vocab_size)

    def forward(
        self,
        input_ids: torch.Tensor,
        positions: torch.Tensor,
        kv_caches: torch.Tensor,
        attn_metadata: AttentionMetadata,
        previous_hidden_states: torch.Tensor,
        inputs_embeds: Optional[torch.Tensor] = None,
        spec_step_idx: int = 0,
    ) -> torch.Tensor:
        if inputs_embeds is None:
            inputs_embeds = self.embed_tokens(input_ids)
        current_step_idx = (spec_step_idx % self.num_mtp_layers)
        step_kv_cache = kv_caches[
            current_step_idx] if kv_caches is not None else None
        return self.layers_list[current_step_idx](
            input_ids,
            positions,
            step_kv_cache,
            attn_metadata,
            previous_hidden_states,
            inputs_embeds,
            current_step_idx,
        )

    def compute_logits(
        self,
        hidden_states: torch.Tensor,
        sampling_metadata: SamplingMetadata,
        spec_step_idx: int = 0,
    ) -> torch.Tensor:
        current_step_idx = (spec_step_idx % self.num_mtp_layers)
        mtp_layer = self.layers_list[current_step_idx]
        logits = self.logits_processor(mtp_layer.shared_head.head,
                                       mtp_layer.shared_head(hidden_states),
                                       sampling_metadata)
        return logits


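# CustomDeepSeekMTP exposes the predictor through vLLM's DeepSeekMTP
# interface. packed_modules_mapping follows the usual vLLM convention of
# mapping each fused parameter to the checkpoint modules packed into it, so
# the (quantized) weight loader can resolve them.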
class CustomDeepSeekMTP(DeepSeekMTP):
    # NOTE 1. For quantized DeepSeek weights, the MTP layer itself is not
    # quantized on the NPU.
    # NOTE 2. The quantization description file generated by the current
    # msmodelslim tool has no MTP layer info; please add it manually and set
    # the value to FLOAT.
    packed_modules_mapping = {
        "gate_up_proj": ["gate_proj", "up_proj"],
        "experts":
        ["experts.0.gate_proj", "experts.0.up_proj", "experts.0.down_proj"]
    }

    def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
        nn.Module.__init__(self)
        self.config = vllm_config.model_config.hf_config
        self.model = CustomDeepSeekMultiTokenPredictor(vllm_config=vllm_config,
                                                       prefix=maybe_prefix(
                                                           prefix, "model"))

        self.sampler = get_sampler()

    def forward(
        self,
        input_ids: torch.Tensor,
        positions: torch.Tensor,
        kv_caches: Optional[List[torch.Tensor]] = None,
        attn_metadata: Optional[AttentionMetadata] = None,
        previous_hidden_states: Optional[torch.Tensor] = None,
        intermediate_tensors: Optional[IntermediateTensors] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        spec_step_idx: int = 0,
    ) -> torch.Tensor:
        hidden_states = self.model(input_ids, positions, kv_caches,
                                   attn_metadata, previous_hidden_states,
                                   inputs_embeds, spec_step_idx)
        return hidden_states
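

# The sketch below is adapted from the usage notes of the PR that added graph
# mode support for MTP (#636). It is an illustrative smoke test rather than
# part of the model definition; the checkpoint is the random-weight test model
# referenced in that PR, so substitute real DeepSeek V3/R1 float or quantized
# weights. num_speculative_tokens may be set above 1 when there is spare
# compute, since k > 1 is supported.
if __name__ == "__main__":
    from vllm import LLM, SamplingParams

    # For torchair graph mode, set enforce_eager=False and pass
    # additional_config={"enable_graph_mode": True} instead.
    llm = LLM(
        model="wemaster/deepseek_mtp_main_random_bf16",
        tensor_parallel_size=2,
        speculative_config={
            "num_speculative_tokens": 1,
        },
        enforce_eager=True,
        trust_remote_code=True,
        disable_log_stats=False,
        gpu_memory_utilization=0.8,
        max_model_len=64,
    )
    outputs = llm.generate(["Hello, my name is"],
                           SamplingParams(max_tokens=16, temperature=0))
    for output in outputs:
        print(f"Prompt: {output.prompt!r}, "
              f"Generated: {output.outputs[0].text!r}")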