Bump v0.9.1rc1 release (#1349)
### What this PR does / why we need it?
Bump v0.9.1rc1 release

Closes: https://github.com/vllm-project/vllm-ascend/pull/1341
Closes: https://github.com/vllm-project/vllm-ascend/pull/1334

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI passed

---------

Signed-off-by: Shanshan Shen <87969357+shen-shanshan@users.noreply.github.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: shen-shanshan <467638484@qq.com>
@@ -6,6 +6,8 @@
 single_npu
 single_npu_multimodal
 multi_npu
+multi_npu_moge
 multi_npu_quantization
+single_node_300i
 multi_node
 :::

docs/source/tutorials/multi_npu_moge.md (new file, 117 lines)
@@ -0,0 +1,117 @@
# Multi-NPU (Pangu Pro MoE 72B)

## Run vllm-ascend on Multi-NPU

Run docker container:

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
    --name vllm-ascend \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /root/.cache:/root/.cache \
    -p 8000:8000 \
    -it $IMAGE bash
```

Setup environment variables:

```bash
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference on Multi-NPU

Run the following script to start the vLLM server on Multi-NPU:

```bash
vllm serve /path/to/pangu-pro-moe-model \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --enforce-eager
```
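
Loading a 72B MoE checkpoint can take a while, so it can help to wait until the server is ready before sending requests. Below is a minimal readiness-polling sketch; it assumes the `requests` package is installed and uses the `/health` endpoint of vLLM's OpenAI-compatible server:

```python
import time

import requests

# Poll the server until it reports healthy (or give up after about 10 minutes).
for _ in range(120):
    try:
        if requests.get("http://localhost:8000/health", timeout=5).status_code == 200:
            print("Server is ready")
            break
    except requests.ConnectionError:
        pass
    time.sleep(5)
else:
    print("Server did not become ready in time")
```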

Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "/path/to/pangu-pro-moe-model",
        "prompt": "The future of AI is",
        "max_tokens": 128,
        "temperature": 0
    }'
```

If you run this successfully, you can see the info shown below:

```json
{"id":"cmpl-013558085d774d66bf30c704decb762a","object":"text_completion","created":1750472788,"model":"/path/to/pangu-pro-moe-model","choices":[{"index":0,"text":" not just about creating smarter machines but about fostering collaboration between humans and AI systems. This partnership can lead to more efficient problem-solving, innovative solutions, and a better quality of life for people around the globe.\n\nHowever, achieving this future requires addressing several challenges. Ethical considerations, such as bias in AI algorithms and privacy concerns, must be prioritized. Additionally, ensuring that AI technologies are accessible to all and do not exacerbate existing inequalities is crucial.\n\nIn conclusion, AI stands at the forefront of technological advancement, with vast potential to transform industries and everyday life. By embracing its opportunities while responsibly managing its risks, we can harn","logprobs":null,"finish_reason":"length","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":6,"total_tokens":134,"completion_tokens":128,"prompt_tokens_details":null},"kv_transfer_params":null}
```
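
Since `vllm serve` exposes an OpenAI-compatible API, you can also query the server from Python instead of curl. The snippet below is a minimal sketch that assumes the `openai` client package is installed; the model path mirrors the placeholder used above:

```python
from openai import OpenAI

# The server started above listens on port 8000; no real API key is required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="/path/to/pangu-pro-moe-model",
    prompt="The future of AI is",
    max_tokens=128,
    temperature=0,
)
print(completion.choices[0].text)
```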

### Offline Inference on Multi-NPU

Run the following script to execute offline inference on multi-NPU:

```python
import gc

import torch

from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)


def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()


if __name__ == "__main__":

    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

    llm = LLM(model="/path/to/pangu-pro-moe-model",
              tensor_parallel_size=4,
              distributed_executor_backend="mp",
              max_model_len=1024,
              trust_remote_code=True,
              enforce_eager=True)

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```

If you run this script successfully, you can see the info shown below:

```bash
Prompt: 'Hello, my name is', Generated text: ' Daniel and I am an 8th grade student at York Middle School. I'
Prompt: 'The future of AI is', Generated text: ' following you. As the technology advances, a new report from the Institute for the'
```

docs/source/tutorials/single_node_300i.md (new file, 304 lines)
@@ -0,0 +1,304 @@
# Single Node (Atlas 300I series)

## Run vLLM on Atlas 300I series

Run docker container:

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|-310p
docker run --rm \
    --name vllm-ascend \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci4 \
    --device /dev/davinci5 \
    --device /dev/davinci6 \
    --device /dev/davinci7 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /root/.cache:/root/.cache \
    -p 8000:8000 \
    -it $IMAGE bash
```

Setup environment variables:

```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference on NPU

Run the following script to start the vLLM server on NPU (Qwen3-0.6B: 1 card, Qwen2.5-7B-Instruct: 2 cards, Pangu-Pro-MoE-72B: 8 cards):

:::::{tab-set}
::::{tab-item} Qwen3-0.6B

```{code-block} bash
:substitutions:
export VLLM_USE_V1=1
export MODEL="Qwen/Qwen3-0.6B"
python -m vllm.entrypoints.api_server \
    --model $MODEL \
    --tensor-parallel-size 1 \
    --max-num-batched-tokens 2048 \
    --gpu-memory-utilization 0.5 \
    --max-num-seqs 4 \
    --enforce-eager \
    --trust-remote-code \
    --max-model-len 1024 \
    --disable-custom-all-reduce \
    --dtype float16 \
    --port 8000 \
    --compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}'
```
::::

::::{tab-item} Qwen2.5-7B-Instruct

```{code-block} bash
:substitutions:
export VLLM_USE_V1=1
export MODEL="Qwen/Qwen2.5-7B-Instruct"
python -m vllm.entrypoints.api_server \
    --model $MODEL \
    --tensor-parallel-size 2 \
    --max-num-batched-tokens 2048 \
    --gpu-memory-utilization 0.5 \
    --max-num-seqs 4 \
    --enforce-eager \
    --trust-remote-code \
    --max-model-len 1024 \
    --disable-custom-all-reduce \
    --dtype float16 \
    --port 8000 \
    --compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}'
```
::::

::::{tab-item} Pangu-Pro-MoE-72B

```{code-block} bash
:substitutions:
# Update the MODEL
export MODEL="/path/to/pangu-pro-moe-model"
export VLLM_USE_V1=1
python -m vllm.entrypoints.api_server \
    --model $MODEL \
    --tensor-parallel-size 8 \
    --max-num-batched-tokens 2048 \
    --gpu-memory-utilization 0.5 \
    --max-num-seqs 4 \
    --enforce-eager \
    --trust-remote-code \
    --max-model-len 1024 \
    --disable-custom-all-reduce \
    --enable-expert-parallel \
    --dtype float16 \
    --port 8000 \
    --compilation-config '{"custom_ops":["+rms_norm", "+rotary_embedding"]}' \
    --additional-config '{"ascend_scheduler_config": {"enabled": true, "enable_chunked_prefill": false, "chunked_prefill_enabled": false}}'
```
::::
:::::

Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{
        "prompt": "The future of AI is ?",
        "max_tokens": 20,
        "temperature": 0
    }'
```

If you run this script successfully, you can see the info shown below:

```bash
{"text":["The future of AI is ? \nA. 充满希望的 \nB. 不确定的 \nC. 危险的 \nD. 无法预测的 \n答案:A \n解析:"]}
```
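
You can issue the same request from Python if you prefer. The sketch below assumes the `requests` package is installed and posts the payload from the curl example to the `/generate` endpoint:

```python
import requests

# Same payload as the curl example above, sent to the api_server /generate endpoint.
payload = {
    "prompt": "The future of AI is ?",
    "max_tokens": 20,
    "temperature": 0,
}
response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json()["text"])
```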

### Offline Inference

Run the following script to execute offline inference on NPU:

:::::{tab-set}
::::{tab-item} Qwen3-0.6B

```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

os.environ["VLLM_USE_V1"] = "1"

def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
# Create an LLM.
llm = LLM(
    model="Qwen/Qwen3-0.6B",
    max_model_len=4096,
    max_num_seqs=4,
    trust_remote_code=True,
    tensor_parallel_size=1,
    enforce_eager=True,  # For 300I series, only eager mode is supported.
    dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
    disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
    compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires these custom ops
)
# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

del llm
clean_up()
```

::::

::::{tab-item} Qwen2.5-7B-Instruct

```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

os.environ["VLLM_USE_V1"] = "1"

def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
# Create an LLM.
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    max_model_len=4096,
    max_num_seqs=4,
    trust_remote_code=True,
    tensor_parallel_size=2,
    enforce_eager=True,  # For 300I series, only eager mode is supported.
    dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
    disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
    compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires these custom ops
)
# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

del llm
clean_up()
```

::::

::::{tab-item} Pangu-Pro-MoE-72B

```{code-block} python
:substitutions:
import gc
import os

import torch
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import (destroy_distributed_environment,
                                             destroy_model_parallel)

def clean_up():
    destroy_model_parallel()
    destroy_distributed_environment()
    gc.collect()
    torch.npu.empty_cache()

os.environ["VLLM_USE_V1"] = "1"
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

if __name__ == "__main__":
    # Update the model_path
    model_path = "/path/to/pangu-pro-moe-model"
    prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(min_tokens=8, max_tokens=8, temperature=0.0)
    llm = LLM(model=model_path,
              tensor_parallel_size=8,
              max_num_batched_tokens=2048,
              gpu_memory_utilization=0.5,
              max_num_seqs=4,
              enforce_eager=True,  # For 300I series, only eager mode is supported.
              trust_remote_code=True,
              max_model_len=1024,
              disable_custom_all_reduce=True,  # IMPORTANT: required on 300I series
              enable_expert_parallel=True,
              dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series
              compilation_config={"custom_ops": ["+rms_norm", "+rotary_embedding"]},  # IMPORTANT: 300I series requires these custom ops
              additional_config={
                  'ascend_scheduler_config': {
                      'enabled': True,
                      'enable_chunked_prefill': False,
                      'chunked_prefill_enabled': False
                  }
              })
    # Generate texts from the prompts.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

    del llm
    clean_up()
```

::::
:::::

If you run this script successfully, you can see the info shown below:

```bash
Prompt: 'Hello, my name is', Generated text: " Lina. I'm a 22-year-old student from China. I'm interested in studying in the US. I'm looking for a job in the US. I want to know if there are any opportunities in the US for me to work. I'm also interested in the culture and lifestyle in the US. I want to know if there are any opportunities for me to work in the US. I'm also interested in the culture and lifestyle in the US. I'm interested in the culture"
Prompt: 'The president of the United States is', Generated text: ' the same as the president of the United Nations. This is because the president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president of the United States is the same as the president of the United Nations. The president'
Prompt: 'The capital of France is', Generated text: ' Paris. The capital of Italy is Rome. The capital of Spain is Madrid. The capital of China is Beijing. The capital of Japan is Tokyo. The capital of India is New Delhi. The capital of Brazil is Brasilia. The capital of Egypt is Cairo. The capital of South Africa is Cape Town. The capital of Nigeria is Abuja. The capital of Lebanon is Beirut. The capital of Morocco is Rabat. The capital of Indonesia is Jakarta. The capital of Peru is Lima. The'
Prompt: 'The future of AI is', Generated text: " not just about the technology itself, but about how we use it to solve real-world problems. As AI continues to evolve, it's important to consider the ethical implications of its use. AI has the potential to bring about significant changes in society, but it also has the power to create new challenges. Therefore, it's crucial to develop a comprehensive approach to AI that takes into account both the benefits and the risks associated with its use. This includes addressing issues such as bias, privacy, and accountability."
```