Add a check on `quant_action.choices` to avoid TypeError (#1046)
### What this PR does / why we need it?
When I run vllm-ascend, I get this error message:
```bash
Traceback (most recent call last):
File "/home/sss/software/miniconda3/envs/vllm-v1/bin/vllm", line 8, in <module>
sys.exit(main())
File "/home/sss/github/vllm-project/vllm/vllm/entrypoints/cli/main.py", line 50, in main
cmd.subparser_init(subparsers).set_defaults(
File "/home/sss/github/vllm-project/vllm/vllm/entrypoints/cli/serve.py", line 101, in subparser_init
serve_parser = make_arg_parser(serve_parser)
File "/home/sss/github/vllm-project/vllm/vllm/entrypoints/openai/cli_args.py", line 254, in make_arg_parser
parser = AsyncEngineArgs.add_cli_args(parser)
File "/home/sss/github/vllm-project/vllm/vllm/engine/arg_utils.py", line 1582, in add_cli_args
current_platform.pre_register_and_update(parser)
File "/home/sss/github/vllm-project/vllm-ascend/vllm_ascend/platform.py", line 80, in pre_register_and_update
if ASCEND_QUATIZATION_METHOD not in quant_action.choices:
TypeError: argument of type 'NoneType' is not iterable
[ERROR] 2025-06-03-02:53:42 (PID:6005, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception
```
This is because the `choices` attribute of `quant_action` can be `None`, and we did not check for that case.
```python
# quant_action
_StoreAction(option_strings=['--quantization', '-q'], dest='quantization', nargs=None, const=None, default=None, type=<class 'str'>, choices=None, required=False, help='Method used to quantize the weights. If `None`, we first check the\n`quantization_config` attribute in the model config file. If that is\n`None`, we assume the model weights are not quantized and use `dtype` to\ndetermine the data type of the weights.', metavar=None)
```
Thus, I have added a check on `choices` to handle the case where `choices` is `None`.
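A minimal sketch of the added guard (names like `quant_action` and `ASCEND_QUATIZATION_METHOD` are taken from the traceback above; the branch body is abstracted, so this is not the exact diff):
```python
# Sketch of the guard in vllm_ascend/platform.py (pre_register_and_update).
# argparse leaves `choices` as None when --quantization accepts any string,
# so test for None before the membership check instead of iterating over None.
if quant_action.choices is not None and \
        ASCEND_QUATIZATION_METHOD not in quant_action.choices:
    ...  # existing handling that registers the Ascend quantization method
```
When `choices` is `None`, argparse already accepts any value for `--quantization`, so skipping the update still lets `ascend` be passed through.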
### Does this PR introduce _any_ user-facing change?
Yes. The vLLM server with Ascend quantization now works.
### How was this patch tested?
Tested with the `vllm serve --quantization ascend` command.
Related: https://github.com/vllm-project/vllm/issues/19004
Signed-off-by: shen-shanshan <467638484@qq.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
# vLLM Ascend Plugin
## Latest News 🔥
- [2025/03] We hosted the vLLM Beijing Meetup with the vLLM team! Please find the meetup slides here.
- [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
## Overview
vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.
It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples Ascend NPU integration from vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
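As a hedged illustration of that "seamless" claim (the model name below is only a placeholder, and installation per the sections that follow is assumed), the standard vLLM offline inference API is used unchanged:
```python
from vllm import LLM, SamplingParams

# With vllm-ascend installed, vLLM's hardware-plugin mechanism selects the
# Ascend backend automatically; the user-facing API does not change.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model name
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(temperature=0.8, max_tokens=32))
print(outputs[0].outputs[0].text)
```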
## Prerequisites
- Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
- OS: Linux
- Software:
- Python >= 3.9, < 3.12
- CANN >= 8.1.RC1
- PyTorch >= 2.5.1, torch-npu >= 2.5.1
- vLLM (the same version as vllm-ascend)
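A quick way to sanity-check most of these versions in an existing environment (a sketch; it assumes `torch`, `torch_npu`, and `vllm` are already installed and importable, and it does not cover the CANN version):
```python
import platform

import torch
import torch_npu  # Ascend adapter for PyTorch, installed alongside torch
import vllm

# Print the versions constrained by the prerequisites above.
print("Python   :", platform.python_version())
print("PyTorch  :", torch.__version__)
print("torch-npu:", torch_npu.__version__)
print("vLLM     :", vllm.__version__)
```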
## Getting Started
Please refer to QuickStart and Installation for more details.
## Contributing
See CONTRIBUTING for more details; it is a step-by-step guide to help you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please let us know if you encounter a bug by filing an issue
- Please use the Users Forum for usage questions and help.
## Branch
vllm-ascend has a main branch and dev branches.
- `main`: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
- `vX.Y.Z-dev`: a development branch, created for selected new releases of vLLM. For example, `v0.7.3-dev` is the dev branch for vLLM `v0.7.3`.
Below are the maintained branches:
| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM 0.9.x branch |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM 0.7.3 version |
Please refer to Versioning policy for more details.
## Weekly Meeting
- vLLM Ascend Weekly Meeting: https://tinyurl.com/vllm-ascend-meeting
- Wednesday, 15:00 - 16:00 (UTC+8, Convert to your timezone)
## License
Apache License 2.0, as found in the LICENSE file.
