# FAQs

## Version Specific FAQs

- [[v0.7.1rc1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/19)
- [[v0.7.3rc1] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/267)
- [[v0.7.3rc2] FAQ & Feedback](https://github.com/vllm-project/vllm-ascend/issues/418)

## General FAQs

### 1. What devices are currently supported?

Currently, **ONLY Atlas A2 series** devices (Ascend-cann-kernels-910b) are supported:

- Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
- Atlas 800I A2 Inference series (Atlas 800I A2)

The series below are NOT supported yet:

- Atlas 300I Duo / Atlas 300I Pro (Ascend-cann-kernels-310p): support is planned for 2025 Q2
- Atlas 200I A2 (Ascend-cann-kernels-310b): not planned yet
- Ascend 910, Ascend 910 Pro B (Ascend-cann-kernels-910): not planned yet

From a technical point of view, vllm-ascend can support a device as long as torch-npu supports it; otherwise, we would have to implement the required custom ops ourselves. You are welcome to join us and improve the support together.

### 2. How to get our docker containers?

You can get our containers from `Quay.io`, e.g., [vllm-ascend](https://quay.io/repository/ascend/vllm-ascend?tab=tags) and [cann](https://quay.io/repository/ascend/cann?tab=tags).

If you are in China, you can use the `daocloud` mirror to accelerate your download:

1) Open `daemon.json`:

```bash
vi /etc/docker/daemon.json
```

2) Add `https://docker.m.daocloud.io` to `registry-mirrors`:

```json
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io"
  ]
}
```

3) Restart your docker service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```

After this configuration, you can pull our container from `m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2` (a minimal pull/run sketch is included at the end of this FAQ).

### 3. What models does vllm-ascend support?

We have fully tested and support the `Qwen` / `Deepseek` (V0 only) / `Llama` model families; other models we have tested are listed [here](https://vllm-ascend.readthedocs.io/en/latest/user_guide/supported_models.html). According to users' feedback, `gemma3` and `glm4` are not supported yet. More models still need testing.

### 4. How to get in touch with our community?

There are many channels through which you can communicate with our community developers and users:

- Submit a GitHub [issue](https://github.com/vllm-project/vllm-ascend/issues?page=1).
- Join our [weekly meeting](https://docs.google.com/document/d/1hCSzRTMZhIB8vRq1_qOOjx4c9uYUxvdQvDsMV2JcSrw/edit?tab=t.0#heading=h.911qu8j8h35z) and share your ideas.
- Join our [WeChat](https://github.com/vllm-project/vllm-ascend/issues/227) group and ask your questions.
- Join the Ascend channel in the [vLLM forums](https://discuss.vllm.ai/c/hardware-support/vllm-ascend-support/6) and publish your topics.

### 5. What features does vllm-ascend V1 support?

Find more details [here](https://github.com/vllm-project/vllm-ascend/issues/414).
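
---

As a follow-up to FAQ 2, here is a minimal sketch of pulling and starting the image through the DaoCloud mirror. The tag `v0.7.3rc2`, the NPU device paths, and the driver mount locations are assumptions based on a typical Ascend host; adjust them to match your environment:

```bash
# Pull the vllm-ascend image via the DaoCloud mirror of Quay.io
docker pull m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2

# Start an interactive container. The device and driver paths below are
# assumptions for a typical Ascend driver installation; in particular,
# choose which /dev/davinciX cards you want to expose to the container.
docker run -it --rm \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2 bash
```

Inside the container, `npu-smi info` should list the mounted NPU before you try to serve a model.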