[Doc] Update FAQ doc (#561)

### What this PR does / why we need it?
Update the FAQ doc to make the `docker pull` instructions clearer


Signed-off-by: shen-shanshan <467638484@qq.com>
Shanshan Shen
2025-04-18 13:13:13 +08:00
committed by GitHub
parent 84563fc65d
commit 7eeff60715


@@ -28,34 +28,13 @@ You can get our containers at `Quay.io`, e.g., [<u>vllm-ascend</u>](https://quay
If you are in China, you can use the `daocloud` mirror to speed up the download:
1) Open `daemon.json`:
```bash
vi /etc/docker/daemon.json
```
2) Add `https://docker.m.daocloud.io` to `registry-mirrors`:
```json
{
"registry-mirrors": [
"https://docker.m.daocloud.io"
]
}
```
3) Restart your docker service:
```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```
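After the restart, you can optionally confirm that Docker picked up the mirror. This is a quick sanity check, assuming a standard Docker Engine install; the `--format` template below queries the daemon's registry configuration:

```shell
# Should print the configured mirror, e.g. [https://docker.m.daocloud.io/]
docker info --format '{{.RegistryConfig.Mirrors}}'
```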
After configuration, you can download our container from `m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2`.
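For example, to pull the image through the mirror prefix (the `v0.7.3rc2` tag is shown; substitute the release you need):

```shell
# Pull vllm-ascend via the daocloud mirror of Quay.io
docker pull m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2
```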
### 3. What models does vllm-ascend support?
Currently, we have fully tested and support the `Qwen` / `Deepseek` (V0 only) / `Llama` model families; other models we have tested are listed [<u>here</u>](https://vllm-ascend.readthedocs.io/en/latest/user_guide/supported_models.html). According to user feedback, `gemma3` and `glm4` are not supported yet, and more models still need testing.
### 4. How to get in touch with our community?