diff --git a/docs/source/faqs.md b/docs/source/faqs.md
index 77260c6..9af8ce8 100644
--- a/docs/source/faqs.md
+++ b/docs/source/faqs.md
@@ -28,34 +28,13 @@ You can get our containers at `Quay.io`, e.g., [vllm-ascend](https://quay
-If you are in China, you can use `daocloud` to accelerate your downloading:
+If you are in China, you can use the `daocloud` mirror to speed up the download:
-1) Open `daemon.json`:
-
```bash
-vi /etc/docker/daemon.json
+docker pull m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2
```
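+
+The image pulled through the mirror keeps the `m.daocloud.io/` prefix. If your tooling expects the upstream name, you can retag the image locally (a sketch; assumes Docker is installed and the pull above succeeded):
+
+```bash
+# Retag the mirrored image to the original quay.io name
+docker tag m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2 quay.io/ascend/vllm-ascend:v0.7.3rc2
+```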
-2) Add `https://docker.m.daocloud.io` to `registry-mirrors`:
-
-```json
-{
- "registry-mirrors": [
- "https://docker.m.daocloud.io"
- ]
-}
-```
-
-3) Restart your docker service:
-
-```bash
-sudo systemctl daemon-reload
-sudo systemctl restart docker
-```
-
-After configuration, you can download our container from `m.daocloud.io/quay.io/ascend/vllm-ascend:v0.7.3rc2`.
-
-### 3. What models does vllm-ascend supports?
+### 3. What models does vllm-ascend support?
-Currently, we have already fully tested and supported `Qwen` / `Deepseek` (V0 only) / `Llama` models, other models we have tested are shown [here](https://vllm-ascend.readthedocs.io/en/latest/user_guide/supported_models.html). Plus, according to users' feedback, `gemma3` and `glm4` are not supported yet. Besides, more models need test.
+Find the list of supported models and more details [here](https://vllm-ascend.readthedocs.io/en/latest/user_guide/supported_models.html).
### 4. How to get in touch with our community?