From b71f193cb0a4758a4db2f06a8261fa86d1d9f0f1 Mon Sep 17 00:00:00 2001
From: Mengqing Cao
Date: Thu, 17 Apr 2025 19:32:20 +0800
Subject: [PATCH] [Model][Doc] Update model support list (#552)

Update model support list

cc @Yikun plz help review, thanks!

Signed-off-by: MengqingCao
---
 docs/source/user_guide/supported_models.md | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/docs/source/user_guide/supported_models.md b/docs/source/user_guide/supported_models.md
index fb7a62c..10ef6f0 100644
--- a/docs/source/user_guide/supported_models.md
+++ b/docs/source/user_guide/supported_models.md
@@ -12,19 +12,23 @@
 | QwQ-32B | ✅ ||
 | MiniCPM |✅| |
 | LLama3.1/3.2 | ✅ ||
-| Mistral | | Need test |
-| DeepSeek v2.5 | |Need test |
-| Gemma-2 | |Need test|
-| Baichuan | |Need test|
 | Internlm | ✅ ||
-| ChatGLM | ❌ | Plan in Q2|
+| InternVL2 | ✅ ||
 | InternVL2.5 | ✅ ||
-| GLM-4v | |Need test|
 | Molomo | ✅ ||
-| LLaVA1.5 | ✅ ||
-| LLaVA 1.6 | ✅ |Modify the default value of `max_position_embeddings` in the weight file `config.json` to `5120` to run LLaVA 1.6 on Ascend NPU|
+| LLaVA 1.5 | ✅ ||
+| LLaVA 1.6 | ✅ |[#553](https://github.com/vllm-project/vllm-ascend/issues/553)|
+| Baichuan | ✅ ||
+| Phi-4-mini | ✅ ||
+| Gemma-3 | ❌ |[#496](https://github.com/vllm-project/vllm-ascend/issues/496)|
+| ChatGLM | ❌ | [#554](https://github.com/vllm-project/vllm-ascend/issues/554)|
+| LLama4 | ❌ |[#471](https://github.com/vllm-project/vllm-ascend/issues/471)|
 | Mllama | |Need test|
 | LLaVA-Next | |Need test|
 | LLaVA-Next-Video | |Need test|
 | Phi-3-Vison/Phi-3.5-Vison | |Need test|
 | Ultravox | |Need test|
+| Mistral | | Need test |
+| DeepSeek v2.5 | |Need test |
+| Gemma-2 | |Need test|
+| GLM-4v | |Need test|