### What this PR does / why we need it?
Update installation and tutorial doc.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Preview.

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
# Supported Models
| Model | Supported | Note |
|-------|-----------|------|
| DeepSeek v3 | ✅ | |
| DeepSeek R1 | ✅ | |
| DeepSeek Distill (Qwen/Llama) | ✅ | |
| Qwen3 | ✅ | |
| Qwen3-MoE | ✅ | |
| Qwen2-VL | ✅ | |
| Qwen2-Audio | ✅ | |
| Qwen2.5 | ✅ | |
| Qwen2.5-VL | ✅ | |
| QwQ-32B | ✅ | |
| MiniCPM | ✅ | |
| Llama3.1/3.2 | ✅ | |
| Internlm | ✅ | |
| InternVL2 | ✅ | |
| InternVL2.5 | ✅ | |
| Molmo | ✅ | |
| LLaVA 1.5 | ✅ | |
| LLaVA 1.6 | ✅ | [#553](https://github.com/vllm-project/vllm-ascend/issues/553) |
| Baichuan | ✅ | |
| Phi-4-mini | ✅ | |
| Gemma-3 | ❌ | [#496](https://github.com/vllm-project/vllm-ascend/issues/496) |
| ChatGLM | ❌ | [#554](https://github.com/vllm-project/vllm-ascend/issues/554) |
| Llama4 | ❌ | [#471](https://github.com/vllm-project/vllm-ascend/issues/471) |
| Mllama | | Need test |
| LLaVA-Next | | Need test |
| LLaVA-Next-Video | | Need test |
| Phi-3-Vision/Phi-3.5-Vision | | Need test |
| Ultravox | | Need test |
| Mistral | | Need test |
| DeepSeek v2.5 | | Need test |
| Gemma-2 | | Need test |
| GLM-4v | | Need test |
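Models marked ✅ can be served through the standard vLLM entry points once vllm-ascend is installed. A minimal sketch using the stock vLLM CLI (the model name `Qwen/Qwen2.5-7B-Instruct` is only an illustrative pick from the table above; substitute any supported model):

```shell
# Launch an OpenAI-compatible API server for a supported model
# (illustrative model name; requires vllm-ascend and Ascend NPU hardware)
vllm serve Qwen/Qwen2.5-7B-Instruct
```

After the server starts, it accepts requests on the usual OpenAI-compatible endpoints (e.g. `/v1/chat/completions`) on port 8000 by default.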