From 75c3f9a7807daa3346685be88e4f06d6a5f362f0 Mon Sep 17 00:00:00 2001
From: herizhen <59841270+herizhen@users.noreply.github.com>
Date: Mon, 10 Nov 2025 16:22:52 +0800
Subject: [PATCH] [Typo] LLama has been changed to Llama (#4089)

### What this PR does / why we need it?
The first-generation model uses "LLama", while subsequent models use "Llama". The second "L" should be lowercase, so other instances of "LLama" on this page are corrected accordingly.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main: https://github.com/vllm-project/vllm/commit/83f478bb19489b41e9d208b47b4bb5a95ac171ac

Signed-off-by: herizhen
Co-authored-by: herizhen
---
 docs/source/user_guide/support_matrix/supported_models.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/source/user_guide/support_matrix/supported_models.md b/docs/source/user_guide/support_matrix/supported_models.md
index c5a718b0..c72bcdb8 100644
--- a/docs/source/user_guide/support_matrix/supported_models.md
+++ b/docs/source/user_guide/support_matrix/supported_models.md
@@ -11,7 +11,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | DeepSeek V3/3.1 | ✅ | |||||||||||||||||||
 | DeepSeek V3.2 EXP | ✅ | | ✅ | A2/A3 | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ❌ | | | 163840 | | [DeepSeek-V3.2-Exp tutorial](../../tutorials/DeepSeek-V3.2-Exp.md) |
 | DeepSeek R1 | ✅ | |||||||||||||||||||
-| DeepSeek Distill (Qwen/LLama) | ✅ | |||||||||||||||||||
+| DeepSeek Distill (Qwen/Llama) | ✅ | |||||||||||||||||||
 | Qwen3 | ✅ | |||||||||||||||||||
 | Qwen3-based | ✅ | |||||||||||||||||||
 | Qwen3-Coder | ✅ | |||||||||||||||||||
@@ -21,7 +21,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | Qwen2 | ✅ | |||||||||||||||||||
 | Qwen2-based | ✅ | |||||||||||||||||||
 | QwQ-32B | ✅ | |||||||||||||||||||
-| LLama2/3/3.1 | ✅ | |||||||||||||||||||
+| Llama2/3/3.1 | ✅ | |||||||||||||||||||
 | Internlm | ✅ | [#1962](https://github.com/vllm-project/vllm-ascend/issues/1962) |||||||||||||||||||
 | Baichuan | ✅ | |||||||||||||||||||
 | Baichuan2 | ✅ | |||||||||||||||||||
@@ -73,8 +73,8 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | Mistral3 | ✅ | |||||||||||||||||||
 | Phi-3-Vison/Phi-3.5-Vison | ✅ | |||||||||||||||||||
 | Gemma3 | ✅ | |||||||||||||||||||
-| LLama4 | ❌ | [1972](https://github.com/vllm-project/vllm-ascend/issues/1972) |||||||||||||||||||
-| LLama3.2 | ❌ | [1972](https://github.com/vllm-project/vllm-ascend/issues/1972) |||||||||||||||||||
+| Llama4 | ❌ | [1972](https://github.com/vllm-project/vllm-ascend/issues/1972) |||||||||||||||||||
+| Llama3.2 | ❌ | [1972](https://github.com/vllm-project/vllm-ascend/issues/1972) |||||||||||||||||||
 | Keye-VL-8B-Preview | ❌ | [1963](https://github.com/vllm-project/vllm-ascend/issues/1963) |||||||||||||||||||
 | Florence-2 | ❌ | [2259](https://github.com/vllm-project/vllm-ascend/issues/2259) |||||||||||||||||||
 | GLM-4V | ❌ | [2260](https://github.com/vllm-project/vllm-ascend/issues/2260) |||||||||||||||||||