# enginex-hygon-vllm
A text-generation engine for Hygon DCU accelerator cards, built on the vLLM engine with architecture-specific adaptations and optimizations. It supports recent open-source models such as Qwen, DeepSeek, and Llama.

Because launch procedures and container images differ slightly from model to model, see the `/enginex` directory for the launch and test instructions of each supported model.
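As a rough illustration only: the command below is a generic vLLM OpenAI-compatible server invocation, not this project's verified launch script. The model name, port, and parallelism flags are placeholders; the actual image and arguments for each model are given in its README under `/enginex`.

```shell
# Hypothetical example: serve one of the supported models with vLLM's
# OpenAI-compatible HTTP server. All values below are placeholders --
# consult the model-specific instructions under /enginex before use.
vllm serve Qwen/QwQ-32B \
    --host 0.0.0.0 \
    --port 8000 \
    --tensor-parallel-size 4
```

Once the server is up, any OpenAI-compatible client can talk to it at `http://<host>:8000/v1`.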
## Supported models

See the project directory `/enginex` for how to run each supported model file.

Supported model list:
- jinaai/jina-embeddings-v3
- deepseek-ai/DeepSeek-R1
- Qwen/QwQ-32B
- deepseek-ai/DeepSeek-V3
- deepseek-ai/DeepSeek-V3.1
- LLaMA_Fastchat_pytorch
- Qwen/Qwen3-30B-A3B
- Qwen-7B_fastllm
- ChatGLM-6B_fastllm
- ZhipuAI/ChatGLM-6B
- Shanghai_AI_Laboratory/internlm-chat-7b
- ZhipuAI/glm-4v-9b
- ZhipuAI/GLM-4-9B-0414
- deepseek-ai/DeepSeek-Coder-V2-Base
- openai-community/gpt2
- ZhipuAI/chatglm2-6b
- Qwen/Qwen-7B-Chat
- baichuan-inc/Baichuan2-13B-Chat
- ZhipuAI/chatglm3-6b
- deepseek-ai/DeepSeek-V2
- Qwen/Qwen2.5-Omni-7B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- LLM-Research/Meta-Llama-3-8B-Instruct
- Qwen/Qwen1.5-14B-Chat
- Qwen/Qwen2-7B
- Qwen/Qwen3-Embedding-0.6B
- baichuan-inc/baichuan-7B
- gaodema/GME-Qwen2-VL
- OpenBMB/MiniCPM3-4B
- ZhipuAI/glm-10b-chinese
- 01ai/Yi-6B-Chat
- 01ai/Yi-34B-Chat
- ZhipuAI/glm-4-9b-chat
- Qwen/Qwen2.5-Coder-0.5B-Instruct
- Qwen/Qwen2.5-Coder-1.5B-Instruct
- Qwen/Qwen2.5-Coder-3B-Instruct
- Qwen/Qwen2.5-Coder-7B-Instruct
- Qwen/Qwen2.5-Coder-14B-Instruct
- Qwen/Qwen2.5-Coder-0.5B
- Qwen/Qwen2.5-Coder-1.5B
- Qwen/Qwen2.5-Coder-3B
- Qwen/Qwen2.5-Coder-7B
- Qwen/Qwen2.5-Coder-14B
- Qwen/Qwen2.5-Coder-32B