# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-ascend team
# This file is distributed under the same license as the vllm-ascend
# package.
# FIRST AUTHOR , 2025.
#
msgid ""
msgstr ""
"Project-Id-Version: vllm-ascend\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2026-04-14 09:08+0000\n"
"PO-Revision-Date: 2025-07-18 10:09+0800\n"
"Last-Translator: \n"
"Language: zh_CN\n"
"Language-Team: zh_CN \n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.18.0\n"

#: ../../source/quick_start.md:1
msgid "Quickstart"
msgstr "快速入门"

#: ../../source/quick_start.md:3
msgid "Prerequisites"
msgstr "先决条件"

#: ../../source/quick_start.md:5
msgid "Supported Devices"
msgstr "支持的设备"

#: ../../source/quick_start.md:7
msgid ""
"Atlas A2 training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 "
"Box16, Atlas 300T A2)"
msgstr ""
"Atlas A2 训练系列(Atlas 800T A2、Atlas 900 A2 PoD、Atlas 200T A2 Box16、Atlas "
"300T A2)"

#: ../../source/quick_start.md:8
msgid "Atlas 800I A2 inference series (Atlas 800I A2)"
msgstr "Atlas 800I A2 推理系列(Atlas 800I A2)"

#: ../../source/quick_start.md:9
msgid ""
"Atlas A3 training series (Atlas 800T A3, Atlas 900 A3 SuperPoD, Atlas "
"9000 A3 SuperPoD)"
msgstr ""
"Atlas A3 训练系列(Atlas 800T A3、Atlas 900 A3 SuperPoD、Atlas 9000 A3 SuperPoD)"

#: ../../source/quick_start.md:10
msgid "Atlas 800I A3 inference series (Atlas 800I A3)"
msgstr "Atlas 800I A3 推理系列(Atlas 800I A3)"

#: ../../source/quick_start.md:11
msgid "[Experimental] Atlas 300I inference series (Atlas 300I Duo)"
msgstr "[实验性] Atlas 300I 推理系列(Atlas 300I Duo)"

#: ../../source/quick_start.md:13
msgid "Setup environment using container"
msgstr "使用容器设置环境"

#: ../../source/quick_start.md
msgid "Ubuntu"
msgstr "Ubuntu"

#: ../../source/quick_start.md
msgid "openEuler"
msgstr "openEuler"

#: ../../source/quick_start.md:85
msgid ""
"The default workdir is `/workspace`, vLLM and vLLM Ascend code are placed"
" in `/vllm-workspace` and installed in [development "
"mode](https://setuptools.pypa.io/en/latest/userguide/development_mode.html)"
" (`pip install -e`) to help developers make changes effective immediately"
" without requiring a new installation."
msgstr ""
"默认工作目录为 `/workspace`,vLLM 和 vLLM Ascend 代码位于 `/vllm-workspace` 目录下,"
"并以[开发模式](https://setuptools.pypa.io/en/latest/userguide/development_mode.html)"
"(`pip install -e`)安装,以便开发者能够即时生效更改,而无需重新安装。"

#: ../../source/quick_start.md:87
msgid "Usage"
msgstr "用法"

#: ../../source/quick_start.md:89
msgid "You can use ModelScope mirror to speed up download:"
msgstr "您可以使用 ModelScope 镜像来加速下载:"

#: ../../source/quick_start.md:97
msgid "There are two ways to start vLLM on Ascend NPU:"
msgstr "在昇腾 NPU 上启动 vLLM 有两种方式:"

#: ../../source/quick_start.md
msgid "Offline Batched Inference"
msgstr "离线批量推理"

#: ../../source/quick_start.md:103
msgid ""
"With vLLM installed, you can start generating texts for list of input "
"prompts (i.e. offline batch inference)."
msgstr "安装 vLLM 后,您可以开始为一系列输入提示生成文本(即离线批量推理)。"

#: ../../source/quick_start.md:105
msgid ""
"Try to run below Python script directly or use `python3` shell to "
"generate texts:"
msgstr "尝试直接运行下面的 Python 脚本,或者使用 `python3` 交互式环境来生成文本:"

#: ../../source/quick_start.md
msgid "OpenAI Completions API"
msgstr "OpenAI Completions API"

#: ../../source/quick_start.md:132
msgid ""
"vLLM can also be deployed as a server that implements the OpenAI API "
"protocol. Run the following command to start the vLLM server with the "
"[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model:"
msgstr ""
"vLLM 也可以部署为实现 OpenAI API 协议的服务器。运行以下命令,使用 "
"[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) 模型启动 vLLM 服务器:"

#: ../../source/quick_start.md:143
msgid "If you see a log as below:"
msgstr "如果您看到如下日志:"

#: ../../source/quick_start.md:152
msgid "Congratulations, you have successfully started the vLLM server!"
msgstr "恭喜,您已成功启动 vLLM 服务器!"

#: ../../source/quick_start.md:154
msgid "You can query the list of models:"
msgstr "您可以查询模型列表:"

#: ../../source/quick_start.md:162
msgid "You can also query the model with input prompts:"
msgstr "您也可以通过输入提示来查询模型:"

#: ../../source/quick_start.md:177
msgid ""
"vLLM is serving as a background process, you can use `kill -2 $VLLM_PID` "
"to stop the background process gracefully, which is similar to `Ctrl-C` "
"for stopping the foreground vLLM process:"
msgstr ""
"vLLM 正作为后台进程运行,您可以使用 `kill -2 $VLLM_PID` 来优雅地停止后台进程,"
"这类似于使用 `Ctrl-C` 停止前台 vLLM 进程:"

#: ../../source/quick_start.md:186
msgid "The output is as below:"
msgstr "输出如下:"

#: ../../source/quick_start.md:195
msgid "Finally, you can exit the container by using `ctrl-D`."
msgstr "最后,您可以通过按 `ctrl-D` 退出容器。"