Commit vllm 0.11.0 development branch

This commit is contained in:
chenyili
2025-12-10 17:51:24 +08:00
parent deab7dd0b6
commit 7c22d621fb
175 changed files with 31856 additions and 8683 deletions

View File

@@ -0,0 +1,177 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/contribution/index.md:1
msgid "Contributing"
msgstr "贡献"
#: ../../source/developer_guide/contribution/index.md:3
#, fuzzy
msgid "Building and Testing"
msgstr "构建与测试"
#~ msgid "Index"
#~ msgstr "索引"
#~ msgid ""
#~ "It's recommended to set up a local"
#~ " development environment to build and "
#~ "test before you submit a PR."
#~ msgstr "建议先搭建本地开发环境来进行构建和测试,再提交 PR。"
#~ msgid "Setup development environment"
#~ msgstr "搭建开发环境"
#~ msgid ""
#~ "Theoretically, the vllm-kunlun build is only supported on Linux because "
#~ "the `vllm-kunlun` dependency `torch_npu` only supports Linux."
#~ msgstr "理论上,vllm-kunlun 构建仅支持 Linux,因为 `vllm-kunlun` 的依赖项 `torch_npu` 只支持 Linux。"
#~ msgid ""
#~ "But you can still set up a dev env on Linux/Windows/macOS for linting "
#~ "and basic tests with the following commands:"
#~ msgstr "但你仍然可以在 Linux/Windows/macOS 上按照以下命令设置开发环境,用于代码规约检查和基本测试:"
#~ msgid "Run lint locally"
#~ msgstr "在本地运行 lint"
#~ msgid "Run CI locally"
#~ msgstr "在本地运行 CI"
#~ msgid "After completing the \"Run lint\" setup, you can run CI locally:"
#~ msgstr "在完成“运行 lint”设置后,你可以在本地运行 CI:"
#~ msgid "Submit the commit"
#~ msgstr "提交该 commit"
#~ msgid ""
#~ "🎉 Congratulations! You have completed "
#~ "the development environment setup."
#~ msgstr "🎉 恭喜!你已经完成了开发环境的搭建。"
#~ msgid "Test locally"
#~ msgstr "本地测试"
#~ msgid ""
#~ "You can refer to [Testing](./testing.md) "
#~ "doc to help you setup testing "
#~ "environment and running tests locally."
#~ msgstr "你可以参考 [测试](./testing.md) 文档,帮助你搭建测试环境并在本地运行测试。"
#~ msgid "DCO and Signed-off-by"
#~ msgstr "DCO 和签名确认"
#~ msgid ""
#~ "When contributing changes to this "
#~ "project, you must agree to the "
#~ "DCO. Commits must include a `Signed-"
#~ "off-by:` header which certifies "
#~ "agreement with the terms of the "
#~ "DCO."
#~ msgstr "当为本项目贡献更改时,你必须同意 DCO。提交必须包含 `Signed-off-by:` 头部,以证明你同意 DCO 的条款。"
#~ msgid "Using `-s` with `git commit` will automatically add this header."
#~ msgstr "在使用 `git commit` 时加上 `-s` 参数会自动添加这个头部信息。"
#~ msgid "PR Title and Classification"
#~ msgstr "PR 标题与分类"
#~ msgid ""
#~ "Only specific types of PRs will be"
#~ " reviewed. The PR title is prefixed"
#~ " appropriately to indicate the type "
#~ "of change. Please use one of the"
#~ " following:"
#~ msgstr "只有特定类型的 PR 会被审核。PR 标题应使用合适的前缀以指明更改类型。请使用以下之一:"
#~ msgid "`[Attention]` for new features or optimization in attention."
#~ msgstr "`[Attention]` 用于注意力机制中的新特性或优化。"
#~ msgid "`[Communicator]` for new features or optimization in communicators."
#~ msgstr "`[Communicator]` 用于通信器中的新特性或优化。"
#~ msgid "`[ModelRunner]` for new features or optimization in model runner."
#~ msgstr "`[ModelRunner]` 用于模型运行器中的新功能或优化。"
#~ msgid "`[Platform]` for new features or optimization in platform."
#~ msgstr "`[Platform]` 用于平台中的新功能或优化。"
#~ msgid "`[Worker]` for new features or optimization in worker."
#~ msgstr "`[Worker]` 用于 worker 的新功能或优化。"
#~ msgid ""
#~ "`[Core]` for new features or "
#~ "optimization in the core vllm-kunlun"
#~ " logic (such as platform, attention, "
#~ "communicators, model runner)"
#~ msgstr "`[Core]` 用于核心 vllm-kunlun 逻辑中的新特性或优化(例如平台、注意力机制、通信器、模型运行器)。"
#~ msgid "`[Kernel]` changes affecting compute kernels and ops."
#~ msgstr "`[Kernel]` 用于影响计算内核和算子的更改。"
#~ msgid "`[Bugfix]` for bug fixes."
#~ msgstr "`[Bugfix]` 用于错误修复。"
#~ msgid "`[Doc]` for documentation fixes and improvements."
#~ msgstr "`[Doc]` 用于文档修复和改进。"
#~ msgid "`[Test]` for tests (such as unit tests)."
#~ msgstr "`[Test]` 用于测试(如单元测试)。"
#~ msgid "`[CI]` for build or continuous integration improvements."
#~ msgstr "`[CI]` 用于构建或持续集成的改进。"
#~ msgid ""
#~ "`[Misc]` for PRs that do not fit"
#~ " the above categories. Please use "
#~ "this sparingly."
#~ msgstr "对于不属于上述类别的 PR,请使用 `[Misc]`。请谨慎使用此标签。"
#~ msgid ""
#~ "If the PR spans more than one "
#~ "category, please include all relevant "
#~ "prefixes."
#~ msgstr "如果拉取请求PR涵盖多个类别请包含所有相关的前缀。"
#~ msgid "Others"
#~ msgstr "其他"
#~ msgid ""
#~ "You may find more information about contributing to the vLLM Kunlun "
#~ "backend plugin on "
#~ "[<u>docs.vllm.ai</u>](https://docs.vllm.ai/en/latest/contributing/overview.html)."
#~ " If you find any problems when contributing, feel free to submit a PR "
#~ "to improve the doc to help other developers."
#~ msgstr ""
#~ "你可以在 "
#~ "[<u>docs.vllm.ai</u>](https://docs.vllm.ai/en/latest/contributing/overview.html)"
#~ " 上找到有关为 vLLM Kunlun "
#~ "后端插件做贡献的更多信息。如果你在贡献过程中遇到任何问题,欢迎随时提交 PR 来改进文档,以帮助其他开发者。"

View File

@@ -0,0 +1,133 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/contribution/multi_node_test.md:1
msgid "Multi Node Test"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:3
msgid ""
"Multi-Node CI is designed to test distributed scenarios of very large "
"models, e.g. disaggregated_prefill with multiple DP ranks across multiple nodes."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:5
msgid "How it works"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:7
msgid ""
"The following picture shows the basic deployment view of the multi-node "
"CI mechanism. It shows how the GitHub Action interacts with "
"[lws](https://lws.sigs.k8s.io/docs/overview/) (a kind of Kubernetes CRD "
"resource)."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:9
msgid "![alt text](../../assets/deployment.png)"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:9
#: ../../source/developer_guide/contribution/multi_node_test.md:13
msgid "alt text"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:11
msgid ""
"From the workflow perspective, we can see how the final test script is "
"executed. The key point is these two files: [lws.yaml and "
"run.sh](https://github.com/vllm-project/vllm-kunlun/tree/main/tests/e2e/nightly/multi_node/scripts). "
"The former defines how our k8s cluster is brought up, and the latter is "
"the entry script run when each pod starts. Each node executes different "
"logic according to the "
"[LWS_WORKER_INDEX](https://lws.sigs.k8s.io/docs/reference/labels-annotations-and-environment-variables/) "
"environment variable, so that multiple nodes can form a distributed "
"cluster to perform tasks."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:13
msgid "![alt text](../../assets/workflow.png)"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:15
msgid "How to contribute"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:17
msgid "Upload custom weights"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:19
msgid ""
"If you need customized weights (for example, you quantized a w8a8 weight "
"for DeepSeek-V3 and want it to run on CI), you are welcome to upload the "
"weights to ModelScope's [vllm-kunlun](https://www.modelscope.cn/organization/vllm-kunlun) "
"organization. If you do not have permission to upload, please contact @Potabk."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:21
msgid "Add config yaml"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:23
msgid ""
"As the entrypoint script [run.sh](https://github.com/vllm-project/vllm-"
"kunlun/blob/0bf3f21a987aede366ec4629ad0ffec8e32fe90d/tests/e2e/nightly/multi_node/scripts/run.sh#L106) "
"shows, when a k8s pod starts it traverses all *.yaml files in the "
"[directory](https://github.com/vllm-project/vllm-kunlun/tree/main/tests/e2e/nightly/multi_node/config/models), "
"reading and executing them according to their configurations. So all we "
"need to do is add YAML files like [DeepSeek-V3.yaml](https://github.com/vllm-"
"project/vllm-kunlun/blob/main/tests/e2e/nightly/multi_node/config/models/DeepSeek-V3.yaml)."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:25
msgid ""
"Suppose you have **2 nodes** running a 1P1D setup (1 Prefiller + 1 "
"Decoder):"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:27
msgid "you may add a config file that looks like:"
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:69
msgid ""
"Add the case to the nightly workflow. Currently, the multi-node test "
"workflow is defined in [vllm_kunlun_test_nightly_a2/a3.yaml](https://github.com"
"/vllm-project/vllm-kunlun/blob/main/.github/workflows/vllm_kunlun_test_nightly_a3.yaml)."
msgstr ""
#: ../../source/developer_guide/contribution/multi_node_test.md:99
msgid ""
"The matrix above defines all the parameters required to add a multi-node "
"use case. If you are adding a new use case, the parameters worth paying "
"attention to are the size and the path to the yaml configuration file: "
"the former defines the number of nodes required for your use case, and "
"the latter is the path to the configuration file you completed in step 2."
msgstr ""
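The dispatch described above, where each pod branches on the `LWS_WORKER_INDEX` environment variable injected by lws, can be sketched in Python. The role split (the first worker as prefiller, the rest as decoders) is an illustrative assumption for a 1P1D layout, not the actual `run.sh` logic:

```python
import os

def node_role(worker_index: int, num_prefillers: int = 1) -> str:
    """Map an LWS worker index to a role in a hypothetical 1P1D layout.

    Assumption for illustration: the first `num_prefillers` workers act as
    prefillers and the remaining workers act as decoders.
    """
    return "prefiller" if worker_index < num_prefillers else "decoder"

def main() -> str:
    # lws injects LWS_WORKER_INDEX into every pod of the leader/worker group.
    index = int(os.environ.get("LWS_WORKER_INDEX", "0"))
    role = node_role(index)
    # A real entry script would launch vLLM here with role-specific flags.
    return role

if __name__ == "__main__":
    print(main())
```

In the real `run.sh`, the same branching idea selects which startup commands each node executes, so a single script can drive the whole distributed cluster.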

View File

@@ -0,0 +1,265 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/contribution/testing.md:1
msgid "Testing"
msgstr "测试"
#: ../../source/developer_guide/contribution/testing.md:3
#, fuzzy
msgid ""
"This document explains how to write E2E tests and unit tests to verify "
"the implementation of your feature."
msgstr "本节介绍如何编写端到端测试和单元测试,以验证你的功能实现。"
#: ../../source/developer_guide/contribution/testing.md:5
#, fuzzy
msgid "Setup a test environment"
msgstr "设置测试环境"
#: ../../source/developer_guide/contribution/testing.md:7
#, fuzzy
msgid ""
"The fastest way to setup a test environment is to use the main branch's "
"container image:"
msgstr "搭建测试环境最快的方法是使用 main 分支的容器镜像:"
#: ../../source/developer_guide/contribution/testing.md
msgid "Local (CPU)"
msgstr "本地(CPU)"
#: ../../source/developer_guide/contribution/testing.md:18
#, fuzzy
msgid "You can run the unit tests on CPUs with the following steps:"
msgstr "你可以按照以下步骤在 CPU 上运行单元测试:"
#: ../../source/developer_guide/contribution/testing.md
msgid "Single card"
msgstr "单卡"
#: ../../source/developer_guide/contribution/testing.md:86
#: ../../source/developer_guide/contribution/testing.md:125
msgid "After starting the container, you should install the required packages:"
msgstr "启动容器后,你应该安装所需的软件包:"
#: ../../source/developer_guide/contribution/testing.md
msgid "Multi cards"
msgstr "多卡"
#: ../../source/developer_guide/contribution/testing.md:139
msgid "Running tests"
msgstr "运行测试"
#: ../../source/developer_guide/contribution/testing.md:141
#, fuzzy
msgid "Unit tests"
msgstr "单元测试"
#: ../../source/developer_guide/contribution/testing.md:143
msgid "There are several principles to follow when writing unit tests:"
msgstr "编写单元测试时需要遵循几个原则:"
#: ../../source/developer_guide/contribution/testing.md:145
#, fuzzy
msgid ""
"The test file path should be consistent with the source file and start "
"with the `test_` prefix, such as: `vllm_kunlun/worker/worker_v1.py` --> "
"`tests/ut/worker/test_worker_v1.py`"
msgstr ""
"测试文件的路径应与源文件保持一致,并以 `test_` 前缀开头,例如:`vllm_kunlun/worker/worker_v1.py` -->"
" `tests/ut/worker/test_worker_v1.py`"
#: ../../source/developer_guide/contribution/testing.md:146
#, fuzzy
msgid ""
"The vLLM Kunlun test uses unittest framework. See "
"[here](https://docs.python.org/3/library/unittest.html#module-unittest) "
"to understand how to write unit tests."
msgstr ""
"vLLM Kunlun 测试使用 unittest "
"框架,参见[这里](https://docs.python.org/3/library/unittest.html#module-"
"unittest)了解如何编写单元测试。"
#: ../../source/developer_guide/contribution/testing.md:147
#, fuzzy
msgid ""
"All unit tests can be run on CPUs, so you must mock device-related "
"functions to run on the host."
msgstr "所有单元测试都可以在 CPU 上运行,因此你必须对设备相关的函数进行 mock,使其在主机上运行。"
#: ../../source/developer_guide/contribution/testing.md:148
msgid ""
"Example: [tests/ut/test_kunlun_config.py](https://github.com/vllm-project"
"/vllm-kunlun/blob/main/tests/ut/test_kunlun_config.py)."
msgstr ""
"示例:[tests/ut/test_kunlun_config.py](https://github.com/vllm-project/vllm-"
"kunlun/blob/main/tests/ut/test_kunlun_config.py)。"
#: ../../source/developer_guide/contribution/testing.md:149
msgid "You can run the unit tests using `pytest`:"
msgstr "你可以使用 `pytest` 运行单元测试:"
#: ../../source/developer_guide/contribution/testing.md
#, fuzzy
msgid "Single-card"
msgstr "单卡"
#: ../../source/developer_guide/contribution/testing.md
#, fuzzy
msgid "Multi-card"
msgstr "多卡"
#: ../../source/developer_guide/contribution/testing.md:196
msgid "E2E test"
msgstr "端到端测试"
#: ../../source/developer_guide/contribution/testing.md:198
#, fuzzy
msgid ""
"Although vllm-kunlun provides the [E2E test](https://github.com/vllm-"
"project/vllm-kunlun/blob/main/.github/workflows/vllm_kunlun_test.yaml) on"
" Kunlun CI, you can also run it locally."
msgstr ""
"虽然 vllm-kunlun 在 Kunlun CI 上提供了 [端到端测试](https://github.com/vllm-"
"project/vllm-"
"kunlun/blob/main/.github/workflows/vllm_kunlun_test.yaml),你也可以在本地运行它。"
#: ../../source/developer_guide/contribution/testing.md:208
#, fuzzy
msgid "You can't run the E2E test on CPUs."
msgstr "你无法在 CPU 上运行端到端测试。"
#: ../../source/developer_guide/contribution/testing.md:247
#, fuzzy
msgid ""
"This will reproduce the E2E test. See "
"[vllm_kunlun_test.yaml](https://github.com/vllm-project/vllm-"
"kunlun/blob/main/.github/workflows/vllm_kunlun_test.yaml)."
msgstr ""
"这将复现端到端测试。参见 [vllm_kunlun_test.yaml](https://github.com/vllm-project/vllm-"
"kunlun/blob/main/.github/workflows/vllm_kunlun_test.yaml)。"
#: ../../source/developer_guide/contribution/testing.md:249
msgid "E2E test example:"
msgstr "E2E 测试示例:"
#: ../../source/developer_guide/contribution/testing.md:251
msgid ""
"Offline test example: "
"[`tests/e2e/singlecard/test_offline_inference.py`](https://github.com"
"/vllm-project/vllm-"
"kunlun/blob/main/tests/e2e/singlecard/test_offline_inference.py)"
msgstr ""
"离线测试示例:[`tests/e2e/singlecard/test_offline_inference.py`](https://github.com"
"/vllm-project/vllm-"
"kunlun/blob/main/tests/e2e/singlecard/test_offline_inference.py)"
#: ../../source/developer_guide/contribution/testing.md:252
msgid ""
"Online test examples: "
"[`tests/e2e/singlecard/test_prompt_embedding.py`](https://github.com"
"/vllm-project/vllm-"
"kunlun/blob/main/tests/e2e/singlecard/test_prompt_embedding.py)"
msgstr ""
"在线测试示例:[`tests/e2e/singlecard/test_prompt_embedding.py`](https://github.com"
"/vllm-project/vllm-"
"kunlun/blob/main/tests/e2e/singlecard/test_prompt_embedding.py)"
#: ../../source/developer_guide/contribution/testing.md:253
msgid ""
"Correctness test example: "
"[`tests/e2e/singlecard/test_aclgraph.py`](https://github.com/vllm-project"
"/vllm-kunlun/blob/main/tests/e2e/singlecard/test_aclgraph.py)"
msgstr ""
"正确性测试示例:[`tests/e2e/singlecard/test_aclgraph.py`](https://github.com"
"/vllm-project/vllm-"
"kunlun/blob/main/tests/e2e/singlecard/test_aclgraph.py)"
#: ../../source/developer_guide/contribution/testing.md:254
msgid ""
"Reduced Layer model test example: [test_torchair_graph_mode.py - "
"DeepSeek-V3-Pruning](https://github.com/vllm-project/vllm-"
"kunlun/blob/20767a043cccb3764214930d4695e53941de87ec/tests/e2e/multicard/test_torchair_graph_mode.py#L48)"
msgstr ""
"简化层模型测试示例:[test_torchair_graph_mode.py - "
"DeepSeek-V3-Pruning](https://github.com/vllm-project/vllm-"
"kunlun/blob/20767a043cccb3764214930d4695e53941de87ec/tests/e2e/multicard/test_torchair_graph_mode.py#L48)"
#: ../../source/developer_guide/contribution/testing.md:256
#, fuzzy
msgid ""
"The CI resource is limited, and you might need to reduce the number of "
"layers of a model. Below is an example of how to generate a reduced layer"
" model:"
msgstr "CI 资源有限,你可能需要减少模型的层数。下面是一个生成减少层数模型的示例:"
#: ../../source/developer_guide/contribution/testing.md:257
#, fuzzy
msgid ""
"Fork the original model repo in modelscope. All the files in the repo "
"except for weights are required."
msgstr "在 ModelScope 中 fork 原始模型仓库。除权重文件外,仓库中的所有文件都是必需的。"
#: ../../source/developer_guide/contribution/testing.md:258
#, python-brace-format
msgid ""
"Set `num_hidden_layers` to the expected number of layers, e.g., "
"`{\"num_hidden_layers\": 2,}`"
msgstr "将 `num_hidden_layers` 设置为期望的层数,例如 `{\"num_hidden_layers\": 2,}`"
#: ../../source/developer_guide/contribution/testing.md:259
msgid ""
"Copy the following python script as `generate_random_weight.py`. Set the "
"relevant parameters `MODEL_LOCAL_PATH`, `DIST_DTYPE` and "
"`DIST_MODEL_PATH` as needed:"
msgstr ""
"将以下 Python 脚本复制为 `generate_random_weight.py`。根据需要设置相关参数 "
"`MODEL_LOCAL_PATH`、`DIST_DTYPE` 和 `DIST_MODEL_PATH`:"
#: ../../source/developer_guide/contribution/testing.md:277
msgid "Run doctest"
msgstr "运行 doctest"
#: ../../source/developer_guide/contribution/testing.md:279
#, fuzzy
msgid ""
"vllm-kunlun provides a `vllm-kunlun/tests/e2e/run_doctests.sh` command to"
" run all doctests in the doc files. The doctest is a good way to make "
"sure docs stay current and examples remain executable, which can be run "
"locally as follows:"
msgstr ""
"vllm-kunlun 提供了一个 `vllm-kunlun/tests/e2e/run_doctests.sh` 命令,用于运行文档文件中的所有"
" doctest。doctest 是确保文档保持最新且示例可执行的好方法,你可以按照以下方式在本地运行它:"
#: ../../source/developer_guide/contribution/testing.md:287
#, fuzzy
msgid ""
"This will reproduce the same environment as the CI. See "
"[vllm_kunlun_doctest.yaml](https://github.com/vllm-project/vllm-"
"kunlun/blob/main/.github/workflows/vllm_kunlun_doctest.yaml)."
msgstr ""
"这将复现与 CI 相同的环境。参见 [vllm_kunlun_doctest.yaml](https://github.com/vllm-project"
"/vllm-kunlun/blob/main/.github/workflows/vllm_kunlun_doctest.yaml)。"
#~ msgid "Multi cards test"
#~ msgstr "多卡测试"

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/accuracy_report/DeepSeek-V2-Lite.md:1
msgid "deepseek-ai/DeepSeek-V2-Lite"
msgstr ""

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/accuracy_report/Qwen2.5-VL-7B-Instruct.md:1
msgid "Qwen/Qwen2.5-VL-7B-Instruct"
msgstr ""

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/accuracy_report/Qwen3-30B-A3B.md:1
msgid "Qwen/Qwen3-30B-A3B"
msgstr ""

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/accuracy_report/Qwen3-8B-Base.md:1
msgid "Qwen/Qwen3-8B-Base"
msgstr ""

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/evaluation/accuracy_report/index.md:1
#: ../../developer_guide/evaluation/accuracy_report/index.md:3
msgid "Accuracy Report"
msgstr "准确性报告"

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/evaluation/index.md:1
#: ../../developer_guide/evaluation/index.md:3
msgid "Accuracy"
msgstr "准确性"

View File

@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/using_ais_bench.md:1
msgid "Using AISBench"
msgstr ""

View File

@@ -0,0 +1,100 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/using_evalscope.md:1
msgid "Using EvalScope"
msgstr "使用 EvalScope"
#~ msgid ""
#~ "This document will guide you through model inference stress testing and "
#~ "accuracy testing using "
#~ "[EvalScope](https://github.com/modelscope/evalscope)."
#~ msgstr ""
#~ "本文档将指导您如何使用 [EvalScope](https://github.com/modelscope/evalscope)"
#~ " 进行模型推理压力测试和精度测试。"
#~ msgid "1. Online serving"
#~ msgstr "1. 在线服务"
#~ msgid "You can run a docker container to start the vLLM server on a single XPU:"
#~ msgstr "你可以运行 docker 容器,在单个 XPU 上启动 vLLM 服务器:"
#~ msgid "If your service starts successfully, you will see the info shown below:"
#~ msgstr "如果你的服务启动成功,你会看到如下所示的信息:"
#~ msgid ""
#~ "Once your server is started, you can query the model with input "
#~ "prompts in a new terminal:"
#~ msgstr "一旦你的服务器启动后,你可以在新的终端中用输入提示词查询模型:"
#~ msgid "2. Install EvalScope using pip"
#~ msgstr "2. 使用 pip 安装 EvalScope"
#~ msgid "You can install EvalScope by using:"
#~ msgstr "你可以使用以下方式安装 EvalScope"
#~ msgid "3. Run gsm8k accuracy test using EvalScope"
#~ msgstr "3. 使用 EvalScope 运行 gsm8k 准确率测试"
#~ msgid "You can use `evalscope eval` to run the gsm8k accuracy test:"
#~ msgstr "你可以使用 `evalscope eval` 运行 gsm8k 准确率测试:"
#~ msgid "After 1-2 mins, the output is as shown below:"
#~ msgstr "1-2 分钟后,输出如下所示:"
#~ msgid ""
#~ "See more detail in: [EvalScope doc "
#~ "- Model API Service "
#~ "Evaluation](https://evalscope.readthedocs.io/en/latest/get_started/basic_usage.html"
#~ "#model-api-service-evaluation)."
#~ msgstr ""
#~ "更多详情请见:[EvalScope 文档 - 模型 API "
#~ "服务评测](https://evalscope.readthedocs.io/en/latest/get_started/basic_usage.html"
#~ "#model-api-service-evaluation)。"
#~ msgid "4. Run model inference stress testing using EvalScope"
#~ msgstr "4. 使用 EvalScope 运行模型推理压力测试"
#~ msgid "Install EvalScope[perf] using pip"
#~ msgstr "使用 pip 安装 EvalScope[perf]"
#~ msgid "Basic usage"
#~ msgstr "基本用法"
#~ msgid "You can use `evalscope perf` to run a perf test:"
#~ msgstr "你可以使用 `evalscope perf` 运行性能测试:"
#~ msgid "Output results"
#~ msgstr "输出结果"
#~ msgid ""
#~ "See more detail in: [EvalScope doc "
#~ "- Model Inference Stress "
#~ "Testing](https://evalscope.readthedocs.io/en/latest/user_guides/stress_test/quick_start.html"
#~ "#basic-usage)."
#~ msgstr ""
#~ "更多详情见:[EvalScope 文档 - "
#~ "模型推理压力测试](https://evalscope.readthedocs.io/en/latest/user_guides/stress_test/quick_start.html"
#~ "#basic-usage)。"
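The serving steps above start an OpenAI-compatible vLLM server and then query it from a new terminal. A minimal Python sketch of building such a query is below; the port, model name, and endpoint path are assumptions based on vLLM's OpenAI-compatible API, not values taken from this commit:

```python
import json
from urllib import request

def build_completion_request(base_url: str, model: str, prompt: str) -> request.Request:
    # Build a POST against vLLM's OpenAI-compatible /v1/completions endpoint.
    payload = {"model": model, "prompt": prompt, "max_tokens": 32}
    return request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local server; send with request.urlopen(req) once it is running.
req = build_completion_request("http://localhost:8000", "Qwen/Qwen3-8B-Base", "Hello")
```

Tools like EvalScope, lm-eval, and OpenCompass issue requests of this shape under the hood when pointed at the server's API base URL.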

View File

@@ -0,0 +1,62 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/using_lm_eval.md:1
msgid "Using lm-eval"
msgstr "使用 lm-eval"
#~ msgid ""
#~ "This document will guide you through accuracy testing using [lm-"
#~ "eval](https://github.com/EleutherAI/lm-evaluation-"
#~ "harness)."
#~ msgstr ""
#~ "本文将指导你如何使用 [lm-eval](https://github.com/EleutherAI/lm-"
#~ "evaluation-harness) 进行准确率测试。"
#~ msgid "1. Run docker container"
#~ msgstr "1. 运行 docker 容器"
#~ msgid "You can run a docker container on a single XPU:"
#~ msgstr "你可以在单个 XPU 上运行 docker 容器:"
#~ msgid "2. Run ceval accuracy test using lm-eval"
#~ msgstr "2. 使用 lm-eval 运行 ceval 准确率测试"
#~ msgid "Install lm-eval in the container."
#~ msgstr "在容器中安装 lm-eval。"
#~ msgid "Run the following command:"
#~ msgstr "运行以下命令:"
#~ msgid "After 1-2 mins, the output is as shown below:"
#~ msgstr "1-2 分钟后,输出如下所示:"
#~ msgid ""
#~ "You can see more usage on [Lm-"
#~ "eval Docs](https://github.com/EleutherAI/lm-evaluation-"
#~ "harness/blob/main/docs/README.md)."
#~ msgstr ""
#~ "你可以在 [Lm-eval 文档](https://github.com/EleutherAI"
#~ "/lm-evaluation-harness/blob/main/docs/README.md) "
#~ "上查看更多用法。"

View File

@@ -0,0 +1,77 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/evaluation/using_opencompass.md:1
msgid "Using OpenCompass"
msgstr "使用 OpenCompass"
#~ msgid ""
#~ "This document will guide you through accuracy testing using "
#~ "[OpenCompass](https://github.com/open-compass/opencompass)."
#~ msgstr ""
#~ "本文档将指导你如何使用 [OpenCompass](https://github.com/open-"
#~ "compass/opencompass) 进行准确率测试。"
#~ msgid "1. Online Serving"
#~ msgstr "1. 在线服务"
#~ msgid "You can run a docker container to start the vLLM server on a single XPU:"
#~ msgstr "你可以运行 docker 容器,在单个 XPU 上启动 vLLM 服务器:"
#~ msgid "If your service starts successfully, you will see the info shown below:"
#~ msgstr "如果你的服务启动成功,你会看到如下所示的信息:"
#~ msgid ""
#~ "Once your server is started, you can query the model with input "
#~ "prompts in a new terminal:"
#~ msgstr "一旦你的服务器启动后,你可以在新的终端中用输入提示词查询模型:"
#~ msgid "2. Run ceval accuracy test using OpenCompass"
#~ msgstr "2. 使用 OpenCompass 运行 ceval 准确率测试"
#~ msgid ""
#~ "Install OpenCompass and configure the "
#~ "environment variables in the container."
#~ msgstr "在容器中安装 OpenCompass 并配置环境变量。"
#~ msgid ""
#~ "Add `opencompass/configs/eval_vllm_kunlun_demo.py` with"
#~ " the following content:"
#~ msgstr "添加 `opencompass/configs/eval_vllm_kunlun_demo.py`,内容如下:"
#~ msgid "Run the following command:"
#~ msgstr "运行以下命令:"
#~ msgid "After 1-2 mins, the output is as shown below:"
#~ msgstr "1-2 分钟后,输出如下所示:"
#~ msgid ""
#~ "You can see more usage on "
#~ "[OpenCompass "
#~ "Docs](https://opencompass.readthedocs.io/en/latest/index.html)."
#~ msgstr ""
#~ "你可以在 [OpenCompass "
#~ "文档](https://opencompass.readthedocs.io/en/latest/index.html) "
#~ "查看更多用法。"


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/ACL_Graph.md:1
msgid "Graph"
msgstr ""


@@ -0,0 +1,30 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/KV_Cache_Pool_Guide.md:1
msgid "KV Cache Pool"
msgstr ""
#: ../../source/developer_guide/feature_guide/KV_Cache_Pool_Guide.md:3
msgid "Why KV Cache Pool?"
msgstr ""


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/ModelRunner_prepare_inputs.md:1
msgid "Prepare inputs for model forwarding"
msgstr ""


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/Multi_Token_Prediction.md:1
msgid "Multi Token Prediction (MTP)"
msgstr ""


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/eplb_swift_balancer.md:1
msgid "Expert Parallelism Load Balancer (EPLB)"
msgstr ""


@@ -0,0 +1,33 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/feature_guide/index.md:1
#: ../../developer_guide/feature_guide/index.md:5
msgid "Feature Guide"
msgstr "功能指南"
#: ../../developer_guide/feature_guide/index.md:3
msgid ""
"This section provides an overview of the features implemented in vLLM "
"Kunlun. Developers can refer to this guide to understand how vLLM Kunlun "
"works."
msgstr "本节概述了 vLLM Kunlun 中实现的功能。开发者可以参考本指南以了解 vLLM Kunlun 的工作原理。"


@@ -0,0 +1,288 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/feature_guide/patch.md:1
#, fuzzy
msgid "Patch in vLLM"
msgstr "vLLM 中的补丁"
#~ msgid ""
#~ "vLLM Kunlun is a platform plugin "
#~ "for vLLM. Due to the release cycle"
#~ " of vLLM and vLLM Kunlun is "
#~ "different, and the hardware limitation "
#~ "in some case, we need to patch "
#~ "some code in vLLM to make it "
#~ "compatible with vLLM Kunlun."
#~ msgstr ""
#~ "vLLM Kunlun 是 vLLM 的一个平台插件。由于 vLLM "
#~ "和 vLLM Kunlun 的发布周期不同,并且在某些情况下存在硬件限制,我们需要对 "
#~ "vLLM 进行一些代码补丁,以使其能够兼容 vLLM Kunlun。"
#~ msgid ""
#~ "In vLLM Kunlun code, we provide a"
#~ " patch module `vllm_kunlun/patch` to "
#~ "address the change for vLLM."
#~ msgstr "在 vLLM Kunlun 代码中,我们提供了一个补丁模块 `vllm_kunlun/patch` 用于应对 vLLM 的变更。"
#~ msgid "Principle"
#~ msgstr "原理"
#~ msgid ""
#~ "We should keep in mind that Patch"
#~ " is not the best way to make"
#~ " vLLM Kunlun compatible. It's just a"
#~ " temporary solution. The best way is"
#~ " to contribute the change to vLLM "
#~ "to make it compatible with vLLM "
#~ "Kunlun originally. In vLLM Kunlun, we"
#~ " have the basic principle for Patch"
#~ " strategy:"
#~ msgstr ""
#~ "我们需要记住Patch 不是让 vLLM 兼容 Kunlun "
#~ "的最佳方式,这只是一个临时的解决方案。最好的方法是将修改贡献到 vLLM 项目中,从而让 vLLM"
#~ " 原生支持 Kunlun。对于 vLLM Kunlun我们对 Patch "
#~ "策略有一个基本原则:"
#~ msgid "Less is more. Please do not patch unless it's the only way currently."
#~ msgstr "少即是多。请不要打补丁,除非这是目前唯一的方法。"
#~ msgid ""
#~ "Once a patch is added, it's "
#~ "required to describe the future plan "
#~ "for removing the patch."
#~ msgstr "一旦补丁被添加,必须说明将来移除该补丁的计划。"
#~ msgid "Anytime, clean the patch code is welcome."
#~ msgstr "随时欢迎清理补丁代码。"
#~ msgid "How it works"
#~ msgstr "工作原理"
#~ msgid "In `vllm_kunlun/patch`, you can see the code structure as follows:"
#~ msgstr "在 `vllm_kunlun/patch` 目录中,你可以看到如下代码结构:"
#~ msgid ""
#~ "**platform**: The patch code in this "
#~ "directory is for patching the code "
#~ "in vLLM main process. It's called "
#~ "by `vllm_kunlun/platform::XPUPlatform::pre_register_and_update`"
#~ " very early when vLLM is initialized."
#~ msgstr ""
#~ "**platform**:此目录下的补丁代码用于修补 vLLM 主进程中的代码。当 vLLM "
#~ "初始化时,会在很早的阶段由 "
#~ "`vllm_kunlun/platform::XPUPlatform::pre_register_and_update` 调用。"
#~ msgid ""
#~ "For online mode, vLLM process calls "
#~ "the platform patch here "
#~ "`vllm/vllm/engine/arg_utils.py::AsyncEngineArgs.add_cli_args` "
#~ "when parsing the cli args."
#~ msgstr ""
#~ "对于在线模式vLLM 进程在解析命令行参数时,会在 "
#~ "`vllm/vllm/engine/arg_utils.py::AsyncEngineArgs.add_cli_args` "
#~ "这里调用平台补丁。"
#~ msgid ""
#~ "For offline mode, vLLM process calls "
#~ "the platform patch here "
#~ "`vllm/vllm/engine/arg_utils.py::EngineArgs.create_engine_config` "
#~ "when parsing the input parameters."
#~ msgstr ""
#~ "对于离线模式vLLM 进程在解析输入参数时,会在此处调用平台补丁 "
#~ "`vllm/vllm/engine/arg_utils.py::EngineArgs.create_engine_config`。"
#~ msgid ""
#~ "**worker**: The patch code in this "
#~ "directory is for patching the code "
#~ "in vLLM worker process. It's called "
#~ "by `vllm_kunlun/worker/worker_v1::XPUWorker::__init__` "
#~ "when the vLLM worker process is "
#~ "initialized."
#~ msgstr ""
#~ "**worker**:此目录中的补丁代码用于修补 vLLM worker 进程中的代码。在初始化 "
#~ "vLLM worker 进程时,会被 "
#~ "`vllm_kunlun/worker/worker_v1::XPUWorker::__init__` 调用。"
#~ msgid ""
#~ "For both online and offline mode, "
#~ "vLLM engine core process calls the "
#~ "worker patch here "
#~ "`vllm/vllm/worker/worker_base.py::WorkerWrapperBase.init_worker` "
#~ "when initializing the worker process."
#~ msgstr ""
#~ "无论是在线还是离线模式vLLM 引擎核心进程在初始化 worker 进程时,都会在这里调用 "
#~ "worker "
#~ "补丁:`vllm/vllm/worker/worker_base.py::WorkerWrapperBase.init_worker`。"
#~ msgid ""
#~ "In both **platform** and **worker** "
#~ "folder, there are several patch modules."
#~ " They are used for patching different"
#~ " version of vLLM."
#~ msgstr "在 **platform** 和 **worker** 文件夹中都有一些补丁模块。它们用于修补不同版本的 vLLM。"
#~ msgid ""
#~ "`patch_0_9_2`: This module is used for"
#~ " patching vLLM 0.9.2. The version is"
#~ " always the nearest version of vLLM."
#~ " Once vLLM is released, we will "
#~ "drop this patch module and bump to"
#~ " a new version. For example, "
#~ "`patch_0_9_2` is used for patching vLLM"
#~ " 0.9.2."
#~ msgstr ""
#~ "`patch_0_9_2`:此模块用于修补 vLLM 0.9.2。该版本始终对应于 vLLM "
#~ "的最近版本。一旦 vLLM 发布新版本,我们将移除此补丁模块并升级到新版本。例如,`patch_0_9_2` "
#~ "就是用于修补 vLLM 0.9.2 的。"
#~ msgid ""
#~ "`patch_main`: This module is used for"
#~ " patching the code in vLLM main "
#~ "branch."
#~ msgstr "`patch_main`:该模块用于修补 vLLM 主分支代码。"
#~ msgid ""
#~ "`patch_common`: This module is used for"
#~ " patching both vLLM 0.9.2 and vLLM"
#~ " main branch."
#~ msgstr "`patch_common`:此模块用于同时修补 vLLM 0.9.2 版本和 vLLM 主分支。"
#~ msgid "How to write a patch"
#~ msgstr "如何撰写补丁"
#~ msgid ""
#~ "Before writing a patch, following the"
#~ " principle above, we should patch the"
#~ " least code. If it's necessary, we"
#~ " can patch the code in either "
#~ "**platform** and **worker** folder. Here "
#~ "is an example to patch `distributed` "
#~ "module in vLLM."
#~ msgstr ""
#~ "在编写补丁之前,遵循上述原则,我们应尽量修改最少的代码。如果有必要,我们可以修改 **platform** 和"
#~ " **worker** 文件夹中的代码。下面是一个在 vLLM 中修改 "
#~ "`distributed` 模块的示例。"
#~ msgid ""
#~ "Decide which version of vLLM we "
#~ "should patch. For example, after "
#~ "analysis, here we want to patch "
#~ "both 0.9.2 and main of vLLM."
#~ msgstr "决定我们应该修补哪个版本的 vLLM。例如经过分析后这里我们想要同时修补 vLLM 的 0.9.2 版和主分支main。"
#~ msgid ""
#~ "Decide which process we should patch."
#~ " For example, here `distributed` belongs"
#~ " to the vLLM main process, so "
#~ "we should patch `platform`."
#~ msgstr "决定我们应该修补哪个进程。例如,这里 `distributed` 属于 vLLM 主进程,所以我们应该修补 `platform`。"
#~ msgid ""
#~ "Create the patch file in the right"
#~ " folder. The file should be named "
#~ "as `patch_{module_name}.py`. The example here"
#~ " is "
#~ "`vllm_kunlun/patch/platform/patch_common/patch_distributed.py`."
#~ msgstr ""
#~ "在正确的文件夹中创建补丁文件。文件应命名为 `patch_{module_name}.py`。此处的示例是 "
#~ "`vllm_kunlun/patch/platform/patch_common/patch_distributed.py`。"
#~ msgid "Write your patch code in the new file. Here is an example:"
#~ msgstr "在新文件中编写你的补丁代码。以下是一个示例:"
#~ msgid ""
#~ "Import the patch file in `__init__.py`."
#~ " In this example, add `import "
#~ "vllm_kunlun.patch.platform.patch_common.patch_distributed` into"
#~ " `vllm_kunlun/patch/platform/patch_common/__init__.py`."
#~ msgstr ""
#~ "在 `__init__.py` 中导入补丁文件。在这个示例中,将 `import "
#~ "vllm_kunlun.patch.platform.patch_common.patch_distributed` 添加到"
#~ " `vllm_kunlun/patch/platform/patch_common/__init__.py` 中。"
#~ msgid ""
#~ "Add the description of the patch "
#~ "in `vllm_kunlun/patch/__init__.py`. The description"
#~ " format is as follows:"
#~ msgstr "在 `vllm_kunlun/patch/__init__.py` 中添加补丁的描述。描述格式如下:"
#~ msgid ""
#~ "Add the Unit Test and E2E Test."
#~ " Any newly added code in vLLM "
#~ "Kunlun should contain the Unit Test "
#~ "and E2E Test as well. You can "
#~ "find more details in [test "
#~ "guide](../contribution/testing.md)"
#~ msgstr ""
#~ "添加单元测试和端到端E2E测试。在 vLLM Kunlun "
#~ "中新增的任何代码也应包含单元测试和端到端测试。更多详情请参见 "
#~ "[测试指南](../contribution/testing.md)。"
#~ msgid "Limitation"
#~ msgstr "限制"
#~ msgid ""
#~ "In V1 Engine, vLLM starts three "
#~ "kinds of process: Main process, "
#~ "EngineCore process and Worker process. "
#~ "Now vLLM Kunlun only support patch "
#~ "the code in Main process and "
#~ "Worker process by default. If you "
#~ "want to patch the code runs in "
#~ "EngineCore process, you should patch "
#~ "EngineCore process entirely during setup, "
#~ "the entry code is here "
#~ "`vllm.v1.engine.core`. Please override "
#~ "`EngineCoreProc` and `DPEngineCoreProc` entirely."
#~ msgstr ""
#~ "在 V1 引擎中vLLM 会启动三种类型的进程主进程、EngineCore 进程和"
#~ " Worker 进程。现在 vLLM Kunlun 默认只支持在主进程和 "
#~ "Worker 进程中打补丁代码。如果你想要在 EngineCore 进程中打补丁,你需要在设置阶段对"
#~ " EngineCore 进程整体打补丁,入口代码在 `vllm.v1.engine.core`。请完全重写"
#~ " `EngineCoreProc` 和 `DPEngineCoreProc`。"
#~ msgid ""
#~ "If you are running an edited vLLM"
#~ " code, the version of the vLLM "
#~ "may be changed automatically. For "
#~ "example, if you runs an edited "
#~ "vLLM based on v0.9.n, the version "
#~ "of vLLM may be change to "
#~ "v0.9.nxxx, in this case, the patch "
#~ "for v0.9.n in vLLM Kunlun would "
#~ "not work as expect, because that "
#~ "vLLM Kunlun can't distinguish the "
#~ "version of vLLM you're using. In "
#~ "this case, you can set the "
#~ "environment variable `VLLM_VERSION` to specify"
#~ " the version of vLLM you're using,"
#~ " then the patch for v0.9.2 should "
#~ "work."
#~ msgstr ""
#~ "如果你运行的是经过编辑的 vLLM 代码vLLM 的版本可能会被自动更改。例如,如果你基于 "
#~ "v0.9.n 运行了编辑后的 vLLMvLLM 的版本可能会变为 "
#~ "v0.9.nxxx在这种情况下vLLM Kunlun 的 v0.9.n "
#~ "补丁将无法正常工作,因为 vLLM Kunlun 无法区分你所使用的 vLLM "
#~ "版本。这时,你可以设置环境变量 `VLLM_VERSION` 来指定你所使用的 vLLM "
#~ "版本,这样对 v0.9.2 的补丁就应该可以正常工作。"


@@ -0,0 +1,333 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/modeling/adding_a_new_model.md:1
msgid "Adding a New Model"
msgstr "添加新模型"
#: ../../developer_guide/modeling/adding_a_new_model.md:3
msgid ""
"This guide demonstrates how to integrate a novel or customized model into "
"vllm-kunlun. For foundational concepts, it is highly recommended to refer to"
" [vllm official doc: Adding a New "
"Model](https://docs.vllm.ai/en/stable/contributing/model/) first."
msgstr ""
"本指南演示如何将新颖或自定义的模型集成到 vllm-kunlun 中。对于基础概念,强烈建议先参考 [vllm "
"官方文档:添加新模型](https://docs.vllm.ai/en/stable/contributing/model/)。"
#: ../../developer_guide/modeling/adding_a_new_model.md:6
msgid "Step 1: Implementing Models with `torch` and `torch_npu`"
msgstr "步骤 1使用 `torch` 和 `torch_npu` 实现模型"
#: ../../developer_guide/modeling/adding_a_new_model.md:8
msgid ""
"This section provides instructions for implementing new models compatible "
"with vllm and vllm-kunlun."
msgstr "本节提供了实现与 vllm 和 vllm-kunlun 兼容的新模型的相关说明。"
#: ../../developer_guide/modeling/adding_a_new_model.md:10
msgid "**Before starting:**"
msgstr "**开始之前:**"
#: ../../developer_guide/modeling/adding_a_new_model.md:12
msgid ""
"Verify whether your model already exists in vllm's "
"[models](https://github.com/vllm-"
"project/vllm/tree/main/vllm/model_executor/models) directory."
msgstr ""
"请确认你的模型是否已经存在于 vllm 的 [models](https://github.com/vllm-"
"project/vllm/tree/main/vllm/model_executor/models) 目录中。"
#: ../../developer_guide/modeling/adding_a_new_model.md:13
msgid ""
"Use existing models' implementation as templates to accelerate your "
"development."
msgstr "使用已有模型的实现作为模板以加速你的开发。"
#: ../../developer_guide/modeling/adding_a_new_model.md:15
msgid "Method 1: Implementing New Models from Scratch"
msgstr "方法一:从零开始实现新模型"
#: ../../developer_guide/modeling/adding_a_new_model.md:17
msgid ""
"Follow vllm's [OPT model "
"adaptation](https://docs.vllm.ai/en/stable/contributing/model/basic.html) "
"example for guidance."
msgstr ""
"请参考 vllm 的 [OPT "
"模型适配](https://docs.vllm.ai/en/stable/contributing/model/basic.html) 示例进行操作。"
#: ../../developer_guide/modeling/adding_a_new_model.md:19
msgid "**Key implementation requirements:**"
msgstr "**关键实现要求:**"
#: ../../developer_guide/modeling/adding_a_new_model.md:21
msgid "Place model files in `vllm_kunlun/models/` directory."
msgstr "请将模型文件放在 `vllm_kunlun/models/` 目录下。"
#: ../../developer_guide/modeling/adding_a_new_model.md:23
msgid ""
"Standard module structure for decoder-only LLMs (please checkout vllm's "
"implementations for other kinds of model):"
msgstr "仅解码器decoder-onlyLLM 的标准模块结构(请参考 vllm 对其他类型模型的实现):"
#: ../../developer_guide/modeling/adding_a_new_model.md:25
msgid "`*ModelForCausalLM` (top-level wrapper)"
msgstr "`*ModelForCausalLM`(顶层包装器)"
#: ../../developer_guide/modeling/adding_a_new_model.md:26
msgid "`*Model` (main architecture)"
msgstr "`*Model`(主架构)"
#: ../../developer_guide/modeling/adding_a_new_model.md:27
msgid "`*DecoderLayer` (transformer block)"
msgstr "`*DecoderLayer` transformer 块)"
#: ../../developer_guide/modeling/adding_a_new_model.md:28
msgid "`*Attention` and `*MLP` (specific computation unit)"
msgstr "`*Attention` 和 `*MLP`(特定计算单元)"
#: ../../developer_guide/modeling/adding_a_new_model.md:31
msgid "`*` denotes your model's unique identifier."
msgstr "`*` 表示你的模型的唯一标识符。"
#: ../../developer_guide/modeling/adding_a_new_model.md:34
msgid "Critical Implementation Details:"
msgstr "关键实现细节:"
#: ../../developer_guide/modeling/adding_a_new_model.md:36
msgid "All modules must include a `prefix` argument in `__init__()`."
msgstr "所有模块在 `__init__()` 方法中都必须包含一个 `prefix` 参数。"
#: ../../developer_guide/modeling/adding_a_new_model.md:38
msgid "**Required interfaces:**"
msgstr "**必需的接口:**"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "Module Type"
msgstr "模块类型"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "Required Methods"
msgstr "必需的方法"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "`*ModelForCausalLM`"
msgstr "`*ModelForCausalLM`"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "`get_input_embeddings`, `compute_logits`, `load_weights`"
msgstr "`get_input_embeddings``compute_logits``load_weights`"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "`*Model`"
msgstr "`*Model`"
#: ../../developer_guide/modeling/adding_a_new_model.md:30
msgid "`get_input_embeddings`, `load_weights`"
msgstr "`get_input_embeddings``load_weights`"
#: ../../developer_guide/modeling/adding_a_new_model.md:45
msgid "Attention Backend Integration:"
msgstr "注意力后端集成:"
#: ../../developer_guide/modeling/adding_a_new_model.md:47
msgid ""
"Importing attention via `from vllm.attention import Attention` can "
"automatically leverage the attention backend routing of vllm-kunlun (see: "
"`get_attn_backend_cls()` in `vllm_kunlun/platform.py`)."
msgstr ""
"通过 `from vllm.attention import Attention` 导入 attention 可以自动利用 vllm-kunlun "
"的注意力后端路由(详见:`vllm_kunlun/platform.py` 中的 `get_attn_backend_cls()`)。"
#: ../../developer_guide/modeling/adding_a_new_model.md:49
msgid "Tensor Parallelism:"
msgstr "张量并行:"
#: ../../developer_guide/modeling/adding_a_new_model.md:51
msgid ""
"Use vllm's parallel layers (`ColumnParallelLinear`, "
"`VocabParallelEmbedding`, etc.) to implement models supporting tensor "
"parallelism. Note that Kunlun-specific customizations are implemented in "
"`vllm_kunlun/ops/` directory (RMSNorm, VocabParallelEmbedding, etc.)."
msgstr ""
"使用 vllm 的并行层(如 `ColumnParallelLinear`、`VocabParallelEmbedding` "
"等来实现支持张量并行的模型。需要注意的是Kunlun 特有的自定义实现(如 RMSNorm、VocabParallelEmbedding 等)位于 "
"`vllm_kunlun/ops/` 目录下。"
#: ../../developer_guide/modeling/adding_a_new_model.md:53
msgid ""
"**Reference Implementation Template** (assumed path: "
"`vllm_kunlun/models/custom_model.py`):"
msgstr "**参考实现模板**(假定路径:`vllm_kunlun/models/custom_model.py`"
#: ../../developer_guide/modeling/adding_a_new_model.md:135
msgid "Method 2: Customizing Existing vLLM Models"
msgstr "方法二:自定义已有的 vLLM 模型"
#: ../../developer_guide/modeling/adding_a_new_model.md:137
msgid ""
"For most use cases, extending existing implementations is preferable. We "
"demonstrate an example to inherit from base classes and implement a custom "
"deepseek model below (assumed path: `vllm_kunlun/models/deepseek_v2.py`)."
msgstr ""
"对于大多数使用场景,建议扩展已有的实现。我们在下面演示了一个示例,通过继承基类并实现一个自定义的 deepseek "
"模型(假定路径:`vllm_kunlun/models/deepseek_v2.py`)。"
#: ../../developer_guide/modeling/adding_a_new_model.md:175
msgid ""
"For a complete implementation reference, see: "
"`vllm_kunlun/models/deepseek_v2.py`."
msgstr "完整的实现参考请见:`vllm_kunlun/models/deepseek_v2.py`。"
#: ../../developer_guide/modeling/adding_a_new_model.md:178
msgid "Step 2: Registering Custom Models using ModelRegistry Plugins in vLLM"
msgstr "第2步使用 vLLM 中的 ModelRegistry 插件注册自定义模型"
#: ../../developer_guide/modeling/adding_a_new_model.md:180
msgid ""
"vllm provides a plugin mechanism for registering externally implemented "
"models without modifying its codebase."
msgstr "vllm 提供了一种插件机制,可用于注册外部实现的模型,而无需修改其代码库。"
#: ../../developer_guide/modeling/adding_a_new_model.md:182
msgid ""
"To integrate your implemented model from `vllm_kunlun/models/` directory:"
msgstr "要集成你在 `vllm_kunlun/models/` 目录下实现的模型:"
#: ../../developer_guide/modeling/adding_a_new_model.md:184
msgid ""
"Import your model implementation in `vllm_kunlun/models/__init__.py` using "
"relative imports."
msgstr "使用相对导入在 `vllm_kunlun/models/__init__.py` 中导入你的模型实现。"
#: ../../developer_guide/modeling/adding_a_new_model.md:185
msgid ""
"Register the model wrapper class via `vllm.ModelRegistry.register_model()` "
"function."
msgstr "通过 `vllm.ModelRegistry.register_model()` 函数注册模型包装类。"
#: ../../developer_guide/modeling/adding_a_new_model.md:187
msgid ""
"**Reference Registration Template** (an example of registering new models in"
" `vllm_kunlun/models/__init__.py`):"
msgstr "**参考注册模板**(在 `vllm_kunlun/models/__init__.py` 注册新模型的示例):"
#: ../../developer_guide/modeling/adding_a_new_model.md:210
msgid ""
"The first argument of `vllm.ModelRegistry.register_model()` indicates the "
"unique architecture identifier which must match `architectures` in "
"`config.json` of the model."
msgstr ""
"`vllm.ModelRegistry.register_model()` 的第一个参数表示唯一的架构标识符,这个标识符必须与模型的 "
"`config.json` 文件中的 `architectures` 匹配。"
#: ../../developer_guide/modeling/adding_a_new_model.md:221
msgid "Step 3: Verification"
msgstr "第 3 步:验证"
#: ../../developer_guide/modeling/adding_a_new_model.md:223
msgid "Case 1: Overriding Existing vLLM Model Architecture"
msgstr "案例 1重载已有的 vLLM 模型架构"
#: ../../developer_guide/modeling/adding_a_new_model.md:225
msgid ""
"If you're registering a customized model architecture based on vllm's "
"existing implementation (overriding vllm's original class), when executing "
"vllm offline/online inference (using any model), you'll observe warning logs"
" similar to the following output from "
"`vllm/models_executor/models/registry.py`."
msgstr ""
"如果你基于 vllm 的现有实现注册了一个自定义的模型架构(覆盖了 vllm 的原始类),在执行 vllm "
"的离线/在线推理(无论使用哪个模型)时,你会看到类似于 `vllm/models_executor/models/registry.py` "
"输出的警告日志。"
#: ../../developer_guide/modeling/adding_a_new_model.md:231
msgid "Case 2: Registering New Model Architecture"
msgstr "案例2注册新模型架构"
#: ../../developer_guide/modeling/adding_a_new_model.md:233
msgid ""
"If you're registering a novel model architecture not present in vllm "
"(creating a completely new class), current logs won't provide explicit "
"confirmation by default. It's recommended to add the following logging "
"statement at the end of the `register_model` method in "
"`vllm/models_executor/models/registry.py`."
msgstr ""
"如果你注册了 vllm 中不存在的新模型架构(创建一个全新的类),当前日志默认不会提供明确的确认信息。建议在 "
"`vllm/models_executor/models/registry.py` 文件中的 `register_model` "
"方法末尾添加如下日志语句。"
#: ../../developer_guide/modeling/adding_a_new_model.md:239
msgid ""
"After adding this line, you will see confirmation logs shown below when "
"running vllm offline/online inference (using any model)."
msgstr "添加这一行之后,当你运行 vllm 离线/在线推理(使用任何模型)时,将会看到如下确认日志。"
#: ../../developer_guide/modeling/adding_a_new_model.md:245
msgid ""
"This log output confirms your novel model architecture has been successfully"
" registered in vllm."
msgstr "该日志输出确认了你的新模型架构已成功在 vllm 中注册。"
#: ../../developer_guide/modeling/adding_a_new_model.md:247
msgid "Step 4: Testing"
msgstr "第4步测试"
#: ../../developer_guide/modeling/adding_a_new_model.md:249
msgid ""
"After adding a new model, we should do basic functional test (offline/online"
" inference), accuracy test and performance benchmark for the model."
msgstr "在添加新模型后,我们应对该模型进行基本功能测试(离线/在线推理)、准确率测试和性能基准测试。"
#: ../../developer_guide/modeling/adding_a_new_model.md:251
msgid "Find more details at:"
msgstr "更多详情请见:"
#: ../../developer_guide/modeling/adding_a_new_model.md:253
msgid ""
"[Accuracy test guide](https://vllm-"
"kunlun.readthedocs.io/en/latest/developer_guide/evaluation/index.html)"
msgstr ""
"[精度测试指南](https://vllm-"
"kunlun.readthedocs.io/en/latest/developer_guide/evaluation/index.html)"
#: ../../developer_guide/modeling/adding_a_new_model.md:254
msgid ""
"[Performance benchmark guide](https://vllm-"
"kunlun.readthedocs.io/en/latest/developer_guide/performance/performance_benchmark.html)"
msgstr ""
"[性能基准指南](https://vllm-"
"kunlun.readthedocs.io/en/latest/developer_guide/performance/performance_benchmark.html)"
#: ../../developer_guide/modeling/adding_a_new_model.md:256
msgid "Step 5: Updating Supported Models Doc"
msgstr "第5步更新支持的模型文档"
#: ../../developer_guide/modeling/adding_a_new_model.md:258
msgid ""
"At last, if all the steps above are completed, you should add the new model "
"into our [Supported Models](https://vllm-"
"kunlun.readthedocs.io/en/latest/user_guide/supported_models.html) doc."
msgstr ""
"最后,如果以上所有步骤都已完成,你应该将新模型添加到我们的[支持的模型](https://vllm-"
"kunlun.readthedocs.io/en/latest/user_guide/supported_models.html)文档中。"


@@ -0,0 +1,29 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/modeling/adding_a_new_multimodal_model.md:1
msgid "Adding a New Multi-Modal Model"
msgstr "添加新的多模态模型"
#: ../../developer_guide/modeling/adding_a_new_multimodal_model.md:3
msgid "**_Comming soon ..._**"
msgstr "**_敬请期待 ..._**"


@@ -0,0 +1,32 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/modeling/index.md:1
#: ../../developer_guide/modeling/index.md:5
msgid "Modeling"
msgstr "建模"
#: ../../developer_guide/modeling/index.md:3
msgid ""
"This section provides tutorials of how to implement and register a new model"
" into vllm-kunlun."
msgstr "本节提供了如何在 vllm-kunlun 中实现并注册新模型的教程。"


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
#: ../../developer_guide/performance/index.md:1
#: ../../developer_guide/performance/index.md:3
msgid "Performance"
msgstr "性能"


@@ -0,0 +1,26 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/performance/optimization_and_tuning.md:1
msgid "Optimization and Tuning"
msgstr ""


@@ -0,0 +1,92 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/performance/performance_benchmark.md:1
msgid "Performance Benchmark"
msgstr "性能基准"
#~ msgid ""
#~ "This document details the benchmark "
#~ "methodology for vllm-kunlun, aimed at"
#~ " evaluating the performance under a "
#~ "variety of workloads. To maintain "
#~ "alignment with vLLM, we use the "
#~ "[benchmark](https://github.com/vllm-"
#~ "project/vllm/tree/main/benchmarks) script provided "
#~ "by the vllm project."
#~ msgstr ""
#~ "本文档详细说明了 vllm-kunlun 的基准测试方法,旨在评估其在多种工作负载下的性能。为了与"
#~ " vLLM 保持一致,我们使用 vllm 项目提供的 "
#~ "[benchmark](https://github.com/vllm-"
#~ "project/vllm/tree/main/benchmarks) 脚本。"
#~ msgid ""
#~ "**Benchmark Coverage**: We measure offline "
#~ "e2e latency and throughput, and "
#~ "fixed-QPS online serving benchmarks, for"
#~ " more details see [vllm-kunlun "
#~ "benchmark scripts](https://github.com/vllm-project"
#~ "/vllm-kunlun/tree/main/benchmarks)."
#~ msgstr ""
#~ "**基准测试覆盖范围**:我们测量离线端到端延迟和吞吐量,以及固定 QPS 的在线服务基准测试。更多详情请参见"
#~ " [vllm-kunlun 基准测试脚本](https://github.com/vllm-"
#~ "project/vllm-kunlun/tree/main/benchmarks)。"
#~ msgid "1. Run docker container"
#~ msgstr "1. 运行 docker 容器"
#~ msgid "2. Install dependencies"
#~ msgstr "2. 安装依赖项"
#~ msgid "3. (Optional)Prepare model weights"
#~ msgstr "3.(可选)准备模型权重"
#~ msgid ""
#~ "For faster running speed, we recommend"
#~ " downloading the model in advance"
#~ msgstr "为了更快的运行速度,建议提前下载模型:"
#~ msgid ""
#~ "You can also replace all model "
#~ "paths in the [json](https://github.com/vllm-"
#~ "project/vllm-kunlun/tree/main/benchmarks/tests) files "
#~ "with your local paths:"
#~ msgstr ""
#~ "你也可以将 [json](https://github.com/vllm-project/vllm-"
#~ "kunlun/tree/main/benchmarks/tests) 文件中的所有模型路径替换为你的本地路径:"
#~ msgid "4. Run benchmark script"
#~ msgstr "4. 运行基准测试脚本"
#~ msgid "Run benchmark script:"
#~ msgstr "运行基准测试脚本:"
#~ msgid "After about 10 mins, the output is as shown below:"
#~ msgstr "大约 10 分钟后,输出如下所示:"
#~ msgid ""
#~ "The result json files are generated "
#~ "into the path `benchmark/results` These "
#~ "files contain detailed benchmarking results"
#~ " for further analysis."
#~ msgstr "结果 json 文件会生成到路径 `benchmark/results`。这些文件包含了用于进一步分析的详细基准测试结果。"

@@ -0,0 +1,86 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/developer_guide/performance/profile_execute_duration.md:1
msgid "Profile Execute Duration"
msgstr "配置执行持续时间"
#~ msgid ""
#~ "The execution duration of each stage "
#~ "(including pre/post-processing, model forward,"
#~ " etc.) usually needs to be captured"
#~ " during a complete inference process. "
#~ "Typically, this is done by using "
#~ "`torch.xpu.synchronize()` and obtaining CPU "
#~ "timestamps, which increases the performance"
#~ " overhead of host/device synchronization."
#~ msgstr ""
#~ "在完整的推理过程中,通常需要记录每个阶段(包括前/后处理、模型前向等)的执行时长。一般通过使用 "
#~ "`torch.xpu.synchronize()` 并获取 CPU "
#~ "时间戳来实现,这会增加主机/设备同步的性能开销。"
#~ msgid ""
#~ "**To reduce the performance overhead, we"
#~ " add this feature, using the XPU "
#~ "event timestamp mechanism to observe the"
#~ " device execution time asynchronously.**"
#~ msgstr "**为了减少性能开销,我们添加了此功能,使用 XPU 事件时间戳机制异步观测设备的执行时间。**"
#~ msgid "Usage"
#~ msgstr "用法"
#~ msgid ""
#~ "Use the environment variable "
#~ "`VLLM_KUNLUN_MODEL_EXECUTE_TIME_OBSERVE` to enable "
#~ "this feature."
#~ msgstr "使用环境变量 `VLLM_KUNLUN_MODEL_EXECUTE_TIME_OBSERVE` 来启用此功能。"
#~ msgid ""
#~ "Use the non-blocking API "
#~ "`ProfileExecuteDuration().capture_async` to set "
#~ "observation points asynchronously when you "
#~ "need to observe the execution duration."
#~ msgstr ""
#~ "当你需要观察执行时长时,可以使用非阻塞 API "
#~ "`ProfileExecuteDuration().capture_async` 异步设置观察点。"
#~ msgid ""
#~ "Use the blocking API "
#~ "`ProfileExecuteDuration().pop_captured_sync` at an "
#~ "appropriate time to get and print "
#~ "the execution durations of all observed"
#~ " stages."
#~ msgstr ""
#~ "在适当的时机使用阻塞式 API "
#~ "`ProfileExecuteDuration().pop_captured_sync` 获取并打印所有已观察到阶段的执行时长。"
#~ msgid ""
#~ "**We have instrumented the key inference"
#~ " stages (including pre-processing, model"
#~ " forward pass, etc.) for execute "
#~ "duration profiling. Execute the script "
#~ "as follows:**"
#~ msgstr "**我们已经对关键的推理阶段(包括预处理、模型前向传递等)进行了执行时长分析的检测。请按如下方式执行脚本:**"
#~ msgid "Example Output"
#~ msgstr "示例输出"