Commit the vllm 0.11.0 development branch

This commit is contained in:
chenyili
2025-12-10 17:51:24 +08:00
parent deab7dd0b6
commit 7c22d621fb
175 changed files with 31856 additions and 8683 deletions


@@ -0,0 +1,228 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/community/governance.md:1
msgid "Governance"
msgstr "治理"
#: ../../source/community/governance.md:3
msgid "Mission"
msgstr "使命"
#~ msgid ""
#~ "As a vital component of vLLM, the"
#~ " vLLM Kunlun project is dedicated to"
#~ " providing an easy, fast, and cheap"
#~ " LLM Serving for Everyone on Kunlun"
#~ " XPU, and to actively contribute to"
#~ " the enrichment of vLLM."
#~ msgstr ""
#~ "作为 vLLM 的重要组成部分vLLM Kunlun 项目致力于为所有人在 "
#~ "Kunlun XPU 上提供简单、快速且低成本的大语言模型服务,并积极促进 vLLM "
#~ "的丰富发展。"
#~ msgid "Principles"
#~ msgstr "原则"
#~ msgid ""
#~ "vLLM Kunlun follows the vLLM community's"
#~ " code of conduct[vLLM - CODE OF "
#~ "CONDUCT](https://github.com/vllm-"
#~ "project/vllm/blob/main/CODE_OF_CONDUCT.md)"
#~ msgstr ""
#~ "vLLM Kunlun 遵循 vLLM 社区的行为准则:[vLLM - "
#~ "行为准则](https://github.com/vllm-"
#~ "project/vllm/blob/main/CODE_OF_CONDUCT.md)"
#~ msgid "Governance - Mechanics"
#~ msgstr "治理 - 机制"
#~ msgid ""
#~ "vLLM Kunlun is an open-source "
#~ "project under the vLLM community, where"
#~ " the authority to appoint roles is"
#~ " ultimately determined by the vLLM "
#~ "community. It adopts a hierarchical "
#~ "technical governance structure."
#~ msgstr "vLLM Kunlun 是 vLLM 社区下的一个开源项目,其角色任命权最终由 vLLM 社区决定。它采用分层的技术治理结构。"
#~ msgid "Contributor:"
#~ msgstr "贡献者:"
#~ msgid ""
#~ "**Responsibility:** Help new contributors with"
#~ " onboarding, handle and respond to "
#~ "community questions, review RFCs and code"
#~ msgstr "**职责:** 帮助新贡献者上手,处理和回复社区问题,审查 RFC 和代码"
#~ msgid ""
#~ "**Requirements:** Complete at least 1 "
#~ "contribution. A contributor is someone who "
#~ "consistently and actively participates in "
#~ "a project, including but not limited "
#~ "to issues/reviews/commits/community involvement."
#~ msgstr "**要求:** 完成至少 1 次贡献。贡献者是指持续且积极参与项目的人,包括但不限于问题、评审、提交和社区参与。"
#~ msgid ""
#~ "Contributors will be granted [vllm-"
#~ "project/vllm-kunlun](https://github.com/vllm-project"
#~ "/vllm-kunlun) GitHub repo `Triage` "
#~ "permissions (`Can read and clone this"
#~ " repository. Can also manage issues "
#~ "and pull requests`) to help community"
#~ " developers collaborate more efficiently."
#~ msgstr ""
#~ "贡献者将被赋予 [vllm-project/vllm-"
#~ "kunlun](https://github.com/vllm-project/vllm-kunlun) "
#~ "Github 仓库的 `Triage` "
#~ "权限(`可读取和克隆此仓库。还可以管理问题和拉取请求`),以帮助社区开发者更加高效地协作。"
#~ msgid "Maintainer:"
#~ msgstr "维护者:"
#~ msgid ""
#~ "**Responsibility:** Develop the project's "
#~ "vision and mission. Maintainers are "
#~ "responsible for driving the technical "
#~ "direction of the entire project and "
#~ "ensuring its overall success, possessing "
#~ "code merge permissions. They formulate "
#~ "the roadmap, review contributions from "
#~ "community members, continuously contribute "
#~ "code, and actively engage in community"
#~ " activities (such as regular "
#~ "meetings/events)."
#~ msgstr ""
#~ "**责任:** "
#~ "制定项目的愿景和使命。维护者负责引领整个项目的技术方向并确保其整体成功,拥有代码合并权限。他们制定路线图,审核社区成员的贡献,持续贡献代码,并积极参与社区活动(如定期会议/活动)。"
#~ msgid ""
#~ "**Requirements:** Deep understanding of vLLM"
#~ " and vLLM Kunlun codebases, with a"
#~ " commitment to sustained code "
#~ "contributions. Competency in design/development/PR"
#~ " review workflows."
#~ msgstr ""
#~ "**要求:** 深入理解 vLLM 和 vLLM Kunlun "
#~ "代码库,并承诺持续贡献代码。具备设计/开发/PR 审核流程的能力。"
#~ msgid ""
#~ "**Review Quality:** Actively participate in"
#~ " community code reviews, ensuring high-"
#~ "quality code integration."
#~ msgstr "**评审质量:** 积极参与社区代码评审,确保高质量的代码集成。"
#~ msgid ""
#~ "**Quality Contribution:** Successfully develop "
#~ "and deliver at least one major "
#~ "feature while maintaining consistent high-"
#~ "quality contributions."
#~ msgstr "**质量贡献:** 成功开发并交付至少一个主要功能,同时持续保持高质量的贡献。"
#~ msgid ""
#~ "**Community Involvement:** Actively address "
#~ "issues, respond to forum inquiries, "
#~ "participate in discussions, and engage "
#~ "in community-driven tasks."
#~ msgstr "**社区参与:** 积极解决问题,回复论坛询问,参与讨论,并参与社区驱动的任务。"
#~ msgid ""
#~ "Requires approval from existing Maintainers."
#~ " The vLLM community has the final "
#~ "decision-making authority."
#~ msgstr "需要现有维护者的批准。vLLM 社区拥有最终决策权。"
#~ msgid ""
#~ "Maintainers will be granted [vllm-"
#~ "project/vllm-kunlun](https://github.com/vllm-project"
#~ "/vllm-kunlun) GitHub repo write permissions"
#~ " (`Can read, clone, and push to "
#~ "this repository. Can also manage issues"
#~ " and pull requests`)."
#~ msgstr ""
#~ "维护者将被授予 [vllm-project/vllm-"
#~ "kunlun](https://github.com/vllm-project/vllm-kunlun) "
#~ "Github 仓库的写入权限(`可以读取、克隆和推送到此仓库。还可以管理问题和拉取请求`)。"
#~ msgid "Nominating and Removing Maintainers"
#~ msgstr "提名和移除维护者"
#~ msgid "The Principles"
#~ msgstr "原则"
#~ msgid ""
#~ "Membership in vLLM Kunlun is given "
#~ "to individuals on a merit basis after"
#~ " they have demonstrated strong expertise "
#~ "in vLLM / vLLM Kunlun through "
#~ "contributions, reviews and discussions."
#~ msgstr ""
#~ "vLLM Kunlun 的成员资格是基于个人能力授予的,只有在通过贡献、评审和讨论展示出对 vLLM"
#~ " / vLLM Kunlun 的深厚专业知识后,才可获得。"
#~ msgid ""
#~ "For membership in the maintainer group"
#~ " the individual has to demonstrate "
#~ "strong and continued alignment with the"
#~ " overall vLLM / vLLM Kunlun "
#~ "principles."
#~ msgstr "要成为维护者组成员,个人必须表现出与 vLLM / vLLM Kunlun 总体原则的高度一致并持续支持。"
#~ msgid ""
#~ "Light criteria apply for moving module "
#~ "maintainers to emeritus status if they"
#~ " don't actively participate over long "
#~ "periods of time."
#~ msgstr "如果模块维护人员在长时间内没有积极参与,可根据较宽松的标准将其维护状态转为“荣誉”状态。"
#~ msgid "The membership is for an individual, not a company."
#~ msgstr "该会员资格属于个人,而非公司。"
#~ msgid "Nomination and Removal"
#~ msgstr "提名与罢免"
#~ msgid ""
#~ "Nomination: Anyone can nominate someone "
#~ "to become a maintainer (including self-"
#~ "nomination). All existing maintainers are "
#~ "responsible for evaluating the nomination. "
#~ "The nominator should provide information "
#~ "on the nominee's strengths as a "
#~ "maintainer candidate, including but not "
#~ "limited to review quality, quality "
#~ "contributions and community involvement."
#~ msgstr "提名:任何人都可以提名他人成为维护者(包括自荐)。所有现有维护者都有责任评估提名。提名人应提供被提名人成为维护者的相关优势信息,包括但不限于评审质量、优质贡献、社区参与等。"
#~ msgid ""
#~ "Removal: Anyone can nominate a person"
#~ " to be removed from the maintainer "
#~ "position (including self-nomination). All "
#~ "existing maintainers are responsible for "
#~ "evaluating the nomination. The nominator "
#~ "should provide information on the "
#~ "nominee, including but not limited to "
#~ "lack of activity, conflict with the "
#~ "overall direction and other information "
#~ "that makes them unfit to be a "
#~ "maintainer."
#~ msgstr "移除:任何人都可以提名某人被移出维护者职位(包括自荐)。所有现有维护者都有责任评估该提名。提名者应提供被提名人的相关信息,包括但不限于缺乏活动、与整体方向冲突以及使其不适合作为维护者的其他信息。"


@@ -0,0 +1,120 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/community/user_stories/index.md:1
#, fuzzy
msgid "User stories"
msgstr "用户故事"
#~ msgid "More details"
#~ msgstr "更多细节"
#~ msgid ""
#~ "Read case studies on how users and"
#~ " developers solve real, everyday problems"
#~ " with vLLM Kunlun"
#~ msgstr "阅读案例研究,了解用户和开发者如何使用 vLLM Kunlun 解决实际日常问题。"
#~ msgid ""
#~ "[LLaMA-Factory](./llamafactory.md) is an "
#~ "easy-to-use and efficient platform "
#~ "for training and fine-tuning large "
#~ "language models; it has supported vLLM "
#~ "Kunlun to speed up inference since "
#~ "[LLaMA-Factory#7739](https://github.com/hiyouga/LLaMA-"
#~ "Factory/pull/7739), gaining a 2x inference "
#~ "performance improvement."
#~ msgstr ""
#~ "[LLaMA-Factory](./llamafactory.md) "
#~ "是一个易于使用且高效的大语言模型训练与微调平台,自 [LLaMA-"
#~ "Factory#7739](https://github.com/hiyouga/LLaMA-"
#~ "Factory/pull/7739) 起支持 vLLM Kunlun 加速推理,推理性能提升"
#~ " 2 倍。"
#~ msgid ""
#~ "[Huggingface/trl](https://github.com/huggingface/trl) is a"
#~ " cutting-edge library designed for "
#~ "post-training foundation models using "
#~ "advanced techniques like SFT, PPO and"
#~ " DPO; it has used vLLM Kunlun since "
#~ "[v0.17.0](https://github.com/huggingface/trl/releases/tag/v0.17.0) "
#~ "to support RLHF on Kunlun XPU."
#~ msgstr ""
#~ "[Huggingface/trl](https://github.com/huggingface/trl) "
#~ "是一个前沿的库,专为使用 SFT、PPO 和 DPO "
#~ "等先进技术对基础模型进行后训练而设计。从 "
#~ "[v0.17.0](https://github.com/huggingface/trl/releases/tag/v0.17.0) "
#~ "版本开始,该库利用 vLLM Kunlun 来支持在 Kunlun XPU"
#~ " 上进行 RLHF。"
#~ msgid ""
#~ "[MindIE Turbo](https://pypi.org/project/mindie-turbo) "
#~ "is an LLM inference engine acceleration"
#~ " plug-in library developed by Baidu"
#~ " on Kunlun hardware, which includes "
#~ "self-developed large language model "
#~ "optimization algorithms and optimizations "
#~ "related to the inference engine "
#~ "framework. It supports vLLM Kunlun since"
#~ " "
#~ "[2.0rc1](https://www.hikunlun.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev"
#~ "/mindie-turbo-0001.html)."
#~ msgstr ""
#~ "[MindIE Turbo](https://pypi.org/project/mindie-turbo) "
#~ "是百度在昆仑硬件上开发的一款用于加速 LLM 推理引擎的插件库,包含自主研发的大语言模型优化算法及与推理引擎框架相关的优化。从 "
#~ "[2.0rc1](https://www.hikunlun.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev"
#~ "/mindie-turbo-0001.html) 起,支持 vLLM Kunlun。"
#~ msgid ""
#~ "[GPUStack](https://github.com/gpustack/gpustack) is an "
#~ "open-source GPU cluster manager for "
#~ "running AI models. It supports vLLM "
#~ "Kunlun since "
#~ "[v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2),"
#~ " see more GPUStack performance evaluation"
#~ " info on "
#~ "[link](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)."
#~ msgstr ""
#~ "[GPUStack](https://github.com/gpustack/gpustack) 是一个开源的 "
#~ "GPU 集群管理器,用于运行 AI 模型。从 "
#~ "[v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2) "
#~ "版本开始支持 vLLM Kunlun更多 GPUStack 性能评测信息见 "
#~ "[链接](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)。"
#~ msgid ""
#~ "[verl](https://github.com/volcengine/verl) is a "
#~ "flexible, efficient and production-ready "
#~ "RL training library for large language"
#~ " models (LLMs); it has used vLLM Kunlun "
#~ "since "
#~ "[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0), "
#~ "see more info on [verl x Kunlun"
#~ " "
#~ "Quickstart](https://verl.readthedocs.io/en/latest/kunlun_tutorial/kunlun_quick_start.html)."
#~ msgstr ""
#~ "[verl](https://github.com/volcengine/verl) "
#~ "是一个灵活、高效且可用于生产环境的大型语言模型LLM强化学习训练库自 "
#~ "[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0) "
#~ "起支持 vLLM Kunlun更多信息请参见 [verl x Kunlun"
#~ " "
#~ "快速上手](https://verl.readthedocs.io/en/latest/kunlun_tutorial/kunlun_quick_start.html)。"


@@ -0,0 +1,108 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/community/user_stories/llamafactory.md:1
msgid "LLaMA-Factory"
msgstr "LLaMA-Factory"
#: ../../source/community/user_stories/llamafactory.md:3
#, fuzzy
msgid "**Introduction**"
msgstr "**介绍**"
#: ../../source/community/user_stories/llamafactory.md:5
msgid ""
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) is an easy-to-"
"use and efficient platform for training and fine-tuning large language "
"models. With LLaMA-Factory, you can fine-tune hundreds of pre-trained "
"models locally without writing any code."
msgstr ""
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) "
"是一个易于使用且高效的平台,用于训练和微调大型语言模型。有了 LLaMA-"
"Factory你可以在本地对数百个预训练模型进行微调无需编写任何代码。"
#: ../../source/community/user_stories/llamafactory.md:7
#, fuzzy
msgid ""
"LLaMA-Factory users need to evaluate and run inference on the model after "
"fine-tuning."
msgstr "LLaMA-Factory 用户需要在对模型进行微调后对模型进行评估和推理。"
#: ../../source/community/user_stories/llamafactory.md:9
#, fuzzy
msgid "**Business challenge**"
msgstr "**业务挑战**"
#: ../../source/community/user_stories/llamafactory.md:11
#, fuzzy
msgid ""
"LLaMA-Factory uses Transformers to perform inference on Kunlun XPUs, but "
"the speed is slow."
msgstr "LLaMA-Factory 使用 transformers 在 Kunlun XPU 上进行推理,但速度较慢。"
#: ../../source/community/user_stories/llamafactory.md:13
#, fuzzy
msgid "**Benefits with vLLM Kunlun**"
msgstr "**通过 vLLM Kunlun 解决挑战与收益**"
#: ../../source/community/user_stories/llamafactory.md:15
msgid ""
"With the joint efforts of LLaMA-Factory and vLLM Kunlun ([LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)), "
"LLaMA-Factory has achieved significant performance gains during model "
"inference. Benchmark results show that its inference speed is now up to "
"2× faster compared to the Transformers implementation."
msgstr ""
"在 LLaMA-Factory 和 vLLM Kunlun 的共同努力下([LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)"
"LLaMA-Factory 在模型推理阶段取得了显著的性能提升。基准测试结果显示,其推理速度相比 "
"Transformers 实现最高提升至 2 倍。"
#: ../../source/community/user_stories/llamafactory.md:17
msgid "**Learn more**"
msgstr "**了解更多**"
#: ../../source/community/user_stories/llamafactory.md:19
#, fuzzy
msgid ""
"See more details about LLaMA-Factory and how it uses vLLM Kunlun for "
"inference on Kunlun XPUs in [LLaMA-Factory Kunlun XPU "
"Inference](https://llamafactory.readthedocs.io/en/latest/advanced/npu_inference.html)."
msgstr ""
"在以下文档中查看更多关于 LLaMA-Factory 以及其如何在 Kunlun XPU 上使用 vLLM Kunlun 进行推理的信息"
"[LLaMA-Factory Kunlun XPU "
"推理](https://llamafactory.readthedocs.io/en/latest/advanced/npu_inference.html)。"
#~ msgid ""
#~ "With the joint efforts of LLaMA-"
#~ "Factory and vLLM Kunlun ([LLaMA-"
#~ "Factory#7739](https://github.com/hiyouga/LLaMA-"
#~ "Factory/pull/7739)), the performance of "
#~ "LLaMA-Factory in the model inference "
#~ "stage has been significantly improved. "
#~ "According to the test results, the "
#~ "inference speed of LLaMA-Factory has "
#~ "been increased to 2x compared to "
#~ "the transformers version."
#~ msgstr ""
#~ "在 LLaMA-Factory 和 vLLM Kunlun "
#~ "的共同努力下(参见 [LLaMA-Factory#7739](https://github.com/hiyouga"
#~ "/LLaMA-Factory/pull/7739)LLaMA-Factory "
#~ "在模型推理阶段的性能得到了显著提升。根据测试结果LLaMA-Factory 的推理速度相比 "
#~ "transformers 版本提升到了 2 倍。"


@@ -0,0 +1,575 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2025, vllm-kunlun team
# This file is distributed under the same license as the vllm-kunlun
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-kunlun\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-11-10 16:59+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.17.0\n"
#: ../../source/community/versioning_policy.md:1
msgid "Versioning policy"
msgstr "版本管理策略"
#~ msgid ""
#~ "Starting with vLLM 0.7.x, the vLLM "
#~ "Kunlun Plugin ([vllm-project/vllm-"
#~ "kunlun](https://github.com/vllm-project/vllm-kunlun)) "
#~ "project follows the [PEP "
#~ "440](https://peps.python.org/pep-0440/) to publish "
#~ "matching with vLLM ([vllm-"
#~ "project/vllm](https://github.com/vllm-project/vllm))."
#~ msgstr ""
#~ "从 vLLM 0.7.x 开始vLLM Kunlun 插件([vllm-"
#~ "project/vllm-kunlun](https://github.com/vllm-project"
#~ "/vllm-kunlun))项目遵循 [PEP "
#~ "440](https://peps.python.org/pep-0440/),以与 vLLM([vllm-"
#~ "project/vllm](https://github.com/vllm-project/vllm))版本匹配发布。"
#~ msgid "vLLM Kunlun Plugin versions"
#~ msgstr "vLLM Kunlun 插件版本"
#~ msgid ""
#~ "Each vLLM Kunlun release will be "
#~ "versioned: `v[major].[minor].[micro][rcN][.postN]` (such"
#~ " as `v0.7.3rc1`, `v0.7.3`, `v0.7.3.post1`)"
#~ msgstr ""
#~ "每个 vLLM Kunlun "
#~ "版本将采用以下版本格式:`v[major].[minor].[micro][rcN][.postN]`(例如 "
#~ "`v0.7.3rc1`、`v0.7.3`、`v0.7.3.post1`"
#~ msgid ""
#~ "**Final releases**: will typically be "
#~ "released every **3 months**, will take"
#~ " the vLLM upstream release plan and"
#~ " Kunlun software product release plan "
#~ "into comprehensive consideration."
#~ msgstr "**正式版本**:通常每 **3 个月**发布一次,将综合考虑 vLLM 上游发行计划和昆仑软件产品发行计划。"
#~ msgid ""
#~ "**Pre releases**: will typically be "
#~ "released **on demand**, ending with rcN,"
#~ " represents the Nth release candidate "
#~ "version, to support early testing by "
#~ "our users prior to a final "
#~ "release."
#~ msgstr "**预发布版本**:通常会**按需发布**,以 rcN 结尾,表示第 N 个候选发布版本,旨在支持用户在正式发布前进行早期测试。"
#~ msgid ""
#~ "**Post releases**: will typically be "
#~ "released **on demand** to "
#~ "address minor errors in a final "
#~ "release. It's different from [PEP-440 "
#~ "post release note](https://peps.python.org/pep-0440"
#~ "/#post-releases) suggestion, it will "
#~ "contain actual bug fixes considering "
#~ "that the final release version should"
#~ " be matched strictly with the vLLM"
#~ " final release version "
#~ "(`v[major].[minor].[micro]`). The post version "
#~ "has to be published as a patch "
#~ "version of the final release."
#~ msgstr ""
#~ "**后续版本**:通常会根据需要发布,以支持解决正式发布中的小错误。这与 [PEP-440 "
#~ "的后续版本说明](https://peps.python.org/pep-0440/#post-releases) "
#~ "建议不同,它将包含实际的 bug 修复,因为最终发布版本应严格与 vLLM "
#~ "的最终发布版本(`v[major].[minor].[micro]`)匹配。后续版本必须以正式发布的补丁版本形式发布。"
#~ msgid "For example:"
#~ msgstr "例如:"
#~ msgid ""
#~ "`v0.7.x`: it's the first final release"
#~ " to match the vLLM `v0.7.x` version."
#~ msgstr "`v0.7.x`:这是第一个与 vLLM `v0.7.x` 版本相匹配的正式发布版本。"
#~ msgid "`v0.7.3rc1`: will be the first pre version of vLLM Kunlun."
#~ msgstr "`v0.7.3rc1`:将会是 vLLM Kunlun 的第一个预发布版本。"
#~ msgid ""
#~ "`v0.7.3.post1`: will be the post release"
#~ " if the `v0.7.3` release has some "
#~ "minor errors."
#~ msgstr "`v0.7.3.post1`:如果 `v0.7.3` 版本发布有一些小错误,将作为后续修正版发布。"
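The rcN-before-final, postN-after ordering described above can be sketched with a small parser for the `v[major].[minor].[micro][rcN][.postN]` tag scheme this policy defines. This is an illustrative sketch, not project tooling; `parse_tag` is a hypothetical helper name.

```python
import re

# Tag pattern from the versioning policy: v[major].[minor].[micro][rcN][.postN]
TAG_RE = re.compile(
    r"^v(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)"
    r"(?:rc(?P<rc>\d+))?(?:\.post(?P<post>\d+))?$"
)

def parse_tag(tag: str):
    """Return a sort key so that rcN < final < .postN, per PEP 440 ordering."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"not a valid release tag: {tag!r}")
    g = m.groupdict()
    rc = int(g["rc"]) if g["rc"] else None
    post = int(g["post"]) if g["post"] else None
    pre_rank = 0 if rc is not None else 1  # rc versions sort before the final release
    return (int(g["major"]), int(g["minor"]), int(g["micro"]),
            pre_rank, rc or 0, post or 0)

print(sorted(["v0.7.3.post1", "v0.7.3", "v0.7.3rc1"], key=parse_tag))
# → ['v0.7.3rc1', 'v0.7.3', 'v0.7.3.post1']
```

The real PEP 440 implementation is `packaging.version.Version`, which handles many more forms; this sketch only covers the tag shapes named in this policy.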
#~ msgid "Release Compatibility Matrix"
#~ msgstr "版本兼容性矩阵"
#~ msgid "Following is the Release Compatibility Matrix for vLLM Kunlun Plugin:"
#~ msgstr "以下是 vLLM Kunlun 插件的版本兼容性矩阵:"
#~ msgid "vLLM Kunlun"
#~ msgstr "vLLM Kunlun"
#~ msgid "vLLM"
#~ msgstr "vLLM"
#~ msgid "Python"
#~ msgstr "Python"
#~ msgid "Stable CANN"
#~ msgstr "Stable CANN"
#~ msgid "PyTorch/torch_npu"
#~ msgstr "PyTorch/torch_npu"
#~ msgid "MindIE Turbo"
#~ msgstr "MindIE Turbo"
#~ msgid "v0.9.2rc1"
#~ msgstr "v0.9.2rc1"
#~ msgid "v0.9.2"
#~ msgstr "v0.9.2"
#~ msgid ">= 3.9, < 3.12"
#~ msgstr ">= 3.9, < 3.12"
#~ msgid "8.1.RC1"
#~ msgstr "8.1.RC1"
#~ msgid "2.5.1 / 2.5.1.post1.dev20250619"
#~ msgstr "2.5.1 / 2.5.1.post1.dev20250619"
#~ msgid "v0.9.1rc1"
#~ msgstr "v0.9.1rc1"
#~ msgid "v0.9.1"
#~ msgstr "v0.9.1"
#~ msgid "2.5.1 / 2.5.1.post1.dev20250528"
#~ msgstr "2.5.1 / 2.5.1.post1.dev20250528"
#~ msgid "v0.9.0rc2"
#~ msgstr "v0.9.0rc2"
#~ msgid "v0.9.0"
#~ msgstr "v0.9.0"
#~ msgid "2.5.1 / 2.5.1"
#~ msgstr "2.5.1 / 2.5.1"
#~ msgid "v0.9.0rc1"
#~ msgstr "v0.9.0rc1"
#~ msgid "v0.8.5rc1"
#~ msgstr "v0.8.5rc1"
#~ msgid "v0.8.5.post1"
#~ msgstr "v0.8.5.post1"
#~ msgid "v0.8.4rc2"
#~ msgstr "v0.8.4rc2"
#~ msgid "v0.8.4"
#~ msgstr "v0.8.4"
#~ msgid "8.0.0"
#~ msgstr "8.0.0"
#~ msgid "v0.7.3.post1"
#~ msgstr "v0.7.3.post1"
#~ msgid "v0.7.3"
#~ msgstr "v0.7.3"
#~ msgid "2.0rc1"
#~ msgstr "2.0rc1"
#~ msgid "Release cadence"
#~ msgstr "发布节奏"
#~ msgid "release window"
#~ msgstr "发布窗口"
#~ msgid "Date"
#~ msgstr "日期"
#~ msgid "Event"
#~ msgstr "事件"
#~ msgid "2025.07.11"
#~ msgstr "2025.07.11"
#~ msgid "Release candidates, v0.9.2rc1"
#~ msgstr "候选发布版本v0.9.2rc1"
#~ msgid "2025.06.22"
#~ msgstr "2025.06.22"
#~ msgid "Release candidates, v0.9.1rc1"
#~ msgstr "候选发布版本v0.9.1rc1"
#~ msgid "2025.06.10"
#~ msgstr "2025.06.10"
#~ msgid "Release candidates, v0.9.0rc2"
#~ msgstr "候选发布版本v0.9.0rc2"
#~ msgid "2025.06.09"
#~ msgstr "2025.06.09"
#~ msgid "Release candidates, v0.9.0rc1"
#~ msgstr "候选发布版本v0.9.0rc1"
#~ msgid "2025.05.29"
#~ msgstr "2025.05.29"
#~ msgid "v0.7.x post release, v0.7.3.post1"
#~ msgstr "v0.7.x 补丁版v0.7.3.post1"
#~ msgid "2025.05.08"
#~ msgstr "2025.05.08"
#~ msgid "v0.7.x Final release, v0.7.3"
#~ msgstr "v0.7.x 正式版v0.7.3"
#~ msgid "2025.05.06"
#~ msgstr "2025.05.06"
#~ msgid "Release candidates, v0.8.5rc1"
#~ msgstr "候选发布版本v0.8.5rc1"
#~ msgid "2025.04.28"
#~ msgstr "2025.04.28"
#~ msgid "Release candidates, v0.8.4rc2"
#~ msgstr "候选发布版本v0.8.4rc2"
#~ msgid "2025.04.18"
#~ msgstr "2025.04.18"
#~ msgid "Release candidates, v0.8.4rc1"
#~ msgstr "候选发布版本v0.8.4rc1"
#~ msgid "2025.03.28"
#~ msgstr "2025.03.28"
#~ msgid "Release candidates, v0.7.3rc2"
#~ msgstr "候选发布版本v0.7.3rc2"
#~ msgid "2025.03.14"
#~ msgstr "2025.03.14"
#~ msgid "Release candidates, v0.7.3rc1"
#~ msgstr "候选发布版本v0.7.3rc1"
#~ msgid "2025.02.19"
#~ msgstr "2025.02.19"
#~ msgid "Release candidates, v0.7.1rc1"
#~ msgstr "候选发布版本v0.7.1rc1"
#~ msgid "Branch policy"
#~ msgstr "分支策略"
#~ msgid "vLLM Kunlun has a main branch and dev branches."
#~ msgstr "vLLM Kunlun 有主分支和开发分支。"
#~ msgid ""
#~ "**main**: main branch, corresponds to the "
#~ "vLLM main branch and latest 1 or"
#~ " 2 release versions. It is "
#~ "continuously monitored for quality through "
#~ "Kunlun CI."
#~ msgstr "**main**main 分支,对应 vLLM 的主分支和最新的 1 或 2 个发布版本。该分支通过 Kunlun CI 持续监控质量。"
#~ msgid ""
#~ "**vX.Y.Z-dev**: development branch, created "
#~ "for selected new releases of vLLM."
#~ " For example, `v0.7.3-dev` is the dev"
#~ " branch for vLLM `v0.7.3` version."
#~ msgstr ""
#~ "**vX.Y.Z-dev**:开发分支,是随着 vLLM 新版本的一部分一起创建的。例如,`v0.7.3-dev`"
#~ " 是 vLLM `v0.7.3` 版本的开发分支。"
#~ msgid ""
#~ "Usually, a commit should be ONLY "
#~ "first merged in the main branch, "
#~ "and then backported to the dev "
#~ "branch to reduce maintenance costs as"
#~ " much as possible."
#~ msgstr "通常,提交应该只先合并到主分支,然后再回溯合并到开发分支,以尽可能降低维护成本。"
#~ msgid "Maintenance branch and EOL:"
#~ msgstr "维护分支与生命周期结束EOL"
#~ msgid "The branch status will be in one of the following states:"
#~ msgstr "分支状态将处于以下几种状态之一:"
#~ msgid "Branch"
#~ msgstr "分支"
#~ msgid "Time frame"
#~ msgstr "时间范围"
#~ msgid "Summary"
#~ msgstr "摘要"
#~ msgid "Maintained"
#~ msgstr "维护中"
#~ msgid "Approximately 2-3 minor versions"
#~ msgstr "大约 2-3 个小版本"
#~ msgid "All bugfixes are appropriate. Releases produced, CI commitment."
#~ msgstr "所有的错误修复都是合适的。正常发布版本,持续集成承诺。"
#~ msgid "Unmaintained"
#~ msgstr "无人维护"
#~ msgid "Community interest driven"
#~ msgstr "社区兴趣驱动"
#~ msgid "All bugfixes are appropriate. No Releases produced, No CI commitment"
#~ msgstr "所有的 bug 修复都是合适的。没有发布版本,不承诺持续集成CI。"
#~ msgid "End of Life (EOL)"
#~ msgstr "生命周期结束EOL"
#~ msgid "N/A"
#~ msgstr "不适用"
#~ msgid "Branch no longer accepting changes"
#~ msgstr "该分支不再接受更改"
#~ msgid "Branch state"
#~ msgstr "分支状态"
#~ msgid ""
#~ "Note that vLLM Kunlun will only be"
#~ " released for a certain vLLM release"
#~ " version rather than all versions. "
#~ "Hence, you might see only part of"
#~ " versions have dev branches (such as"
#~ " only `0.7.1-dev` / `0.7.3-dev` but "
#~ "no `0.7.2-dev`), this is as expected."
#~ msgstr ""
#~ "请注意vLLM Kunlun 只会针对某些 vLLM "
#~ "发布版本发布,而不是所有版本。因此,您可能会看到只有部分版本拥有开发分支(例如只有 `0.7.1-dev` /"
#~ " `0.7.3-dev`,而没有 `0.7.2-dev`),这是正常现象。"
#~ msgid ""
#~ "Usually, each minor version of vLLM "
#~ "(such as 0.7) will correspond to a"
#~ " vLLM Kunlun version branch and "
#~ "support its latest version (for example,"
#~ " we plan to support version 0.7.3)"
#~ " as following shown:"
#~ msgstr ""
#~ "通常vLLM 的每一个小版本(例如 0.7)都会对应一个 vLLM Kunlun "
#~ "版本分支,并支持其最新版本(例如,我们计划支持 0.7.3 版),如下所示:"
#~ msgid "Status"
#~ msgstr "状态"
#~ msgid "Note"
#~ msgstr "注释"
#~ msgid "main"
#~ msgstr "main"
#~ msgid "CI commitment for vLLM main branch and vLLM 0.9.2 branch"
#~ msgstr "vLLM 主分支和 vLLM 0.9.2 分支的 CI 承诺"
#~ msgid "v0.9.1-dev"
#~ msgstr "v0.9.1-dev"
#~ msgid "CI commitment for vLLM 0.9.1 version"
#~ msgstr "vLLM 0.9.1 版本的 CI 承诺"
#~ msgid "v0.7.3-dev"
#~ msgstr "v0.7.3-dev"
#~ msgid "CI commitment for vLLM 0.7.3 version"
#~ msgstr "vLLM 0.7.3 版本的 CI 承诺"
#~ msgid "v0.7.1-dev"
#~ msgstr "v0.7.1-dev"
#~ msgid "Replaced by v0.7.3-dev"
#~ msgstr "已被 v0.7.3-dev 替代"
#~ msgid "Backward compatibility"
#~ msgstr "向后兼容性"
#~ msgid ""
#~ "For main branch, vLLM Kunlun should "
#~ "work with vLLM main branch and "
#~ "latest 1 or 2 release version. So"
#~ " to ensure the backward compatibility, "
#~ "we will do the following:"
#~ msgstr ""
#~ "对于主分支vLLM Kunlun 应该与 vLLM 主分支以及最新的 1"
#~ " 或 2 个发布版本兼容。因此,为了确保向后兼容性,我们将执行以下操作:"
#~ msgid ""
#~ "Both the main branch and the target "
#~ "vLLM release are tested by Kunlun "
#~ "E2E CI. For example, the vLLM main"
#~ " branch and vLLM 0.8.4 are "
#~ "currently tested."
#~ msgstr "主分支和目标 vLLM 发行版都经过了 Kunlun E2E CI 的测试。例如,目前正在测试 vLLM 主分支和 vLLM 0.8.4。"
#~ msgid ""
#~ "For code changes, we will make "
#~ "sure that the changes are compatible "
#~ "with the latest 1 or 2 vLLM "
#~ "release version as well. In this "
#~ "case, vLLM Kunlun introduced a version"
#~ " check mechanism in the code. "
#~ "It'll check the version of installed "
#~ "vLLM package first to decide which "
#~ "code logic to use. If users hit"
#~ " the `InvalidVersion` error, it sometimes"
#~ " means that they have installed a"
#~ " dev/editable version of vLLM package. "
#~ "In this case, we provide the env"
#~ " variable `VLLM_VERSION` to let users "
#~ "specify the version of vLLM package "
#~ "to use."
#~ msgstr ""
#~ "对于代码更改,我们也会确保这些更改与最新的 1 或 2 个 vLLM "
#~ "发行版本兼容。在这种情况下vLLM Kunlun 在代码中引入了版本检查机制。它会先检查已安装的 "
#~ "vLLM 包的版本,然后决定使用哪段代码逻辑。如果用户遇到 `InvalidVersion` "
#~ "错误,这有时意味着他们安装了 dev/可编辑版本的 vLLM 包。此时,我们提供了环境变量 "
#~ "`VLLM_VERSION`,让用户可以指定要使用的 vLLM 包版本。"
#~ msgid ""
#~ "For documentation changes, we will make"
#~ " sure that the changes are compatible"
#~ " with the latest 1 or 2 vLLM"
#~ " release version as well. A note should"
#~ " be added if there are any "
#~ "breaking changes."
#~ msgstr "对于文档更改我们会确保这些更改也兼容于最新的1个或2个 vLLM 发布版本。如果有任何重大变更,应添加说明。"
#~ msgid "Document Branch Policy"
#~ msgstr "文档分支政策"
#~ msgid ""
#~ "To reduce maintenance costs, **all "
#~ "branch documentation content should remain "
#~ "consistent, and version differences can "
#~ "be controlled via variables in "
#~ "[docs/source/conf.py](https://github.com/vllm-project/vllm-"
#~ "kunlun/blob/main/docs/source/conf.py)**. While this "
#~ "is not a simple task, it is "
#~ "a principle we should strive to "
#~ "follow."
#~ msgstr ""
#~ "为了减少维护成本,**所有分支的文档内容应保持一致,版本差异可以通过 "
#~ "[docs/source/conf.py](https://github.com/vllm-project/vllm-"
#~ "kunlun/blob/main/docs/source/conf.py) "
#~ "中的变量进行控制**。虽然这并非易事,但这是我们应当努力遵循的原则。"
#~ msgid "Version"
#~ msgstr "版本"
#~ msgid "Purpose"
#~ msgstr "用途"
#~ msgid "Code Branch"
#~ msgstr "代码分支"
#~ msgid "latest"
#~ msgstr "最新"
#~ msgid "Doc for the latest dev branch"
#~ msgstr "最新开发分支的文档"
#~ msgid "vX.Y.Z-dev (Will be `main` after the first final release)"
#~ msgstr "vX.Y.Z-dev在第一个正式版本发布后将成为 `main`"
#~ msgid "version"
#~ msgstr "版本"
#~ msgid "Doc for historical released versions"
#~ msgstr "历史版本文档"
#~ msgid "Git tags, like vX.Y.Z[rcN]"
#~ msgstr "Git 标签,如 vX.Y.Z[rcN]"
#~ msgid "stable (not yet released)"
#~ msgstr "稳定版(尚未发布)"
#~ msgid "Doc for latest final release branch"
#~ msgstr "最新正式发布分支的文档"
#~ msgid "Will be `vX.Y.Z-dev` after the first official release"
#~ msgstr "首个正式发布后将会是 `vX.Y.Z-dev`"
#~ msgid "As shown above:"
#~ msgstr "如上所示:"
#~ msgid ""
#~ "`latest` documentation: Matches the current"
#~ " maintenance branch `vX.Y.Z-dev` (Will be"
#~ " `main` after the first final "
#~ "release). Continuously updated to ensure "
#~ "usability for the latest release."
#~ msgstr "`latest` 文档:匹配当前维护分支 `vX.Y.Z-dev`(在首次正式发布后将为 `main`)。持续更新,以确保适用于最新发布版本。"
#~ msgid ""
#~ "`version` documentation: Corresponds to "
#~ "specific released versions (e.g., `v0.7.3`,"
#~ " `v0.7.3rc1`). No further updates after "
#~ "release."
#~ msgstr "`version` 文档:对应特定的已发布版本(例如,`v0.7.3`、`v0.7.3rc1`)。发布后不再进行更新。"
#~ msgid ""
#~ "`stable` documentation (**not yet released**):"
#~ " Official release documentation. Updates "
#~ "are allowed in real-time after "
#~ "release, typically based on vX.Y.Z-dev. "
#~ "Once stable documentation is available, "
#~ "non-stable versions should display a "
#~ "header warning: `You are viewing the "
#~ "latest developer preview docs. Click "
#~ "here to view docs for the latest"
#~ " stable release.`."
#~ msgstr ""
#~ "`stable` 文档(**尚未发布**):官方发布版文档。发布后允许实时更新,通常基于 "
#~ "vX.Y.Z-dev。一旦稳定版文档可用,非稳定版本应显示一个顶部警告:`您正在查看最新的开发预览文档。点击此处查看最新稳定版本文档。`。"
#~ msgid "Software Dependency Management"
#~ msgstr "软件依赖管理"
#~ msgid ""
#~ "`torch-xpu`: Kunlun Extension for "
#~ "PyTorch (torch-xpu) releases a stable"
#~ " version to [PyPi](https://pypi.org/project/torch-"
#~ "xpu) every 3 months, a development "
#~ "version (aka the POC version) every "
#~ "month, and a nightly version every "
#~ "day. The PyPi stable version **CAN** "
#~ "be used in vLLM Kunlun final "
#~ "version, the monthly dev version **ONLY"
#~ " CAN** be used in vLLM Kunlun "
#~ "RC version for rapid iteration, the "
#~ "nightly version **CANNOT** be used in"
#~ " vLLM Kunlun any version and "
#~ "branches."
#~ msgstr ""
#~ "`torch-xpu`Kunlun Extension for PyTorch"
#~ "torch-xpu每 3 个月会在 "
#~ "[PyPi](https://pypi.org/project/torch-xpu) "
#~ "上发布一个稳定版本,每个月发布一个开发版本(即 POC 版本),每天发布一个 nightly "
#~ "版本。PyPi 上的稳定版本**可以**用于 vLLM Kunlun "
#~ "的正式版本,月度开发版本**只能**用于 vLLM Kunlun 的 "
#~ "RC候选发布版本以便快速迭代nightly 版本**不能**用于 vLLM Kunlun "
#~ "的任何版本和分支。"
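The backward-compatibility section above describes checking the installed vLLM package's version to select a code path, with the `VLLM_VERSION` environment variable as an escape hatch when a dev/editable install raises `InvalidVersion`. A minimal sketch of that pattern, assuming only what the document states (`resolve_vllm_version` is a hypothetical helper name, not the plugin's actual API):

```python
import os
from importlib import metadata

def resolve_vllm_version() -> str:
    """Return the vLLM version string used to pick compatible code logic.

    Per the policy above, VLLM_VERSION overrides detection, which helps
    when a dev/editable install reports an unparsable version.
    """
    override = os.environ.get("VLLM_VERSION")
    if override:
        return override
    # Fall back to the installed distribution's metadata.
    return metadata.version("vllm")

if __name__ == "__main__":
    os.environ.setdefault("VLLM_VERSION", "0.9.2")
    print(resolve_vllm_version())
```

Callers would then branch on the resolved version (for example, comparing it against the latest one or two supported vLLM releases) rather than importing version-specific symbols unconditionally.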