[v0.18.0][Doc] Translated Doc files 2026-04-14 (#8257)

## Auto-Translation Summary

Translated **102** file(s):

- <code>docs/source/locale/zh_CN/LC_MESSAGES/community/contributors.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/community/governance.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/community/user_stories/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/community/user_stories/llamafactory.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/community/versioning_policy.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/patch.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/contribution/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/contribution/testing.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/evaluation/using_evalscope.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/evaluation/using_lm_eval.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/evaluation/using_opencompass.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/performance_and_debug/msprobe_guide.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/performance_and_debug/performance_benchmark.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/performance_and_debug/service_profiling_guide.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/faqs.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/installation.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/quick_start.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/configuration/additional_config.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/graph_mode.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/lora.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/quantization.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/sleep_mode.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/structured_output.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/release_notes.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/support_matrix/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/support_matrix/supported_features.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/support_matrix/supported_models.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/ACL_Graph.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/KV_Cache_Pool_Guide.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/ModelRunner_prepare_inputs.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/add_custom_aclnn_op.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/context_parallel.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/cpu_binding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/disaggregated_prefill.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/eplb_swift_balancer.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/npugraph_ex.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/Design_Documents/quantization.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/contribution/multi_node_test.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/evaluation/using_ais_bench.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/developer_guide/performance_and_debug/optimization_and_tuning.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/long_sequence_context_parallel_multi_node.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/long_sequence_context_parallel_single_node.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/pd_colocated_mooncake_multi_instance.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/pd_disaggregation_mooncake_multi_node.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/pd_disaggregation_mooncake_single_node.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/ray.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/features/suffix_speculative_decoding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/hardwares/310p.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/hardwares/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/DeepSeek-R1.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/DeepSeek-V3.1.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/DeepSeek-V3.2.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/GLM4.x.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/GLM5.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Kimi-K2-Thinking.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Kimi-K2.5.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/MiniMax-M2.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/PaddleOCR-VL.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen-VL-Dense.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen2.5-7B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen2.5-Omni.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-235B-A22B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-30B-A3B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-32B-W4A4.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-8B-W4A8.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-Coder-30B-A3B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-Dense.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-Next.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-Omni-30B-A3B-Thinking.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-VL-235B-A22B-Instruct.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-VL-30B-A3B-Instruct.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-VL-Embedding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3-VL-Reranker.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3.5-27B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3.5-397B-A17B.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3_embedding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/Qwen3_reranker.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/tutorials/models/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/deployment_guide/index.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/deployment_guide/using_volcano_kthena.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/Fine_grained_TP.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/Multi_Token_Prediction.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/batch_invariance.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/context_parallel.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/cpu_binding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/dynamic_batch.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/epd_disaggregation.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/eplb_swift_balancer.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/external_dp.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/kv_pool.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/large_scale_ep.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/layer_sharding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/lmcache_ascend_deployment.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/netloader.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/npugraph_ex.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/rfork.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/sequence_parallelism.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/speculative_decoding.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/ucm_deployment.po</code>
- <code>docs/source/locale/zh_CN/LC_MESSAGES/user_guide/feature_guide/weight_prefetch.po</code>
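The paths above follow Sphinx's standard gettext locale layout (`locale/<lang>/LC_MESSAGES/<docname>.po`, one catalog per source document). As an illustrative sketch only — these are hypothetical values, not taken from the project's actual `docs/source/conf.py` — the Sphinx i18n settings that produce and consume such a tree typically look like:

```python
# Illustrative Sphinx i18n configuration (hypothetical values; the
# project's real conf.py may differ).
locale_dirs = ["locale/"]   # search path: locale/zh_CN/LC_MESSAGES/*.po
gettext_compact = False     # one .po per source document, preserving
                            # the directory structure listed above
language = "zh_CN"          # build the translated documentation
```

With `gettext_compact = False`, a source file such as `community/governance.md` maps to its own catalog `community/governance.po`, which matches the file list in this PR.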

---

[Workflow run](https://github.com/vllm-project/vllm-ascend/actions/runs/24390263284)

Signed-off-by: vllm-ascend-ci <vllm-ascend-ci@users.noreply.github.com>
Co-authored-by: vllm-ascend-ci <vllm-ascend-ci@users.noreply.github.com>
Commit 147b589f62 (parent b6aa5bbdbf), authored by vllm-ascend-ci on 2026-04-15 15:27:09 +08:00 and committed by GitHub.
102 changed files with 41760 additions and 6023 deletions.

File diff suppressed because it is too large.

<code>docs/source/locale/zh_CN/LC_MESSAGES/community/governance.po</code>

@@ -4,201 +4,197 @@
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-ascend\n"
"Project-Id-Version: vllm-ascend\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"POT-Creation-Date: 2026-04-14 09:08+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
"Generated-By: Babel 2.18.0\n"
#: ../../community/governance.md:1
#: ../../source/community/governance.md:1
msgid "Governance"
msgstr "治理"
#: ../../community/governance.md:3
#: ../../source/community/governance.md:3
msgid "Mission"
msgstr "使命"
#: ../../community/governance.md:4
#: ../../source/community/governance.md:5
msgid ""
"As a vital component of vLLM, the vLLM Ascend project is dedicated to "
"providing an easy, fast, and cheap LLM Serving for Everyone on Ascend NPU, "
"and to actively contribute to the enrichment of vLLM."
"providing an easy, fast, and cheap LLM Serving for everyone on Ascend "
"NPUs and to actively contributing to the enrichment of vLLM."
msgstr ""
"作为 vLLM 的重要组成部分vLLM Ascend 项目致力于为所有人在 Ascend NPU 上提供简单、快速且低成本的大语言模型服务,并积极促进"
" vLLM 的丰富发展。"
"作为 vLLM 的重要组成部分vLLM Ascend 项目致力于为所有人在昇腾 NPU "
"上提供简单、快速且低成本的大语言模型服务,并积极为丰富 vLLM 生态系统做出贡献。"
#: ../../community/governance.md:6
#: ../../source/community/governance.md:7
msgid "Principles"
msgstr "原则"
#: ../../community/governance.md:7
#: ../../source/community/governance.md:9
msgid ""
"vLLM Ascend follows the vLLM community's code of conduct[vLLM - CODE OF "
"CONDUCT](https://github.com/vllm-project/vllm/blob/main/CODE_OF_CONDUCT.md)"
"vLLM Ascend follows the vLLM community's code of conduct: [vLLM - CODE OF"
" CONDUCT](https://github.com/vllm-"
"project/vllm/blob/main/CODE_OF_CONDUCT.md)"
msgstr ""
"vLLM Ascend 遵循 vLLM 社区的行为准则:[vLLM - 行为准则](https://github.com/vllm-"
"project/vllm/blob/main/CODE_OF_CONDUCT.md)"
#: ../../community/governance.md:9
#: ../../source/community/governance.md:11
msgid "Governance - Mechanics"
msgstr "治理 - 机制"
#: ../../community/governance.md:10
#: ../../source/community/governance.md:13
msgid ""
"vLLM Ascend is an open-source project under the vLLM community, where the "
"authority to appoint roles is ultimately determined by the vLLM community. "
"It adopts a hierarchical technical governance structure."
"vLLM Ascend is an open-source project under the vLLM community, where the"
" authority to appoint roles is ultimately determined by the vLLM "
"community. It adopts a hierarchical technical governance structure."
msgstr "vLLM Ascend 是 vLLM 社区下的一个开源项目,其角色任命权最终由 vLLM 社区决定。它采用分层的技术治理结构。"
#: ../../community/governance.md:12
#: ../../source/community/governance.md:15
msgid "Contributor:"
msgstr "贡献者:"
#: ../../community/governance.md:14
#: ../../source/community/governance.md:17
msgid ""
"**Responsibility:** Help new contributors on boarding, handle and respond to"
" community questions, review RFCs, code"
msgstr "**职责:** 帮助新贡献者加入处理和回复社区问题审查RFC和代码"
"**Responsibility:** Help new contributors onboarding, handle and respond "
"to community questions, review RFCs and code."
msgstr "**职责:** 帮助新贡献者加入,处理和回复社区问题,审查 RFC 和代码"
#: ../../community/governance.md:16
#: ../../source/community/governance.md:19
msgid ""
"**Requirements:** Complete at least 1 contribution. Contributor is someone "
"who consistently and actively participates in a project, included but not "
"limited to issue/review/commits/community involvement."
msgstr "**要求:** 完成至少1次贡献。贡献者是指持续且积极参与项目的人,包括但不限于问题、评审、提交和社区参与。"
"**Requirements:** Complete at least 1 contribution. A contributor is "
"someone who consistently and actively participates in a project, "
"including but not limited to issue/review/commits/community involvement."
msgstr "**要求:** 完成至少 1 次贡献。贡献者是指持续且积极参与项目的人,包括但不限于提交问题、进行评审、提交代码和参与社区活动。"
#: ../../community/governance.md:18
#: ../../source/community/governance.md:21
msgid ""
"Contributors will be empowered [vllm-project/vllm-"
"ascend](https://github.com/vllm-project/vllm-ascend) Github repo `Triage` "
"permissions (`Can read and clone this repository. Can also manage issues and"
" pull requests`) to help community developers collaborate more efficiently."
"The contributor permissions are granted by the [vllm-project/vllm-"
"ascend](https://github.com/vllm-project/vllm-ascend)'s repo `Triage` on "
"GitHub, including repo read and clone, issue and PR management, "
"facilitating efficient collaboration between community developers."
msgstr ""
"贡献者将被予 [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-"
"ascend) Github 仓库的 `Triage` 权限(`可读取和克隆此仓库。还可以管理问题和拉取请求`),以帮助社区开发者更加高效协作。"
"贡献者将被予 [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-"
"ascend) GitHub 仓库的 `Triage` 权限(包括仓库读取和克隆问题和拉取请求管理),以促进社区开发者之间的高效协作。"
#: ../../community/governance.md:20
#: ../../source/community/governance.md:23
msgid "Maintainer:"
msgstr "维护者:"
#: ../../community/governance.md:22
#: ../../source/community/governance.md:25
msgid ""
"**Responsibility:** Develop the project's vision and mission. Maintainers "
"are responsible for driving the technical direction of the entire project "
"and ensuring its overall success, possessing code merge permissions. They "
"formulate the roadmap, review contributions from community members, "
"continuously contribute code, and actively engage in community activities "
"(such as regular meetings/events)."
"**Responsibility:** Develop the project's vision and mission. Maintainers"
" are responsible for shaping the technical direction of the project and "
"ensuring its long-term success. With code merge permissions, they lead "
"roadmap planning, review community contributions, make ongoing code "
"improvements, and actively participate in community engagement—such as "
"regular meetings and events."
msgstr ""
"**责** "
"制定项目的愿景和使命。维护者负责引领整个项目的技术方向并确保其整体成功,拥有代码合并权限。他们制定路线图,审核社区成员的贡献,持续贡献代码,并积极参与社区活动(如定期会议/活动)。"
"**责:** "
"制定项目的愿景和使命。维护者负责引领项目的技术方向并确保其长期成功,拥有代码合并权限。他们制定路线图,审核社区贡献,持续改进代码,并积极参与社区活动(如定期会议活动)。"
#: ../../community/governance.md:24
#: ../../source/community/governance.md:27
msgid ""
"**Requirements:** Deep understanding of vLLM and vLLM Ascend codebases, "
"with a commitment to sustained code contributions. Competency in "
"design/development/PR review workflows."
msgstr ""
"**要求:** 深入理解 vLLMvLLM Ascend 代码库,并承诺持续贡献代码。具备 ‌设计/开发/PR 审核流程‌ 的能力。"
"**Requirements:** Deep understanding of vLLM and vLLM Ascend code "
"bases, with a commitment to sustained code contributions and competency "
"in design, development, and PR review workflows."
msgstr "**要求:** 深入理解 vLLMvLLM Ascend 代码库,承诺持续贡献代码,并具备 ‌设计、开发和 PR 审核工作流‌ 的能力。"
#: ../../community/governance.md:25
#: ../../source/community/governance.md:29
msgid ""
"**Review Quality:** Actively participate in community code reviews, "
"**Review quality:** Actively participate in community code reviews, "
"ensuring high-quality code integration."
msgstr "**评审质量:** 积极参与社区代码评审,确保高质量的代码集成。"
#: ../../community/governance.md:26
#: ../../source/community/governance.md:30
msgid ""
"**Quality Contribution:** Successfully develop and deliver at least one "
"**Quality contribution:** Successfully develop and deliver at least one "
"major feature while maintaining consistent high-quality contributions."
msgstr "**质量贡献** 成功开发并交付至少一个主要功能,同时持续保持高质量贡献。"
msgstr "**质量贡献:** 成功开发并交付至少一个主要功能,同时保持持续的高质量贡献。"
#: ../../community/governance.md:27
#: ../../source/community/governance.md:31
msgid ""
"**Community Involvement:** Actively address issues, respond to forum "
"inquiries, participate in discussions, and engage in community-driven tasks."
msgstr "**社区参与:** 积极解决问题,回复论坛询问,参与讨论,并参与社区驱动的任务。"
"**Community involvement:** Actively address issues, respond to forum "
"inquiries, participate in discussions, and engage in community-driven "
"tasks."
msgstr "**社区参与:** 积极解决问题,回复论坛询问,参与讨论,并投身于社区驱动的任务。"
#: ../../community/governance.md:29
#: ../../source/community/governance.md:33
msgid ""
"Requires approval from existing Maintainers. The vLLM community has the "
"final decision-making authority."
msgstr "需要现有维护者的批准。vLLM社区拥有最终决策权。"
#: ../../community/governance.md:31
msgid ""
"Maintainer will be empowered [vllm-project/vllm-"
"ascend](https://github.com/vllm-project/vllm-ascend) Github repo write "
"permissions (`Can read, clone, and push to this repository. Can also manage "
"issues and pull requests`)."
"The approval from existing Maintainers is required. The vLLM community "
"has the final decision-making authority. Maintainers will be granted "
"write access to the [vllm-project/vllm-ascend](https://github.com/vllm-"
"project/vllm-ascend) GitHub repo. This includes permission to read, "
"clone, and push to the repository, as well as manage issues and pull "
"requests."
msgstr ""
"维护者将被授予 [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-"
"ascend) Github 仓库的写入权限`可以读取、克隆和推送到此仓库。还可以管理问题和拉取请求`。"
"需要获得现有维护者的批准。vLLM 社区拥有最终决策权。维护者将被授予 [vllm-project/vllm-"
"ascend](https://github.com/vllm-project/vllm-ascend) GitHub 仓库的写入权限。这包括读取、克隆和推送仓库的权限,以及管理问题和拉取请求的权限。"
#: ../../community/governance.md:33
#: ../../source/community/governance.md:36
msgid "Nominating and Removing Maintainers"
msgstr "提名和移除维护者"
#: ../../community/governance.md:35
#: ../../source/community/governance.md:38
msgid "The Principles"
msgstr "原则"
#: ../../community/governance.md:37
#: ../../source/community/governance.md:40
msgid ""
"Membership in vLLM Ascend is given to individuals on merit basis after they "
"demonstrated strong expertise of the vLLM / vLLM Ascend through "
"contributions, reviews and discussions."
"Membership in vLLM Ascend is given to individuals on a merit basis after "
"they demonstrate their strong expertise in vLLM/vLLM Ascend through "
"contributions, reviews, and discussions."
msgstr ""
"vLLM Ascend 的成员资格是基于个人能力授予的,只有在通过贡献、评审和讨论展示出对 vLLM / vLLM Ascend "
"的深厚专业知识后,才可获得。"
#: ../../community/governance.md:39
#: ../../source/community/governance.md:42
msgid ""
"For membership in the maintainer group the individual has to demonstrate "
"strong and continued alignment with the overall vLLM / vLLM Ascend "
"For membership in the maintainer group, individuals have to demonstrate "
"strong and continued alignment with the overall vLLM/vLLM Ascend "
"principles."
msgstr "要成为维护者组成员,个人必须表现出与 vLLM / vLLM Ascend 总体原则的高度一致并持续支持。"
#: ../../community/governance.md:41
#: ../../source/community/governance.md:44
msgid ""
"Light criteria of moving module maintenance to emeritus status if they "
"dont actively participate over long periods of time."
msgstr "如果模块维护人员在长时间内没有积极参与,可根据宽松的标准将其维护状态转为“荣誉”状态。"
"Maintainers who have been inactive for a long time may be transitioned to"
" **emeritus** status under lenient criteria."
msgstr "长期不活跃的维护者,可根据宽松的标准转为 **荣誉** 状态。"
#: ../../community/governance.md:43
#: ../../source/community/governance.md:46
msgid "The membership is for an individual, not a company."
msgstr "该员资格属于个人,而非公司。"
msgstr "该员资格属于个人,而非公司。"
#: ../../community/governance.md:45
#: ../../source/community/governance.md:48
msgid "Nomination and Removal"
msgstr "提名与罢免"
#: ../../community/governance.md:47
#: ../../source/community/governance.md:50
msgid ""
"Nomination: Anyone can nominate someone to become a maintainer (include "
"self-nominate). All existing maintainers are responsible for evaluating the "
"nomination. The nominator should provide nominee's info around the strength "
"of the candidate to be a maintainer, include but not limited to review "
"quality, quality contribution, community involvement."
msgstr ""
"提名:任何人都可以提名人成为维护者(包括自荐)。所有现有维护者都有责任评估提名。提名人应提供被提名人成为维护者的相关优势信息,包括但不限于评审质量、质贡献社区参与。"
"Nomination: Anyone can nominate a candidate to become a maintainer, "
"including self-nominations. All existing maintainers are responsible for "
"reviewing and evaluating each nomination. The nominator should provide "
"relevant information about the nominee's qualifications—such as review "
"quality, quality contribution, and community involvement—among other "
"strengths."
msgstr "提名:任何人都可以提名候选人成为维护者(包括自荐)。所有现有维护者都有责任审查和评估每项提名。提名人应提供被提名人的相关资格信息,例如评审质量、质贡献社区参与度等优势。"
#: ../../community/governance.md:48
#: ../../source/community/governance.md:51
msgid ""
"Removal: Anyone can nominate a person to be removed from maintainer position"
" (include self-nominate). All existing maintainers are responsible for "
"evaluating the nomination. The nominator should provide nominee's info, "
"include but not limited to lack of activity, conflict with the overall "
"direction and other information that makes them unfit to be a maintainer."
msgstr ""
"移除:任何人都可以提名某人被移出维护者职位(包括自荐)。所有现维护者都有责任评估该提名。提名应提供被提名人的相关信息,包括但不限于缺乏活动、与整体方向冲突以及使其不适合作为维护者的其他信息。"
"Removal: Anyone may nominate an individual for removal from the "
"maintainer role, including self-nominations. All current maintainers are "
"responsible for reviewing and evaluating such nominations. The nominator "
"should provide relevant information about the nominee—such as prolonged "
"inactivity, misalignment with the project's overall direction, or other "
"factors that may render them unsuitable for the maintainer position."
msgstr "移除:任何人都可以提名某人从维护者角色中移除(包括自荐)。所有现维护者都有责任审查和评估此类提名。提名应提供被提名人的相关信息,例如长期不活跃、与项目整体方向不一致,或其他可能使其不适合担任维护者职位的因素。"

<code>docs/source/locale/zh_CN/LC_MESSAGES/community/user_stories/index.po</code>

@@ -4,100 +4,98 @@
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-ascend\n"
"Project-Id-Version: vllm-ascend\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"POT-Creation-Date: 2026-04-14 09:08+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
"Generated-By: Babel 2.18.0\n"
#: ../../community/user_stories/index.md:15
#: ../../source/community/user_stories/index.md:15
msgid "More details"
msgstr "更多细节"
msgstr "更多详情"
#: ../../community/user_stories/index.md:1
#: ../../source/community/user_stories/index.md:1
msgid "User Stories"
msgstr "用户故事"
msgstr "用户案例"
#: ../../community/user_stories/index.md:3
#: ../../source/community/user_stories/index.md:3
msgid ""
"Read case studies on how users and developers solves real, everyday problems"
" with vLLM Ascend"
msgstr "阅读案例研究,了解用户和开发者如何使用 vLLM Ascend 解决实际日常问题。"
"Read case studies on how users and developers solve real, everyday "
"problems with vLLM Ascend"
msgstr "阅读案例研究,了解用户和开发者如何用 vLLM Ascend 解决实际日常问题。"
#: ../../community/user_stories/index.md:5
#: ../../source/community/user_stories/index.md:5
msgid ""
"[LLaMA-Factory](./llamafactory.md) is an easy-to-use and efficient platform "
"for training and fine-tuning large language models, it supports vLLM Ascend "
"to speed up inference since [LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739), gain 2x "
"performance enhancement of inference."
"[LLaMA-Factory](./llamafactory.md) is an easy-to-use and efficient "
"platform for training and fine-tuning large language models. It supports "
"vLLM Ascend to speed up inference since [LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739), "
"gaining 2x performance enhancement in inference."
msgstr ""
"[LLaMA-Factory](./llamafactory.md) 是一个易于使用且高效的大语言模型训练与微调平台[LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739) 起支持 vLLM "
"Ascend 加速推理,推理性能提升 2 倍。"
"[LLaMA-Factory](./llamafactory.md) 是一个易于使用且高效的大语言模型训练与微调平台自 "
"[LLaMA-Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739) 起支持 "
"vLLM Ascend 加速推理,推理性能提升 2 倍。"
#: ../../community/user_stories/index.md:7
#: ../../source/community/user_stories/index.md:7
msgid ""
"[Huggingface/trl](https://github.com/huggingface/trl) is a cutting-edge "
"library designed for post-training foundation models using advanced "
"techniques like SFT, PPO and DPO, it uses vLLM Ascend since "
"techniques like SFT, PPO and DPO. It uses vLLM Ascend since "
"[v0.17.0](https://github.com/huggingface/trl/releases/tag/v0.17.0) to "
"support RLHF on Ascend NPU."
"support RLHF on Ascend NPUs."
msgstr ""
"[Huggingface/trl](https://github.com/huggingface/trl) 是一个前沿的库,专为使用 SFT、PPO 和"
" DPO 等先进技术对基础模型进行后训练而设计。 "
"[v0.17.0](https://github.com/huggingface/trl/releases/tag/v0.17.0) 版本开始,该库利用"
" vLLM Ascend 支持在 Ascend NPU 上进行 RLHF。"
"[Huggingface/trl](https://github.com/huggingface/trl) 是一个前沿的库,专为使用 SFT、PPO 和 DPO "
"等先进技术对基础模型进行后训练而设计。 "
"[v0.17.0](https://github.com/huggingface/trl/releases/tag/v0.17.0) 起,该库使用 "
"vLLM Ascend 支持在昇腾 NPU 上进行 RLHF。"
#: ../../community/user_stories/index.md:9
#: ../../source/community/user_stories/index.md:9
msgid ""
"[MindIE Turbo](https://pypi.org/project/mindie-turbo) is an LLM inference "
"engine acceleration plug-in library developed by Huawei on Ascend hardware, "
"which includes self-developed large language model optimization algorithms "
"and optimizations related to the inference engine framework. It supports "
"vLLM Ascend since "
"[2.0rc1](https://www.hiascend.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev/mindie-"
"turbo-0001.html)."
"[MindIE Turbo](https://pypi.org/project/mindie-turbo) is an LLM inference"
" engine acceleration plugin library developed by Huawei on Ascend "
"hardware, which includes self-developed LLM optimization algorithms and "
"optimizations related to the inference engine framework. It supports vLLM"
" Ascend since "
"[2.0rc1](https://www.hiascend.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev"
"/mindie-turbo-0001.html)."
msgstr ""
"[MindIE Turbo](https://pypi.org/project/mindie-turbo) "
"是华为在昇腾硬件上开发的一款用于加速LLM推理引擎的插件库,包含自主研发的大语言模型优化算法及与推理引擎框架相关的优化。 "
"[2.0rc1](https://www.hiascend.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev/mindie-"
"turbo-0001.html) 起,支持 vLLM Ascend。"
"是华为在昇腾硬件上开发的一款用于加速大语言模型推理引擎的插件库,包含自主研发的大语言模型优化算法及与推理引擎框架相关的优化。 "
"[2.0rc1](https://www.hiascend.com/document/detail/zh/mindie/20RC1/AcceleratePlugin/turbodev"
"/mindie-turbo-0001.html) 起,支持 vLLM Ascend。"
#: ../../community/user_stories/index.md:11
#: ../../source/community/user_stories/index.md:11
msgid ""
"[GPUStack](https://github.com/gpustack/gpustack) is an open-source GPU "
"cluster manager for running AI models. It supports vLLM Ascend since "
"[v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2), see more"
" GPUStack performance evaluation info on "
"[link](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)."
"[v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2). See "
"more GPUStack performance evaluation information at [this "
"link](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)."
msgstr ""
"[GPUStack](https://github.com/gpustack/gpustack) 是一个开源的 GPU 集群管理器,用于运行 AI "
"模型。从 [v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2) "
"版本开始支持 vLLM Ascend更多 GPUStack 性能评测信息见 "
"[链接](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)。"
"[GPUStack](https://github.com/gpustack/gpustack) 是一个开源的 GPU 集群管理器,用于运行 AI 模型。自 "
"[v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2) 起支持 vLLM "
"Ascend更多 GPUStack 性能评测信息请参见 "
"[链接](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew)。"
#: ../../community/user_stories/index.md:13
#: ../../source/community/user_stories/index.md:13
msgid ""
"[verl](https://github.com/volcengine/verl) is a flexible, efficient and "
"production-ready RL training library for large language models (LLMs), uses "
"vLLM Ascend since "
"[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0), see more "
"info on [verl x Ascend "
"Quickstart](https://verl.readthedocs.io/en/latest/ascend_tutorial/ascend_quick_start.html)."
"[verl](https://github.com/volcengine/verl) is a flexible, efficient, and "
"production-ready RL training library for LLMs. It uses vLLM Ascend since "
"[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0). See "
"more information on [verl x Ascend "
"Quickstart](https://verl.readthedocs.io/en/latest/ascend_tutorial/quick_start/ascend_quick_start.html)."
msgstr ""
"[verl](https://github.com/volcengine/verl) "
"是一个灵活、高效且可用于生产环境的大语言模型LLM强化学习训练库自 "
"[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0) 起支持 vLLM "
"Ascend更多信息请参见 [verl x Ascend "
"快速上手](https://verl.readthedocs.io/en/latest/ascend_tutorial/ascend_quick_start.html)。"
"是一个灵活、高效且可用于生产环境的大语言模型强化学习训练库自 "
"[v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0) 起,该库使用 "
"vLLM Ascend更多信息请参见 [verl x Ascend "
"快速入门](https://verl.readthedocs.io/en/latest/ascend_tutorial/quick_start/ascend_quick_start.html)。"

<code>docs/source/locale/zh_CN/LC_MESSAGES/community/user_stories/llamafactory.po</code>

@@ -4,84 +4,76 @@
# package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2025.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: vllm-ascend\n"
"Project-Id-Version: vllm-ascend\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-07-18 09:01+0800\n"
"POT-Creation-Date: 2026-04-14 09:08+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Language: zh_CN\n"
"Language-Team: zh_CN <LL@li.org>\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.17.0\n"
"Generated-By: Babel 2.18.0\n"
#: ../../community/user_stories/llamafactory.md:1
#: ../../source/community/user_stories/llamafactory.md:1
msgid "LLaMA-Factory"
msgstr "LLaMA-Factory"
#: ../../community/user_stories/llamafactory.md:3
msgid "**About / Introduction**"
msgstr "**关于 / 介绍**"
#: ../../source/community/user_stories/llamafactory.md:3
msgid "**Introduction**"
msgstr "**简介**"
#: ../../source/community/user_stories/llamafactory.md:5
msgid ""
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) is an easy-to-use "
"and efficient platform for training and fine-tuning large language models. "
"With LLaMA-Factory, you can fine-tune hundreds of pre-trained models locally"
" without writing any code."
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) is an easy-to-"
"use and efficient platform for training and fine-tuning large language "
"models. With LLaMA-Factory, you can fine-tune hundreds of pre-trained "
"models locally without writing any code."
msgstr ""
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) "
"是一个易于使用且高效的平台,用于训练和微调大型语言模型。有了 LLaMA-Factory你可以在本地对数百个预训练模型进行微调无需编写任何代码。"
"[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) 是一个易于使用且高效的平台,用于训练和微调大型语言模型。通过 LLaMA-Factory您可以在本地对数百个预训练模型进行微调无需编写任何代码。"
#: ../../source/community/user_stories/llamafactory.md:7
msgid ""
"LLaMA-Facotory users need to evaluate and inference the model after fine-"
"tuning the model."
msgstr "LLaMA-Facotory 用户需要在对模型进行微调后对模型进行评估和推理。"
"LLaMA-Factory users need to evaluate the model and perform inference "
"after fine-tuning."
msgstr "LLaMA-Factory 用户在完成微调后,需要对模型进行评估和推理。"
#: ../../source/community/user_stories/llamafactory.md:9
msgid "**Business challenge**"
msgstr "**业务挑战**"
#: ../../source/community/user_stories/llamafactory.md:11
msgid ""
"LLaMA-Factory used transformers to perform inference on Ascend NPU, but the "
"speed was slow."
msgstr "LLaMA-Factory 使用 transformers 在 Ascend NPU 上进行推理,但速度较慢。"
"LLaMA-Factory uses Transformers to perform inference on Ascend NPUs, but "
"the speed is slow."
msgstr "LLaMA-Factory 使用 Transformers 在昇腾 NPU 上进行推理,但速度较慢。"
#: ../../source/community/user_stories/llamafactory.md:13
msgid "**Benefits with vLLM Ascend**"
msgstr "**vLLM Ascend 带来的优势**"
#: ../../source/community/user_stories/llamafactory.md:15
msgid ""
"With the joint efforts of LLaMA-Factory and vLLM Ascend ([LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)), the "
"performance of LLaMA-Factory in the model inference stage has been "
"significantly improved. According to the test results, the inference speed "
"of LLaMA-Factory has been increased to 2x compared to the transformers "
"version."
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)), "
"LLaMA-Factory has achieved significant performance gains during model "
"inference. Benchmark results show that its inference speed is now up to "
"2× faster compared to the Transformers implementation."
msgstr ""
" LLaMA-Factory vLLM Ascend 的共同努力下(参见 [LLaMA-"
"Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)LLaMA-"
"Factory 在模型推理阶段的性能得到了显著提升。根据测试结果LLaMA-Factory 的推理速度相比 transformers 版本提升到了 2"
" 倍。"
"通过 LLaMA-Factory vLLM Ascend 的共同努力[LLaMA-Factory#7739](https://github.com/hiyouga/LLaMA-Factory/pull/7739)LLaMA-Factory 在模型推理阶段实现了显著的性能提升。基准测试结果表明,其推理速度相比 Transformers 实现最高提升了 2 倍。"
#: ../../source/community/user_stories/llamafactory.md:17
msgid "**Learn more**"
msgstr "**了解更多**"
#: ../../source/community/user_stories/llamafactory.md:19
msgid ""
"See more about LLaMA-Factory and how it uses vLLM Ascend for inference on "
"the Ascend NPU in the following documentation: [LLaMA-Factory Ascend NPU "
"See more details about LLaMA-Factory and how it uses vLLM Ascend for "
"inference on Ascend NPUs in [LLaMA-Factory Ascend NPU "
"Inference](https://llamafactory.readthedocs.io/en/latest/advanced/npu_inference.html)."
msgstr ""
"在以下文档中查看更多关于 LLaMA-Factory 以及如何在 Ascend NPU 上使用 vLLM Ascend 进行推理的信息:[LLaMA-"
"Factory Ascend NPU "
"推理](https://llamafactory.readthedocs.io/en/latest/advanced/npu_inference.html)。"
"有关 LLaMA-Factory 的更多详情以及如何在昇腾 NPU 上使用 vLLM Ascend 进行推理,请参阅 [LLaMA-Factory 昇腾 NPU 推理](https://llamafactory.readthedocs.io/en/latest/advanced/npu_inference.html)。"