[main][Docs] Fix spelling errors across documentation (#6649)
Fix various spelling mistakes in the project documentation to improve
clarity and correctness.
- vLLM version: v0.15.0
- vLLM main: d7e17aaacd
---------
Signed-off-by: SlightwindSec <slightwindsec@gmail.com>
@@ -282,7 +282,7 @@ msgstr "11. 如何运行 w8a8 DeepSeek 模型?"
 
 #: ../../faqs.md:87
 msgid ""
-"Please following the [inferencing tutorail](https://vllm-"
+"Please following the [inferencing tutorial](https://vllm-"
 "ascend.readthedocs.io/en/latest/tutorials/multi_node.html) and replace model"
 " to DeepSeek."
 msgstr ""
@@ -459,7 +459,7 @@ msgstr ""
 "install` 进行安装,或者使用 `python setup.py clean` 清除缓存。"
 
 #: ../../faqs.md:130
-msgid "18. How to generate determinitic results when using vllm-ascend?"
+msgid "18. How to generate deterministic results when using vllm-ascend?"
 msgstr "18. 使用 vllm-ascend 时如何生成确定性结果?"
 
 #: ../../faqs.md:131
@@ -475,5 +475,5 @@ msgstr ""
 "sample)**,例如:"
 
 #: ../../faqs.md:158
-msgid "Set the following enveriments parameters:"
+msgid "Set the following environment parameters:"
 msgstr "设置以下环境参数:"
@@ -695,7 +695,7 @@ msgstr ""
 #: ../../user_guide/release_notes.md:118
 msgid ""
 "Qwen3 and Qwen3MOE is supported now. The performance and accuracy of Qwen3 "
-"is well tested. You can try it now. Mindie Turbo is recomanded to improve "
+"is well tested. You can try it now. Mindie Turbo is recommended to improve "
 "the performance of Qwen3. [#903](https://github.com/vllm-project/vllm-"
 "ascend/pull/903) [#915](https://github.com/vllm-project/vllm-ascend/"
 "pull/915)"
@@ -1204,7 +1204,7 @@ msgid ""
 "visit [official guide](https://docs.vllm.ai/en/latest/getting_started/"
 "v1_user_guide.html) to get more detail. By default, vLLM will fallback to "
 "V0 if V1 doesn't work, please set `VLLM_USE_V1=1` environment if you want "
-"to use V1 forcely."
+"to use V1 forcibly."
 msgstr ""
 "本版本包含了对 vLLM V1 引擎的实验性支持。你可以访问[官方指南](https://docs."
 "vllm.ai/en/latest/getting_started/v1_user_guide.html)获取更多详细信息。默认"
@@ -1335,7 +1335,7 @@ msgstr ""
 
 #: ../../user_guide/release_notes.md:232
 msgid ""
-"Add Ascend Custom Ops framewrok. Developers now can write customs ops using "
+"Add Ascend Custom Ops framework. Developers now can write customs ops using "
 "AscendC. An example ops `rotary_embedding` is added. More tutorials will "
 "come soon. The Custom Ops compilation is disabled by default when "
 "installing vllm-ascend. Set `COMPILE_CUSTOM_KERNELS=1` to enable it. [#371]"
@@ -1512,7 +1512,7 @@ msgstr ""
 msgid ""
 "Improved and reduced the garbled code in model output. But if you still hit "
 "the issue, try to change the generation config value, such as "
-"`temperature`, and try again. There is also a knonwn issue shown below. Any "
+"`temperature`, and try again. There is also a known issue shown below. Any "
 "[feedback](https://github.com/vllm-project/vllm-ascend/issues/267) is "
 "welcome. [#277](https://github.com/vllm-project/vllm-ascend/pull/277)"
 msgstr ""