Remove VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION (#4860)

VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION has been enabled by default for a
long time. Let's remove it now.
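For context, vllm-ascend registers its environment variables as a dict of lazily-evaluated callables, so each variable is read from the environment at access time rather than at import time. The snippet below is a minimal, illustrative sketch of that pattern (not the full envs.py); note that `bool(int(...))` accepts only integer strings, so a value like "true" would raise ValueError.

```python
import os
from typing import Any, Callable, Dict

# Minimal sketch of the lazy env-variable registry pattern:
# each value is a zero-argument callable, so the environment is
# read at attribute-access time rather than at import time.
env_variables: Dict[str, Callable[[], Any]] = {
    # The flag removed by this commit: "1" (the default) -> True, "0" -> False.
    "VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION":
    lambda: bool(
        int(os.getenv("VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION", '1'))),
}


def __getattr__(name: str) -> Any:
    # PEP 562 module-level __getattr__: accessing envs.<NAME> on the
    # module evaluates the corresponding callable on demand.
    if name in env_variables:
        return env_variables[name]()
    raise AttributeError(f"module has no attribute {name!r}")
```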

- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Author: wangxiyuan
Date: 2025-12-10 23:50:18 +08:00 (committed by GitHub)
Parent: 37db0844f5
Commit: 08441baedd
Changes: 2 changed files with 2 additions and 14 deletions


@@ -92,11 +92,6 @@ env_variables: Dict[str, Callable[[], Any]] = {
     "VLLM_ASCEND_KV_CACHE_MEGABYTES_FLOATING_TOLERANCE":
     lambda: int(
         os.getenv("VLLM_ASCEND_KV_CACHE_MEGABYTES_FLOATING_TOLERANCE", 64)),
-    # Whether to enable the topk optimization. It's enabled by default. Please set to False if you hit any issue.
-    # We'll remove this flag in the future once it's stable enough.
-    "VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION":
-    lambda: bool(
-        int(os.getenv("VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION", '1'))),
     # Whether to enable mla_pa for deepseek mla decode, this flag will be removed after its available torch_npu is public accessible
     # and the mla_pa will be the default path of deepseek decode path.
     "VLLM_ASCEND_MLA_PA":