[Build] Add installation script for fused_infer_attention_score kernel with flash decoding (#5402)

### What this PR does / why we need it?
Add an installation script for the `fused_infer_attention_score` kernel with flash decoding.

### User-facing changes
Users can install the `fused_infer_attention_score` kernel with the flash decoding feature by running `bash tools/install_flash_infer_attention_score_ops_a2.sh` or `bash tools/install_flash_infer_attention_score_ops_a3.sh`, as sketched below.
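
For example (a usage sketch; the `_a2`/`_a3` suffixes presumably select the target Ascend hardware generation, which the PR text does not spell out):

```bash
# Install the fused_infer_attention_score kernel with flash decoding,
# choosing the script that matches your device generation (assumed A2 vs. A3).
bash tools/install_flash_infer_attention_score_ops_a2.sh
# or
bash tools/install_flash_infer_attention_score_ops_a3.sh
```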

- vLLM version: release/v0.13.0
- vLLM main: 254f6b9867
---------
Signed-off-by: MengqingCao <cmq0113@163.com>


```diff
@@ -13,7 +13,7 @@ repos:
         args: [
           --toml, pyproject.toml,
           '--skip', 'csrc/**,tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**,.github/**,typos.toml',
-          '-L', 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue,CopyIn,ArchType,AND,ND'
+          '-L', 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue,CopyIn,ArchType,AND,ND,tbe'
         ]
         additional_dependencies:
           - tomli
```
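
For context, this hunk appends `tbe` to the codespell ignore list in the pre-commit config, presumably so references to Ascend's TBE (Tensor Boost Engine) operator framework stop being flagged as typos. A minimal sketch of the equivalent standalone invocation, using codespell's documented `--toml`, `--skip`, and `-L` flags:

```bash
# -L (--ignore-words-list) takes comma-separated words codespell must not
# flag as misspellings; 'tbe' is the entry this PR adds.
codespell --toml pyproject.toml \
  --skip 'csrc/**,tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**,.github/**,typos.toml' \
  -L 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue,CopyIn,ArchType,AND,ND,tbe'
```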