[Performance] Change the shape of kv_cache to avoid view of k_cache and v_cache. (#204)

This PR changes the shape of the KV cache so that k_cache and v_cache no
longer need to be materialized as views.
It also caches the metadata of k_cache and v_cache to avoid duplicate
slice operations, improving performance.
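The idea can be sketched roughly as follows (hypothetical names, with numpy standing in for torch tensors; this is not the actual vLLM-Ascend code): the combined cache is allocated as one tensor whose leading dimension of 2 holds the key and value halves, and those halves are sliced once and cached, so later lookups reuse the stored views instead of repeating the slice.

```python
import numpy as np

class KVCache:
    """Sketch: one tensor holds both K and V; the halves are sliced once.

    Hypothetical layout: (2, num_blocks, block_size, num_heads, head_dim),
    where index 0 is the key cache and index 1 is the value cache.
    """

    def __init__(self, num_blocks=4, block_size=16, num_heads=2, head_dim=8):
        self.kv_cache = np.zeros(
            (2, num_blocks, block_size, num_heads, head_dim), dtype=np.float16)
        # Slice once and cache the result; numpy (like torch) returns a
        # view, so no data is copied and later calls skip the slice.
        self.k_cache = self.kv_cache[0]
        self.v_cache = self.kv_cache[1]

    def caches(self):
        # Reuse the cached views instead of re-slicing on every call.
        return self.k_cache, self.v_cache

cache = KVCache()
k, v = cache.caches()
k[0, 0, 0, 0] = 1.0  # writes through the view into the shared tensor
assert cache.kv_cache[0, 0, 0, 0, 0] == 1.0
```

Because the views alias the shared tensor, writes through k or v are visible in kv_cache without any copy.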

Signed-off-by: hw_whx <wanghexiang7@huawei.com>
Author: whx
Date: 2025-03-05 10:51:07 +08:00
Committed via: GitHub
Parent: 562fa673e5
Commit: 0d3463400a
3 changed files with 28 additions and 23 deletions


@@ -266,6 +266,12 @@ class NPUWorker(LocalOrDistributedWorkerBase):
                 self.parallel_config, self.device_config)
             for _ in range(self.parallel_config.pipeline_parallel_size)
         ]
+        import torch_npu
+        for ve in range(self.parallel_config.pipeline_parallel_size):
+            num_layers = len(self.cache_engine[ve].gpu_cache)
+            for i in range(num_layers):
+                torch_npu.npu_format_cast(self.cache_engine[ve].gpu_cache[i],
+                                          2)
         self.gpu_cache = [
             self.cache_engine[ve].gpu_cache
             for ve in range(self.parallel_config.pipeline_parallel_size)