[Core][Misc] Clean up ProfileExecuteDuration (#6461)

### What this PR does / why we need it?
This PR removes the custom `ProfileExecuteDuration` utility and its
usages across the codebase. This utility was used for profiling
execution duration of different stages in the inference process. It is
replaced by the standard `vllm.v1.utils.record_function_or_nullcontext`,
which integrates with PyTorch's profiler.

This change simplifies the code by removing a custom implementation in
favor of an upstream utility, improving maintainability. Associated
documentation and tests for `ProfileExecuteDuration` are also removed.
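The replacement follows the standard "profiler range or no-op" pattern: when profiling is active, a stage is wrapped in a named range; otherwise the context manager does nothing. Below is a minimal self-contained sketch of that pattern. The real `vllm.v1.utils.record_function_or_nullcontext` wraps `torch.profiler.record_function`; the `events` collector and the simplified signature here are illustrative stand-ins, not the actual vLLM API.

```python
from contextlib import contextmanager, nullcontext
import time

# Hypothetical stand-in for torch.profiler.record_function, for illustration only.
@contextmanager
def _record(name, events):
    start = time.perf_counter()
    try:
        yield
    finally:
        events.append((name, time.perf_counter() - start))

def record_function_or_nullcontext(name, events=None):
    # Mirrors the pattern of vllm.v1.utils.record_function_or_nullcontext:
    # a named profiling range when a collector is active, else a no-op context.
    return _record(name, events) if events is not None else nullcontext()

events = []
with record_function_or_nullcontext("prepare_inputs", events):
    sum(range(1000))          # stage is timed and recorded
with record_function_or_nullcontext("execute_model"):
    pass                      # profiling off: zero-overhead no-op
print([name for name, _ in events])
```

Because the no-op path returns `nullcontext()`, call sites need no `if profiling_enabled:` branches, which is what made the custom `ProfileExecuteDuration` wrapper redundant.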

### Does this PR introduce _any_ user-facing change?
Yes. The `VLLM_ASCEND_MODEL_EXECUTE_TIME_OBSERVE` environment variable is removed.

### How was this patch tested?
CI passed. This is a cleanup that replaces a custom utility with a standard one, so existing tests cover the functionality; the tests specific to `ProfileExecuteDuration` are removed along with it.

Related RFC: #5304

- vLLM version: v0.14.1
- vLLM main: dc917cceb8

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Author: wangxiyuan
Date: 2026-02-01 20:06:01 +08:00
Committed by: GitHub
Parent: 775fbc4cd2
Commit: b4aafd4293
10 changed files with 12 additions and 244 deletions


```diff
@@ -33,8 +33,6 @@ e2e-singlecard:
     estimated_time: 300
   - name: tests/e2e/singlecard/test_multistream_overlap_shared_expert.py
     estimated_time: 200
-  - name: tests/e2e/singlecard/test_profile_execute_duration.py
-    estimated_time: 10
   - name: tests/e2e/singlecard/test_quantization.py
     estimated_time: 200
   - name: tests/e2e/singlecard/test_sampler.py
```