Commit Graph

2 Commits

Author | SHA1 | Message | Date
csoulnd
97dbcaf919 [BugFix][310P][v0.18.0] Use CPU generator cache for sampling (#8624)
### What this PR does / why we need it?
This PR introduces a caching mechanism for CPU-based `torch.Generator`
objects in the `_random_sample_310p` function to optimize sampling
performance. It includes unit tests for cache persistence and state
recovery. Review feedback highlighted a critical bug: keying the cache by
batch index instead of the generator's identity can break RNG
reproducibility when requests are re-scheduled, and it also noted a
potential memory leak in the global cache.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Tested via new unit tests in `tests/ut/_310p/sample/test_sampler_310.py`
verifying cache logic and error handling.

---------

Signed-off-by: csoulnd <daidaicurry@foxmail.com>
2026-04-24 09:34:14 +08:00
Shaoxu Cheng
82e17f693a [BugFix][0.18.0][310p] fix post-sampling not working in graph mode on 310p (#8077)
### What this PR does / why we need it?

Enabling temperature in post-processing on 310P devices can cause the
service to stall and eventually hang. We first traced the issue to a
timeout where the temperature-related `div` operator was waiting for
results from a sub-stream. After investigating the preceding operators,
we finally identified the root cause as the `q.exponential_()` operator,
which is not well supported on 310P and triggers an internal issue in
the `add` kernel.
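For context, `exponential_()` typically appears in the exponential-race sampling trick: drawing `argmax_i p_i / E_i` with `E_i ~ Exp(1)` selects index `i` with probability `p_i`. Below is a minimal pure-Python sketch of that formulation (the function name and loop structure are illustrative, not the actual kernel code); on devices where the in-place `exponential_()` kernel misbehaves, computing the noise on the CPU side is one possible workaround.

```python
import math
import random

def exponential_sample(probs, rng):
    """Sample an index i with probability probs[i] via the
    exponential-race trick: argmax_i probs[i] / E_i with E_i ~ Exp(1).
    Mirrors the q.exponential_() / div / argmax pattern in tensor form."""
    best_i, best_v = 0, float("-inf")
    for i, p in enumerate(probs):
        # Inverse-transform draw from Exp(1); 1 - random() is in (0, 1].
        e = -math.log(1.0 - rng.random())
        v = p / e if e > 0 else float("inf")
        if v > best_v:
            best_i, best_v = i, v
    return best_i
```

Over many draws the empirical frequencies converge to `probs`, which is why the div-by-exponential-noise step is a drop-in replacement for categorical sampling.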

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
This patch was thoroughly tested locally (accuracy tests on datasets and
a stress test). It is not easy to design a proper unit test for this
case, and I appreciate your understanding.

Signed-off-by: Tflowers-0129 <2906339855@qq.com>
2026-04-09 16:31:38 +08:00