Fix: export-onnx.py (expected all tensors to be on the same device) (#1699)
Because `SenseVoiceSmall.from_pretrained()` calls
`funasr.auto.auto_model.AutoModel.build_model()`, whose default device is cuda
(in environments where CUDA is available):
```py
device = kwargs.get("device", "cuda")
if not torch.cuda.is_available() or kwargs.get("ngpu", 1) == 0:
device = "cpu"
kwargs["batch_size"] = 1
kwargs["device"] = device
```
while the tensors created in export-onnx.py default to cpu, which leads to:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
So we specify cpu explicitly when loading the model.
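The funasr device-selection logic quoted above can be mirrored in a small standalone sketch to show why passing `device="cpu"` fixes the mismatch: an explicit `device` in `kwargs` survives the check, while omitting it falls back to `"cuda"` whenever CUDA is available. `pick_device` is a hypothetical helper, and the `torch.cuda.is_available()` call is replaced by a plain boolean so the sketch runs without torch:

```python
def pick_device(cuda_available: bool, kwargs: dict) -> str:
    # Mirror of funasr's default-device logic (sketch; the real code
    # checks torch.cuda.is_available() instead of taking a boolean).
    device = kwargs.get("device", "cuda")  # default is cuda, not cpu
    if not cuda_available or kwargs.get("ngpu", 1) == 0:
        device = "cpu"
        kwargs["batch_size"] = 1
    kwargs["device"] = device
    return device

# On a CUDA machine, omitting device yields "cuda" -> mismatch with
# cpu tensors; passing device="cpu" explicitly keeps everything on cpu.
print(pick_device(True, {}))               # "cuda"
print(pick_device(True, {"device": "cpu"}))  # "cpu"
print(pick_device(False, {}))              # "cpu"
```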
```diff
@@ -119,7 +119,7 @@ def display_params(params):


 def main():
-    model, params = SenseVoiceSmall.from_pretrained(model="iic/SenseVoiceSmall")
+    model, params = SenseVoiceSmall.from_pretrained(model="iic/SenseVoiceSmall", device="cpu")
     display_params(params)

     generate_tokens(params)
```