Adding temperature scaling on Joiner logits (#789)
* Adding temperature scaling on Joiner logits:
- T is hard-coded to 2.0
- the best result so far is NCE 0.122 (still not very high)
- the BPE scores were rescaled by 0.2 (but then incorrect words also get
high confidence; visually, the 0.5 scale gives more reasonable histograms)
- BPE->WORD score merging is done by the min(.) function
(also tried the probability product and the arithmetic, geometric, and harmonic means)
- without temperature scaling (i.e., scale 1.0), the best NCE was 0.032 (there, product merging was best)
Results seem consistent with: https://arxiv.org/abs/2110.15222
Everything was tuned on a very small set of 100 sentences (813 words, 10.2% WER) from a Czech model.
I also experimented with blank posteriors mixed into the BPE confidences,
but found no NCE improvement, so I am not pushing that.
Temperature scaling was also added to the greedy-search confidences.
* making `temperature_scale` configurable from outside
@@ -15,8 +15,13 @@ namespace sherpa_onnx {
 class OnlineTransducerGreedySearchDecoder : public OnlineTransducerDecoder {
  public:
   OnlineTransducerGreedySearchDecoder(OnlineTransducerModel *model,
-                                      int32_t unk_id, float blank_penalty)
-      : model_(model), unk_id_(unk_id), blank_penalty_(blank_penalty) {}
+                                      int32_t unk_id,
+                                      float blank_penalty,
+                                      float temperature_scale)
+      : model_(model),
+        unk_id_(unk_id),
+        blank_penalty_(blank_penalty),
+        temperature_scale_(temperature_scale) {}
 
   OnlineTransducerDecoderResult GetEmptyResult() const override;
 
@@ -29,6 +34,7 @@ class OnlineTransducerGreedySearchDecoder : public OnlineTransducerDecoder {
   OnlineTransducerModel *model_;  // Not owned
   int32_t unk_id_;
   float blank_penalty_;
+  float temperature_scale_;
 };
 
 }  // namespace sherpa_onnx