Adding temperature scaling on Joiner logits: (#789)

* Adding temperature scaling on Joiner logits:

- T is hard-coded to 2.0
- the best result so far is NCE 0.122 (still not very high)
    - the BPE scores were rescaled by 0.2 (but then incorrect words also
      get high confidence; visually reasonable histograms appear at scale 0.5)
    - the BPE->WORD score merging is done by the min(.) function
      (the probability product and the arithmetic, geometric, and harmonic
      means were also tried; see the sketch after this list)

- without temperature scaling (i.e. scale 1.0), the best NCE was 0.032 (in that case, product merging was best)
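
For concreteness, here is a small self-contained sketch of the scoring pipeline described above: a temperature-scaled softmax over the joiner logits gives per-BPE-token confidences, which are then merged into word confidences with min(.). The function names and the token-to-word grouping are illustrative, not the actual sherpa-onnx code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch only; names and structure are made up, not the
// actual sherpa-onnx implementation.

// Per-token confidence: softmax probability of the emitted token after the
// joiner logits are divided by the temperature T (T = 2.0 in this commit).
float TokenConfidence(const std::vector<float> &logits, int emitted,
                      float temperature = 2.0f) {
  float max_logit = *std::max_element(logits.begin(), logits.end());
  double denom = 0.0;
  for (float l : logits) {
    denom += std::exp((l - max_logit) / temperature);
  }
  double num = std::exp((logits[emitted] - max_logit) / temperature);
  return static_cast<float>(num / denom);
}

// BPE -> WORD merging: the word confidence is the minimum of the
// confidences of the BPE tokens that form the word.
float WordConfidence(const std::vector<float> &token_confidences) {
  return *std::min_element(token_confidences.begin(),
                           token_confidences.end());
}
```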

Results seem consistent with: https://arxiv.org/abs/2110.15222

Everything was tuned on a very small set of 100 sentences (813 words, 10.2% WER), using a Czech model.
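
For reference, assuming NCE here is the standard normalized cross-entropy used for evaluating word confidences, it is defined as

$$\mathrm{NCE} = \frac{H_{\max} - H(\mathrm{conf})}{H_{\max}}, \qquad p_c = \frac{n_c}{N},$$

$$H_{\max} = -\,n_c \log p_c - (N - n_c)\log(1 - p_c),$$

$$H(\mathrm{conf}) = -\sum_{i \in \text{correct}} \log c_i \;-\; \sum_{j \in \text{incorrect}} \log(1 - c_j),$$

where N is the number of words, n_c the number of correctly recognized words, and c_i the confidence assigned to word i. A perfect confidence estimator gives NCE = 1, while always outputting the constant p_c gives NCE = 0.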

I also experimented with mixing blank posteriors into the BPE confidences,
but found no NCE improvement, so I am not pushing that.

Temperature scaling was also added to the greedy-search confidences.

* making `temperature_scale` configurable from outside
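
A minimal sketch of what "configurable from outside" could look like: a config struct carrying `temperature_scale` that is passed into the decoder's constructor. The struct and class names here are hypothetical and do not match the real sherpa-onnx API:

```cpp
// Hypothetical plumbing sketch; names do not match the real sherpa-onnx API.
struct DecoderConfig {
  float temperature_scale = 2.0f;  // 1.0 disables the scaling
};

class GreedySearchDecoder {
 public:
  explicit GreedySearchDecoder(const DecoderConfig &config)
      : temperature_scale_(config.temperature_scale) {}

 private:
  float temperature_scale_;  // divides the joiner logits before log-softmax
};
```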
Author: Karel Vesely
Date: 2024-04-26 03:44:26 +02:00
Committed by: GitHub
Parent: 15772d2150
Commit: 2e45d327a5
9 changed files with 107 additions and 30 deletions

@@ -144,6 +144,10 @@ void OnlineTransducerGreedySearchDecoder::Decode(
     // export the per-token log scores
     if (y != 0 && y != unk_id_) {
+      // apply temperature-scaling
+      for (int32_t n = 0; n < vocab_size; ++n) {
+        p_logit[n] /= temperature_scale_;
+      }
       LogSoftmax(p_logit, vocab_size);  // renormalize probabilities,
                                         // save time by doing it only for
                                         // emitted symbols
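
For completeness, a numerically stable in-place log-softmax consistent with the call above. The signature is assumed from the snippet; the actual helper in the repository may differ:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// In-place log-softmax over n logits, using the usual max-subtraction trick
// for numerical stability. Signature assumed from the call site above.
void LogSoftmax(float *p, int32_t n) {
  float max_val = *std::max_element(p, p + n);
  double sum = 0.0;
  for (int32_t i = 0; i != n; ++i) {
    sum += std::exp(p[i] - max_val);
  }
  float log_sum = max_val + static_cast<float>(std::log(sum));
  for (int32_t i = 0; i != n; ++i) {
    p[i] -= log_sum;  // log p_i = logit_i - logsumexp(logits)
  }
}
```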