
babua t1_j6khgfr wrote

I don't think it stops there either; a streaming architecture probably breaks core assumptions of some speech models. For STT, when do you commit to a word? For TTS, how do you produce the right intonation for a sentence when you don't yet know its second half? You'd have to retrain your entire model for the streaming case and create new data augmentations -- and you'll probably sacrifice some performance even in the best case, simply because the model has to deal with more uncertainty.

3
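The "when do you commit to a word" problem above is often handled with a stability heuristic: only emit words once they have stopped changing across successive partial hypotheses. A minimal sketch (the function name, the `k`-chunk threshold, and the sample hypotheses are all illustrative assumptions, not from the thread):

```python
def stable_prefix(hypotheses, k=3):
    """Return the longest word prefix shared by the last k partial
    hypotheses -- a simple heuristic for deciding when a streaming
    STT system can safely commit words instead of waiting for more audio."""
    if len(hypotheses) < k:
        return []  # not enough evidence yet to commit anything
    recent = [h.split() for h in hypotheses[-k:]]
    prefix = []
    for words in zip(*recent):
        if all(w == words[0] for w in words):
            prefix.append(words[0])
        else:
            break  # hypotheses diverge here; everything after is unstable
    return prefix

# Simulated partial hypotheses from successive audio chunks:
partials = ["the", "the cat", "the cat sat", "the cat sat on"]
print(stable_prefix(partials, k=3))  # → ['the', 'cat']
```

The trade-off the comment describes shows up directly: a larger `k` means fewer retractions but higher latency.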

jiamengial OP t1_j6mbv97 wrote

That's a good point - CTC and attention mechanisms work on the assumption that you've got the whole audio segment up front

2
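The "whole segment" assumption can be made concrete by comparing the attention masks an offline model and a chunked streaming model would use. A minimal NumPy sketch (the chunk size and mask construction are illustrative assumptions, not from the thread):

```python
import numpy as np

def full_context_mask(T):
    # Offline attention: every frame may attend to the entire segment,
    # including future audio -- the assumption streaming breaks.
    return np.ones((T, T), dtype=bool)

def chunked_streaming_mask(T, chunk=2):
    # Chunked streaming attention: frame t only sees audio up to the
    # end of its own chunk, never beyond it.
    idx = np.arange(T)
    chunk_end = np.minimum((idx // chunk + 1) * chunk, T)
    return idx[None, :] < chunk_end[:, None]

offline = full_context_mask(6)
streaming = chunked_streaming_mask(6, chunk=2)
```

Early frames in the streaming mask lose access to most of the sequence, which is one way to see why retraining (and some accuracy loss) is hard to avoid.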