Submitted by Intelligent_Rough_21 t3_1056c2e in MachineLearning

Out-of-copyright books only, of course.

Hi, I was wondering if I could fine-tune a GPT-3 model to take a book, likely in HTML, Markdown, or plain text, and convert it to SSML. To do that, I would need a bunch of hand-made SSML files paired with their source text to fine-tune on. I've already got tooling to split a book up and handle formatting (pandoc, csplit), and then I could use AWS Polly or one of the other engines to do really good text-to-speech.
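For the fine-tuning data, I'm imagining something like the sketch below, which pairs plain-text chunks with their hand-made SSML in the prompt/completion JSONL format that GPT-3 fine-tuning expects; the directory layout, file names, and separator tokens are just placeholders.

```python
# Rough sketch, assuming a directory of paired files
# (chapter_01.txt / chapter_01.ssml, ...) that have already been split
# into chunks small enough for the model's context window.
import json
from pathlib import Path

data_dir = Path("ssml_pairs")           # hypothetical directory of hand-made pairs
out_path = Path("finetune_data.jsonl")

with out_path.open("w", encoding="utf-8") as out:
    for txt_file in sorted(data_dir.glob("*.txt")):
        ssml_file = txt_file.with_suffix(".ssml")
        if not ssml_file.exists():
            continue
        # Fixed separator/stop tokens so the model knows where the prompt
        # ends and where the completion should stop.
        prompt = txt_file.read_text(encoding="utf-8").strip() + "\n\n###\n\n"
        completion = " " + ssml_file.read_text(encoding="utf-8").strip() + " END"
        out.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```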

Anyone have a dataset?

Comments

geneing t1_j3e1573 wrote

I looked for such a dataset once, years ago, but couldn't find anything. I don't think it's needed anymore, though: current neural-network-based TTS systems are really good at producing speech with the right intonation from the text alone.

Intelligent_Rough_21 OP t1_j3enes5 wrote

I don’t think they take language context into account the way completion models do; they just say the words with limited memory of the surrounding text. Hopefully research will unify the two somehow.

geneing t1_j3exq14 wrote

Having trained multiple TTS models, I disagree. The prosody is actually impressively accurate. Moreover, even homographs are handled surprisingly well (e.g. the word "read" is pronounced with the correct tense when it can be deduced from the sentence).

Intelligent_Rough_21 OP t1_j3frivp wrote

OK, I’ll admit to only having used neural models, not trained them. AWS Polly was incredibly monotone the last time I used it.

geneing t1_j3g1gwa wrote

Most likely you were using the original Polly voices, which are based on concatenative synthesis: gluing together recorded sounds for the different phonemes. That approach produces monotone speech.

Try Google WaveNet. It's available through the Google Cloud API, just like Polly.

There's a neural version of Polly, but I never tried it.
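If it helps, here's a minimal sketch of calling a WaveNet voice through the google-cloud-texttospeech client library; the voice name, SSML snippet, and output file are placeholders, and it assumes your credentials are already configured.

```python
# Minimal sketch: synthesize a short SSML snippet with a WaveNet voice.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(ssml="<speak>My gosh.</speak>")
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",  # any of the WaveNet voices works here
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("sample.mp3", "wb") as f:
    f.write(response.audio_content)
```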

Intelligent_Rough_21 OP t1_j3g2vvp wrote

Yeah, I was using neural Polly, which is roughly equivalent to WaveNet. What I discovered is that it always renders the same sentence, and usually the same word used in the same way, identically, regardless of context clues. “My gosh.” would always come out exactly the same. It really needs paragraph- or dialogue-driven context, as well as a bit of randomization. In a book where the author has a repetitive go-to word or phrase, it’s a killer.
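To illustrate what I mean by a bit of randomization, here's a toy sketch that jitters the SSML prosody of a repeated phrase so it doesn't render identically every time; the rate and pitch ranges are arbitrary, not tuned values.

```python
# Toy sketch: wrap a repeated phrase in <prosody> tags with small random
# variations in rate and pitch before sending the SSML to the TTS engine.
import random

def jitter_prosody(phrase: str) -> str:
    rate = random.randint(90, 110)   # percent of normal speaking rate
    pitch = random.randint(-5, 5)    # percent shift up or down
    return f'<prosody rate="{rate}%" pitch="{pitch:+d}%">{phrase}</prosody>'

ssml = "<speak>" + " ".join(jitter_prosody("My gosh.") for _ in range(3)) + "</speak>"
print(ssml)
```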

geneing t1_j3hzfpy wrote

I think what you are looking for is called "expressive TTS". There have been a ton of papers in the last couple of years on the topic. Many provide code.

I've had some success with simply preserving the hidden state of the network from one sentence to the next.
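Roughly, the idea looks like the sketch below (a toy GRU encoder, not my actual model): keep the recurrent hidden state alive across sentence boundaries instead of resetting it, so each sentence's prosody can depend on what came before.

```python
# Toy sketch of carrying hidden state across sentences.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=256, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids, hidden=None):
        x = self.embed(token_ids)
        out, hidden = self.rnn(x, hidden)  # hidden carried in from the previous sentence
        return out, hidden

encoder = SentenceEncoder()
sentences = [torch.randint(0, 256, (1, 20)) for _ in range(3)]  # dummy token ids

hidden = None
for sent in sentences:
    features, hidden = encoder(sent, hidden)
    # `features` would feed the rest of the TTS stack (decoder, vocoder, ...)
```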

SSML may not be expressive enough for your application.
