Submitted by Blutorangensaft t3_10ltyki in MachineLearning
crt09 t1_j6317t4 wrote
Just speaking from gut here, but you could go the other way around: take sentences with varying BLEU differences, encode them all, and see how distant their latent representations are. That way you wouldn't have to worry about the validity of the generated sentences, which might be a problem with the other direction (I think).
Blutorangensaft OP t1_j632b2s wrote
Using slightly different sentences to be decoded to the same sentence exists as an idea in the form of denoising autoencoders, yes. I plan to use this down the road, but for now I am interested in thinking about measuring performance.
crt09 t1_j633u7c wrote
I think there's miscommunication, it sounds like you think I'm proposing a training method but I'm suggesting how to measure smoothness.
If you have the BLEU distances between input sentences and the distances between their latents, you can measure how the two sets of distances relate, which I *think* would indicate smoothness. Or you could do some other measurements on the latents to see how smoothly(?) they are distributed? Tbh I'm not entirely sure what you mean by smooth, sorry.
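A minimal sketch of that measurement: compute pairwise BLEU distances between sentences and pairwise distances between their latent codes, then correlate the two. The smoothed sentence-level BLEU and the bag-of-words `toy_encode` below are hypothetical stand-ins (in practice you'd use your trained encoder and a proper BLEU implementation such as sacrebleu):

```python
import math
from collections import Counter
from itertools import combinations

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(ref, hyp, max_n=2):
    # Simplified smoothed BLEU: add-one smoothed n-gram precisions
    # with a brevity penalty. Identical sentences score 1.0.
    ref_t, hyp_t = ref.split(), hyp.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_c = Counter(ngrams(ref_t, n))
        hyp_c = Counter(ngrams(hyp_t, n))
        overlap = sum((ref_c & hyp_c).values())
        total = max(sum(hyp_c.values()), 1)
        precisions.append((overlap + 1) / (total + 1))
    score = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(ref_t) / max(len(hyp_t), 1)))
    return bp * score

def toy_encode(sentence, vocab):
    # Stand-in for the autoencoder's encoder: a bag-of-words count vector.
    counts = Counter(sentence.split())
    return [counts[w] for w in vocab]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

sentences = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a dog ran in the park",
    "the dog ran in the park",
]
vocab = sorted({w for s in sentences for w in s.split()})
pairs = list(combinations(sentences, 2))

bleu_dists = [1 - sentence_bleu(a, b) for a, b in pairs]
latent_dists = [math.dist(toy_encode(a, vocab), toy_encode(b, vocab))
                for a, b in pairs]

# A high correlation suggests latent distance tracks surface (BLEU)
# distance, which is one crude reading of "smoothness".
r = pearson(bleu_dists, latent_dists)
print(f"correlation between BLEU distance and latent distance: {r:.3f}")
```

With a real encoder you'd likely also want a rank correlation (Spearman) rather than Pearson, since the relationship between BLEU and latent distance needn't be linear.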
If you're looking to measure performance, wouldn't the loss from the training method you mentioned be useful?
Or are you looking for measuring performance on decoding side?
Blutorangensaft OP t1_j6344jf wrote
Ahh, I get you now, my apologies. I'm more interested in the performance on the decoding side indeed, because I want to later generate sentences in that latent space with another neural net and have them decoded to normal tokens.