
prototypist t1_j0c5p2j wrote

There have been attempts this year at building more human-like decoders for language models and testing which outputs humans prefer. Hugging Face Transformers supports typical decoding and contrastive search, and there are papers and code out for RankGen, Time Control, and Contrastive Decoding (which is totally different from contrastive search).
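For reference, the core of typical decoding (locally typical sampling) is to keep only tokens whose surprisal is close to the distribution's entropy, up to some probability mass — in Hugging Face Transformers this is the `typical_p` argument to `generate`. A minimal pure-Python sketch of the filtering step (function name and example distribution are mine, not from any of the papers):

```python
import math

def typical_filter(probs, mass=0.95):
    """Return the indices of tokens kept by typical decoding.

    Keeps tokens whose surprisal (-log p) is closest to the
    entropy of the distribution, adding them in that order until
    at least `mass` probability is covered.
    """
    # Shannon entropy of the next-token distribution
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Rank tokens by how "typical" they are: |surprisal - entropy|
    ranked = sorted(
        (i for i in range(len(probs)) if probs[i] > 0),
        key=lambda i: abs(-math.log(probs[i]) - entropy),
    )
    kept, total = [], 0.0
    for i in ranked:
        kept.append(i)
        total += probs[i]
        if total >= mass:
            break
    return sorted(kept)

# A peaked distribution over 4 tokens: the most-typical tokens
# (surprisal near the entropy) are kept first, so the low-probability
# tail token gets dropped.
print(typical_filter([0.5, 0.3, 0.15, 0.05], mass=0.9))  # -> [0, 1, 2]
```

Note that, unlike top-p, this can drop the single highest-probability token when its surprisal is far below the entropy, which is part of why its outputs read differently.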

3

Emergency_Apricot_77 OP t1_j0fe4lo wrote

Thanks for this! The typical decoding paper contains really useful information, close to what I was looking for.

1