Submitted by benanne t3_107g3yf in MachineLearning
benanne OP t1_j427zj0 wrote
Reply to comment by chodegoblin69 in [R] Diffusion language models by benanne
One indirect advantage when working with very long sequences is the lack of a causality constraint, which makes it very easy to use architectures where computation is largely decoupled from the sequence length, such as Perceivers (https://arxiv.org/abs/2103.03206, https://arxiv.org/abs/2107.14795) or Recurrent Interface Networks (https://arxiv.org/abs/2212.11972). This is highly speculative, though :)
(I am aware that an autoregressive variant of the Perceiver architecture exists (https://arxiv.org/abs/2202.07765), but it is actually quite a bit less general/flexible than Perceiver IO / the original Perceiver.)
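For anyone wondering what "computation decoupled from the sequence length" looks like in practice, here's a rough PyTorch sketch (my own toy code, not taken from any of the papers above; all names and shapes are illustrative) of a Perceiver-style cross-attention read: a fixed-size learned latent array attends to the full input, so everything downstream scales with the number of latents rather than the sequence length, and no causal mask is involved.

```python
# Hypothetical sketch of a Perceiver-style cross-attention "read":
# a fixed-size latent array attends to an arbitrarily long input sequence.
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, d_model=512, n_latents=256, n_heads=8):
        super().__init__()
        # Fixed-size learned latent array: its length does not grow with the input.
        self.latents = nn.Parameter(torch.randn(n_latents, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model) -- seq_len can be very large.
        b = x.shape[0]
        q = self.latents.unsqueeze(0).expand(b, -1, -1)  # (batch, n_latents, d_model)
        # Cross-attention cost is O(seq_len * n_latents); no causal mask is needed,
        # since a diffusion model denoises the whole sequence jointly.
        out, _ = self.attn(query=q, key=x, value=x, need_weights=False)
        # Subsequent processing operates only on the latents.
        return self.norm(out + q)

# Usage: a 16k-token sequence is compressed into 256 latents, so any further
# self-attention blocks scale with 256 rather than 16384.
x = torch.randn(2, 16384, 512)
latents = LatentCrossAttention()(x)
print(latents.shape)  # torch.Size([2, 256, 512])
```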
chodegoblin69 t1_j44uao7 wrote
Thank you, I will check those out.
Diffusion’s lack of a causality constraint seems like a pretty tall hurdle for tasks whose output format requires “fluency” (like summarization), though. Kind of like drawing hands early on in Stable Diffusion (or drawing most anything coherently for earlier models like Disco Diffusion). Multiple-choice question answering seems like a more natural domain, though it certainly doesn’t show off the “expressive” generative abilities. Fluency probably improves significantly with scale and fine-tuning, though.