
masterofn1 t1_jdu8jug wrote

How does a Transformer architecture handle inputs of different lengths? Is the sequence length limit inherent to the architecture itself, or is it more a matter of resource constraints like memory?


Matthew2229 t1_jduyi8o wrote

It's mostly a memory issue rather than anything inherent to the architecture: the attention computation itself works on any sequence length, and within a batch, shorter sequences are simply padded and masked. The problem is that the attention matrix scales quadratically (N^2) with sequence length (N), so we run out of memory for long sequences. Much of the development around transformers/attention, e.g. sparse attention, linear-attention approximations, and memory-efficient kernels like FlashAttention, has targeted this specific problem.
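To make the scaling concrete, here's a minimal sketch of vanilla scaled dot-product attention in PyTorch (the shapes, padding-mask convention, and memory numbers are my own illustration, not from the thread). The intermediate score matrix is N x N per head, and that term is what blows up for long sequences:

```python
import torch

def naive_attention(q, k, v, pad_mask=None):
    """Vanilla scaled dot-product attention.

    q, k, v:  (batch, heads, N, d_head)
    pad_mask: (batch, 1, 1, N), True where a position is padding (optional)

    The intermediate `scores` tensor has shape (batch, heads, N, N) --
    this N^2 term is what exhausts memory for long sequences.
    """
    d_head = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5      # (B, H, N, N)
    if pad_mask is not None:
        scores = scores.masked_fill(pad_mask, float("-inf"))  # ignore padded positions
    attn = scores.softmax(dim=-1)
    return attn @ v                                        # (B, H, N, d_head)

# Rough fp32 memory for the scores tensor alone, assuming batch=1, heads=16:
for n in (1_024, 8_192, 65_536):
    gib = 1 * 16 * n * n * 4 / 2**30
    print(f"N={n:>6}: attention scores ~ {gib:,.1f} GiB")
```

With those assumed settings, the score matrix alone goes from tens of MiB at N=1,024 to hundreds of GiB at N=65,536, which is why long-context work focuses on avoiding or approximating the full N x N matrix.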
