
Matthew2229 t1_jduyi8o wrote

It's a memory issue. Since the attention matrix scales quadratically, O(N^2), with sequence length N, we simply run out of memory for long sequences. Much of the development around transformers/attention has targeted this specific problem.
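
To make the scaling concrete, here's a minimal sketch (not any particular library's implementation) of naive attention plus a back-of-the-envelope memory estimate; shapes, dtype, and the helper name `naive_attention` are assumptions for illustration:

```python
import torch

def naive_attention(q, k, v):
    # q, k, v: (N, d) -- single head, no batching, for clarity
    scores = q @ k.T / (q.shape[-1] ** 0.5)   # (N, N): the quadratic term
    weights = torch.softmax(scores, dim=-1)   # still (N, N)
    return weights @ v                        # (N, d)

# fp32 score matrix alone costs N * N * 4 bytes, regardless of model dim d
for n in (1_024, 8_192, 65_536):
    print(f"N={n:>6}: score matrix ~ {n * n * 4 / 2**30:.2f} GiB")
```

At N = 65,536 the score matrix alone is on the order of 16 GiB in fp32, which is why long-context work focuses on avoiding ever materializing the full N x N matrix.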

2