
JustOneAvailableName t1_izbnfki wrote

> What did he claim that he didn't achieve?

Connections to his work are often vague. Yes, his lab tried something in the same extremely general direction. No, his lab did not show that it actually worked, or which part of that broad direction worked. So I am not gonna cite Fast Weight Programmers when I want to write about transformers. Yes, Fast Weight Programmers also argued there are more ways to handle variable-sized input than RNNs. No, I don't think that idea is special at all. The main point of "Attention Is All You Need" was that removing part of the then-mainstream architecture made training faster (or allowed larger models) while keeping the quality. What made it special was the timing: they successfully went against the mainstream and made it work. It wasn't the idea itself.

5

undefdev t1_izbui6y wrote

> So I am not gonna cite Fast Weight Programmers when I want to write about transformers.

I think you are probably referring to this paper: "Linear Transformers Are Secretly Fast Weight Programmers".

It seems like they showed that linear transformers are equivalent to fast weight programmers. If linear transformers are relevant to your research, why not cite fast weight programmers? Credit is cheap, right? We can still call them linear transformers.
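For what it's worth, the core of that equivalence is simple enough to check numerically: unnormalized linear attention (with the feature map taken as the identity for simplicity; the paper uses a nonlinear one) computes the same outputs as a fast-weight net whose weight matrix is built up by outer-product updates. A minimal sketch, with all data randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 5  # feature dimension, sequence length

# Illustrative random keys, values, queries for T time steps.
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))
Q = rng.normal(size=(T, d))

# 1) Unnormalized causal linear attention:
#    y_t = sum_{i <= t} v_i * (k_i . q_t)
y_attn = np.zeros((T, d))
for t in range(T):
    for i in range(t + 1):
        y_attn[t] += V[i] * (K[i] @ Q[t])

# 2) Fast weight programmer: accumulate a weight matrix W via
#    outer products, then apply it to the current query.
W = np.zeros((d, d))
y_fwp = np.zeros((T, d))
for t in range(T):
    W += np.outer(V[t], K[t])  # "program" the fast weights
    y_fwp[t] = W @ Q[t]        # apply them

print(np.allclose(y_attn, y_fwp))  # True: same computation
```

Since sum_i v_i (k_i . q_t) = (sum_i v_i k_i^T) q_t, the two loops are algebraically the same thing, which is exactly the observation the paper's title refers to.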

4

JustOneAvailableName t1_izbzbaq wrote

Because Schmidhuber claiming that transformers are based on his work had been a meme for 3-4 years before he actually did it. Like here.

There are hundreds of more relevant papers to cite and read when it comes to (linear-scaling) transformers.

2

undefdev t1_izc3tr1 wrote

> Because Schmidhuber claiming that transformers are based on his work was a meme for 3-4 years before he actually did that. Like here.

But why should memes be relevant in science? Not citing someone because there are memes about them seems kind of arbitrary. If it's just memes, maybe we shouldn't take them too seriously.

7