ChuckSeven t1_iwqey5x wrote

What is the size of the OPT model you are comparing with in that table?

20

Competitive-Rub-1958 t1_iwqmaic wrote

It does need more parameters to compensate (for instance, it has nearly a billion more parameters than GPT-J-6B without substantial performance gains) while still losing out on LAMBADA (ignoring the weighted average, as I don't really see the point of weighting it, since that distorts the metrics).

It's an extremely interesting direction, but I fear that as you scale this model the scaling plot might start to flatten out, much like other RNN rewrites/variants. Hope further research is able to pinpoint the underlying issue and fix it. Till then, best of luck to OP! 👍

16

bo_peng OP t1_iwua2xh wrote

RWKV 7B is faster than GPT-J 6B, and RWKV actually scales great :)
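Roughly, the speed comes from RWKV's time-mixing being computable as a recurrence with a constant-size state per token, rather than attending over the whole context at every step. A toy numpy sketch of that idea (heavily simplified: single head, no projections, none of the numerical-stability tricks, and not the exact RWKV-4 parameterization):

```python
import numpy as np

def rwkv_timemix_step(k_t, v_t, state, w, u):
    """One recurrent step of a simplified RWKV-style time-mix.

    state = (num, den): running decayed sums of past values and weights.
    w: per-channel decay rate (> 0), u: per-channel bonus for the current token.
    """
    num, den = state
    # Output mixes the decayed history with the current token.
    out = (num + np.exp(u + k_t) * v_t) / (den + np.exp(u + k_t))
    # Decay the history and fold in the current token for the next step.
    num = np.exp(-w) * num + np.exp(k_t) * v_t
    den = np.exp(-w) * den + np.exp(k_t)
    return out, (num, den)

# Per-token cost and state size stay constant in sequence length,
# unlike attention, whose per-token cost grows with the context.
d = 8
state = (np.zeros(d), np.zeros(d))
w, u = np.full(d, 0.5), np.zeros(d)
for _ in range(16):  # pretend these are successive token steps
    k, v = np.random.randn(d), np.random.randn(d)
    y, state = rwkv_timemix_step(k, v, state, w, u)
```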

If you check the table, RWKV is better than GPT-Neo on everything at 3B (while the smaller RWKV models lag behind on LAMBADA).

But GPT-J uses rotary position embeddings and is therefore noticeably better than GPT-Neo, so I expect RWKV to surpass it at 14B.
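For reference, "rotary" means rotary position embeddings: query/key channel pairs are rotated by position-dependent angles, so attention scores depend on relative offsets. A minimal sketch of the idea (GPT-J's actual implementation interleaves channel pairs and applies the rotation to only part of each head):

```python
import numpy as np

def rotary(x, pos, base=10000.0):
    """Apply a rotary position embedding to a vector of even length.

    Pairs of channels are rotated by an angle that grows with position,
    so dot products between rotated queries and keys depend only on the
    relative offset between their positions.
    """
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequencies
    angles = pos * freqs
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * np.cos(angles) - x2 * np.sin(angles),
                           x1 * np.sin(angles) + x2 * np.cos(angles)], axis=-1)

# A query at position 5 and a key at position 2 interact as if offset by 3.
q = rotary(np.random.randn(64), pos=5)
k = rotary(np.random.randn(64), pos=2)
score = q @ k
```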

Moreover, RWKV 3B becomes stronger after being trained on more tokens, and I am doing the same for the 7B model.

8

CKtalon t1_iwqk0b9 wrote

It’s written in the 2nd column (params)

4