ntaylor- t1_je11vt1 wrote

Fairly sure the "final" GPT-4 model is still using a generate function that predicts one token at a time. It's just that the training was good and complicated, via RLHF. After training, it's not doing any "complicated operations".
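Something like this, conceptually (a toy sketch, not GPT-4's actual code; the model() function here is a dummy stand-in for the real transformer forward pass):

```python
import numpy as np

# Dummy stand-in for the network: maps a token sequence to logits over a
# 10-token vocabulary. In the real model this is the full transformer forward pass.
def model(tokens):
    rng = np.random.default_rng(sum(tokens))  # deterministic fake logits
    return rng.normal(size=10)

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)                 # one forward pass...
        tokens.append(int(np.argmax(logits)))  # ...predicts exactly one next token
    return tokens

print(generate([1, 2, 3]))  # prompt plus 5 predicted tokens
```

However fancy RLHF made the weights, generation is still just this loop, one token per pass.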


was_der_Fall_ist t1_je15397 wrote

You don’t think the neural network, going through hundreds of billions of parameters each time it calculates the next token, is doing anything complicated?


ntaylor- t1_je5qtl2 wrote

Nope. It's the same as all neural networks that use the transformer architecture: just a big old series of matrix multiplications with some non-linear transformations, at the end of the day.
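One feed-forward sub-layer, stripped to its essentials (a minimal numpy sketch; real models add attention, biases, residual connections and layer norm, and typically use GELU instead of ReLU):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
x = rng.normal(size=(1, d_model))      # one token's hidden state

W1 = rng.normal(size=(d_model, d_ff))  # weights fixed after training
W2 = rng.normal(size=(d_ff, d_model))

h = np.maximum(0, x @ W1)              # matrix multiply + ReLU non-linearity
out = h @ W2                           # another matrix multiply
print(out.shape)                       # (1, 8)
```

Stack a few dozen of these (plus attention, which is more matrix multiplications and a softmax) and you have the whole forward pass.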


was_der_Fall_ist t1_je6lfl9 wrote

Why are matrix multiplications mutually exclusive with complicated operations?

A computer just goes through a big series of 0s and 1s, yet through layers of abstraction it accomplishes amazing things far more complicated than a naive person would think 0s and 1s could represent and do. Why not the same for a massive neural network trained via gradient descent to optimize an objective by means of matrix multiplication?
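To make that concrete (a toy sketch, obviously nothing like GPT-4's scale or training setup): gradient descent on nothing but matrix multiplies and adds is already enough to recover an unknown mapping.

```python
import numpy as np

# One-matrix "network" learning Y = X @ W_true by gradient descent on
# mean squared error: only multiplies and adds, repeated.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(4, 2))
X = rng.normal(size=(100, 4))
Y = X @ W_true

W = np.zeros((4, 2))
lr = 0.05
for _ in range(200):
    grad = X.T @ (X @ W - Y) / len(X)  # gradient of the squared error
    W -= lr * grad                     # simple update rule, repeated
print(np.abs(W - W_true).max())        # ~0: the target weights were recovered
```

If that humble recipe can learn an exact mapping, it's not obvious where to put a ceiling on what a hundreds-of-billions-of-parameters version of it can do.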
