Ducky181 t1_iw74m0s wrote

Besides just making the neural network larger, what other techniques could they employ to improve the accuracy of GPT-4 compared to its predecessor, GPT-3?

2

sext-scientist t1_iw77ylt wrote

Size is almost certainly the main limitation of these models. Recent research into how human brains process information suggests that current-generation language models have 6–9 orders of magnitude less compute than the human brain.

Hardware-wise, hopefully 3D-stacked silicon and smaller process nodes will narrow that gap over the next few years.
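The size of that gap depends heavily on what you count as an "op" on each side. A minimal back-of-envelope sketch in Python, using commonly cited ballpark figures (all assumptions on my part, each uncertain by 10x or more):

```python
import math

# Human brain: ~86e9 neurons, ~1e4 synapses each, firing up to ~10 Hz.
# Treating one synaptic event as ~1 op (a generous-but-simple assumption):
brain_ops_per_s = 86e9 * 1e4 * 10      # ~8.6e15 ops/s

# GPT-3 inference: ~175e9 parameters, ~2 FLOPs per parameter per token,
# generating on the order of 10 tokens/s:
gpt3_ops_per_s = 175e9 * 2 * 10        # ~3.5e12 FLOPs/s

gap = math.log10(brain_ops_per_s / gpt3_ops_per_s)
print(f"gap: ~{gap:.1f} orders of magnitude")
```

With this naive synaptic-event count the gap comes out closer to 3–4 orders of magnitude; estimates that assign much more compute per synaptic event (e.g. counting dendritic computation) are what push it toward the 6–9 range.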

1

avatarname t1_ix5auxp wrote

I sometimes wonder if our intelligence is just a question of scaling these things up, with some tweaking. We tend to think we are oh-so imaginative and inventive, and then on YouTube I discover that I have left pretty much the same comment, only worded differently, 13 years ago, 6 years ago, and now, on the same video I forgot I had watched before :D

1