Analog_AI t1_j2ed2s8 wrote
Reply to comment by troll_khan in GPT-3 scores better than humans on Raven’s Progressive Matrices in a display of emergent analogical reasoning by visarga
They were saying it won’t be larger than GPT-3. They want to focus on finessing it and squeezing more out of it, customizing and pre-training, before they increase the size any further. It will be much better than its predecessor anyway. If this approach yields a good harvest, they won’t increase the size of GPT-5 either. They will only do so if they hit a wall.
Borrowedshorts t1_j2ffld8 wrote
It seems like most AI companies have been doing this for now. I wonder if they're optimizing toward a local maximum instead of the global one, and whether the global maximum can only be reached through further scale.
Analog_AI t1_j2fjzjc wrote
They will not stop increasing size. It's more like taking a breather and squeezing progress out of maximizing pre-training. They're also waiting for further cost reductions before scaling up. Just a breather. And they're working on tech beyond the GPT models too: more integration of vision recognition and other things will come. Some form of weak AGI will be here in the next 5 years.