Comments


Sashinii t1_j2bp5mo wrote

GPT-4 is rumored to release in either December (nope), January, or February. While we don't know for sure, it seems likely it'll release in 2023, and if it doesn't, there are other AIs to look forward to.

30

Superschlenz t1_j2bxhmp wrote

There will be no GPT-4. Microsoft did not pay OpenAI one billion dollars without a reason. The next GPT will be called GPT 2000™.

30

Superschlenz t1_j2c14oc wrote

Once again, the Giga-Parameter Terminator says: "Hasta la vista, baby!"

And I thought that "The machines are gonna kill us all! We must stop machine learning research!" had been replaced by "The machines will become conscious and suffer! We must stop machine learning research!"

0

Lawjarp2 t1_j2cb89x wrote

Rumoured to be Q1 2023. Not sure if that will be a public or a closed beta test. It's not a given that it will immediately be part of ChatGPT.

2

justowen4 t1_j2ckhjk wrote

I wonder if Microsoft planned this from the start? It's so perfect: GitHub, VS Code, Codex, Copilot.

2

visarga t1_j2czecj wrote

Let me lay out my GPT-4 speculations:

  • larger model?

If they go to 1T parameters, the model would be hard to use; even a demo might be impractical. I think they would prefer to keep it at the same size. In fact, it would be desirable to have a 10-30B model as good as GPT-3, to cut deployment costs. It's bloody expensive.

  • much more training data?

Most of the good training data has already been scraped, but maybe there is still some left to surprise us. Maybe they transcribed the whole of YouTube to generate a massive text dataset.

  • more task data?

This is feasible: recent papers have shown how you can bootstrap task+solution data by clever prompting. This self-generated task data is more diverse than human-generated data.

  • more problem data?

Maybe they are solving millions of coding and math problems, where it is possible to filter out garbage outputs by exact verification or code execution. This could bootstrap a model past human level, because it learns not from us but from execution feedback.

  • better human preferences data?

Probably not; if they had that, they would have used it on ChatGPT.

  • adding image and other modalities to text?

This could be the biggest change. It would open up language models to robotics and UI automation, with huge implications for the job market; these models would no longer be limited to a text box. But it is hard to do efficiently.

  • language model with toys?

Burning all that trivia into the weights of a model is inefficient. Instead, why not use a search engine to tell us the height of Everest? A search engine could be a great addition to a language model, as could a calculator or even code execution. Armed with these "toys", a language model could check factuality and ensure correct computations.
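To make the "toys" idea concrete, here's a minimal sketch in Python: instead of trusting the model's arithmetic, intercept a tool-call marker in its output and answer it with a real, safe calculator. The `CALC[...]` marker and the `answer` routing function are my own hypothetical conventions, not anything OpenAI has announced.

```python
import ast
import operator as op

# Operators the toy calculator is allowed to evaluate.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr):
    """Safely evaluate a basic arithmetic expression (no eval())."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(model_output):
    """Route tool calls: 'CALC[...]' goes to the calculator, else pass through."""
    if model_output.startswith("CALC[") and model_output.endswith("]"):
        return calc(model_output[5:-1])
    return model_output

# The model emits a tool call instead of guessing the number:
print(answer("CALC[8848 * 3.28]"))  # Everest's height in metres, converted to feet
```

The same routing trick extends naturally to a search API or a sandboxed code runner: the model decides *when* to call the tool, and the tool guarantees the result is correct.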
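The "filter out garbage outputs by code execution" idea a few bullets up can also be sketched in a few lines: sample many candidate solutions, run each one, and keep only those that pass exact tests. Here `generate_candidates` is a hypothetical stand-in for sampling completions from a model.

```python
def generate_candidates(problem):
    # Placeholder: a real system would sample many completions from an LLM.
    return [
        "def add(a, b): return a - b",   # wrong
        "def add(a, b): return a + b",   # right
        "def add(a, b): return a * b",   # wrong
    ]

def passes_tests(code, tests):
    """Run candidate code in a scratch namespace and check it against the tests."""
    ns = {}
    try:
        exec(code, ns)
        return all(ns["add"](a, b) == expected for a, b, expected in tests)
    except Exception:
        return False

tests = [(1, 2, 3), (0, 0, 0), (-1, 1, 0)]
verified = [c for c in generate_candidates("add two numbers")
            if passes_tests(c, tests)]
# Only the correct candidate survives the filter; verified problem/solution
# pairs like this can then be fed back in as training data.
```

The point of the speculation is exactly this loop: the filter is mechanical, so the training signal doesn't depend on human labels and isn't capped at human skill.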

As for the date? Probably not in the next 2-3 months, as they just released ChatGPT to great acclaim; they've got to milk the moment for all the PR it's worth. The rumours about GPT-4 sound pretty bullish, and I hope they're true.

6