
kaityl3 t1_j9xu9li wrote

If not much sooner. It was only in mid-2020 that GPT-3 was released. Look how far the field has come in less than three years.


visarga t1_ja57ahr wrote

Yes, we've come far. But how did we get here?

  1. We had a "wild" GPT-3 in 2020. It could barely follow instructions, yet it was still the largest leap in capability the field had ever seen.

  2. Then they figured out that training the model on a mixture of many tasks unlocks general instruction-following ability. That was the Instruct series.

  3. But it was still hard to make the model "behave"; it was not aligned with us. So why did we get another miracle here? Reinforcement learning historically had little to do with NLP, yet RLHF became the crown jewel of the GPT series. With it we got ChatGPT and Bing Chat. (A toy sketch of the core reward-modelling idea follows this list.)
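
Since RLHF is the pivot of point 3, here is a minimal sketch of just its reward-modelling half, under heavy assumptions: the toy data, the linear reward model, and the best-of-N reranking at the end are stand-ins for illustration, not the actual GPT pipeline (which uses a neural reward model followed by RL fine-tuning such as PPO).

```python
# Toy sketch of reward modelling from human preference pairs (Bradley-Terry loss:
# -log(sigmoid(r_chosen - r_rejected))). All data and the linear model are invented.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Pretend embeddings of (prompt, response) pairs; in practice these come from the LLM.
chosen = rng.normal(0.5, 1.0, size=(200, dim))     # responses humans preferred
rejected = rng.normal(-0.5, 1.0, size=(200, dim))  # responses humans rejected

w = np.zeros(dim)  # linear reward model r(x) = w . x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the pairwise preference loss.
lr = 0.1
for step in range(200):
    margin = chosen @ w - rejected @ w             # r_chosen - r_rejected
    p = sigmoid(margin)                            # P(chosen is preferred)
    grad = -((1 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

print("preference accuracy:", (chosen @ w > rejected @ w).mean())

# Crude stand-in for the RL step: use the reward to pick the best of N candidates.
candidates = rng.normal(0.0, 1.0, size=(4, dim))
best = candidates[np.argmax(candidates @ w)]
```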

None of these three moments was guaranteed based on what we knew at the time. They are improbable things. Before 2020, language models did nothing of the sort: they were factories of word salad that could barely write two lines of coherent English.

My point is that we have no reason to expect these miracles to keep arriving in such quick succession. We can't count on them recurring consistently.

What we can rely on are the parts we can extrapolate now. We expect models at least 10x larger than GPT-3 and trained on much more data. We know how to make models 10x more efficient. We expect language models to improve a lot when combined with external modules such as search, Python code execution, a calculator, a calendar, and databases; we're not even 10% of the way there with external resources (a sketch of that tool-use pattern is below). We expect that integrating vision, audio, actions, and other modalities will have a huge impact, and we're just starting. LLMs are still pure text.
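
To make the "external modules" idea concrete, here is a minimal, hypothetical sketch of a tool-use loop. The `call_llm` stub, the `TOOL:` line convention, and the two tools are invented for illustration; a real system would put an actual model call and a richer tool protocol in their place.

```python
# Hypothetical sketch: the model emits a tool request, the dispatcher runs the tool,
# and the result is fed back so the model can compose the final answer.
import ast
import datetime
import operator

def safe_calc(expr: str) -> str:
    """Evaluate a basic arithmetic expression without eval()."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expr, mode="eval").body))

TOOLS = {
    "calc": safe_calc,                                   # arithmetic the model is bad at
    "today": lambda _: datetime.date.today().isoformat(),  # fresh facts the model lacks
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned replies for the demo."""
    if "Observation:" in prompt:
        obs = prompt.split("Observation:")[1].split("\n")[0].strip()
        return f"The result is {obs}."
    return "TOOL: calc 12*(7+5)"

def answer(question: str) -> str:
    reply = call_llm(question)
    if reply.startswith("TOOL:"):
        name, _, args = reply[len("TOOL:"):].strip().partition(" ")
        observation = TOOLS[name](args)
        # Second pass: give the model the tool result to compose a final answer.
        reply = call_llm(f"{question}\nObservation: {observation}\nAnswer:")
    return reply

print(answer("What is 12*(7+5)?"))
```

The design point is simply that the model delegates what it is weak at (arithmetic, fresh data, retrieval) and only has to decide when to call a tool and how to use the result.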

I think we can expect a 10x to 1000x boost just based on what we know right now.
