
visarga t1_j6aeq98 wrote

Scaling model size continues, but collecting more organic data is basically over; we're at the limit. So the only way forward is to generate more, but that needs humans in the loop to check quality. It's also possible to generate data and verify it with math, code execution, simulation or other means. And AnthropicAI showed a pure-LLM way to bootstrap more data (RLAIF / Constitutional AI).
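A minimal sketch of that generate-then-verify idea, with code execution as the verifier (the sample format and helper names are made up for illustration):

```python
# Keep only generated arithmetic samples whose claimed answer matches
# what actually executing the expression gives back.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def verify(sample: dict) -> bool:
    """A generated (expression, claimed_answer) pair survives only if it checks out."""
    return safe_eval(sample["expression"]) == sample["claimed_answer"]

# In practice these would come from an LLM; hard-coded here.
generated = [
    {"expression": "12 * 7 + 5", "claimed_answer": 89},   # correct, kept
    {"expression": "12 * 7 + 5", "claimed_answer": 90},   # wrong, filtered out
]
verified_data = [s for s in generated if verify(s)]
print(verified_data)
```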

I bet OpenAI is just taking the quickest route now. For example, we know that instruction-tuning on ~1,800 tasks makes a model generalise to many unseen tasks (Flan-T5). But OpenAI might have 10,000 tasks to train their model on, hence the superior abilities. They also put more effort into RLHF, so they got a more helpful model.

Besides pure organic text, there are other sources - transcribed or described video is a big one. They released the Whisper model, and it's possible they're using it to transcribe massive video datasets. Then there are walled gardens - social networks generate tons of text, though not of the best quality. There's also the possibility of dressing data collection up as gameplay and getting people to willingly provide exactly the data that's needed.
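For what it's worth, the open-source Whisper package makes that transcription pipeline trivial to prototype (the file path and model size below are just placeholders):

```python
# Sketch of transcribing video audio with OpenAI's open-source Whisper package
# (pip install openai-whisper; needs ffmpeg). "talk.mp4" is a placeholder path.
import whisper

model = whisper.load_model("base")      # small multilingual checkpoint
result = model.transcribe("talk.mp4")   # ffmpeg pulls the audio track out of the video
print(result["text"])                   # plain-text transcript, ready for a text corpus
```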

12

VirtualHat t1_j6bi3xk wrote

Video and audio might be the next frontier, although I'm not too sure how useful it would be. YouTube receives over 500 hours of uploads per minute, providing an essentially unlimited pipe of training data.

6

luaks1337 t1_j6chxhv wrote

Also, spoken words differ a lot from thoughtful written text. Training on 1:1 transcriptions would yield bad results in terms of grammar and readability. They could solve this by using a GPT model to rewrite the transcriptions, but then you're training AI on AI output, which could introduce bias.
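Something like this, as a rough sketch (pre-1.0 openai Python client; the model name and prompt are illustrative, not anything OpenAI has confirmed doing):

```python
# Rewrite a raw speech transcript into clean written prose with a GPT model.
# Purely illustrative - not a description of anyone's actual pipeline.
import openai

openai.api_key = "sk-..."  # placeholder

def clean_transcript(raw: str) -> str:
    prompt = (
        "Rewrite the following speech transcript as clear, well-punctuated "
        "written prose without changing its meaning:\n\n" + raw
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=0.3,
    )
    return response.choices[0].text.strip()
```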

1

VirtualHat t1_j6ckblf wrote

I was thinking next frame prediction, perhaps conditioned on the text description or maybe a transcript. The idea is you could then use the model to generate a video from a text prompt.

I suspect this is far too difficult to achieve with current algorithms. It's just interesting that the training data is all there, and would be many, many orders of magnitude larger than GPT-3's training set.
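To make the idea concrete, a toy version of text-conditioned next-frame prediction might look something like this (an assumed architecture for illustration, not anything any lab has published):

```python
# Toy next-frame predictor conditioned on a text embedding, trained with MSE.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, text_dim=512, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.text_proj = nn.Linear(text_dim, hidden)   # inject caption/transcript embedding
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1),
        )

    def forward(self, frame, text_emb):
        h = self.encoder(frame)                            # (B, hidden, H/4, W/4)
        cond = self.text_proj(text_emb)[:, :, None, None]  # broadcast over spatial dims
        return self.decoder(h + cond)                      # predicted next frame

# One training step on dummy data
model = NextFramePredictor()
frame_t = torch.randn(2, 3, 64, 64)    # current frames
frame_t1 = torch.randn(2, 3, 64, 64)   # ground-truth next frames
text_emb = torch.randn(2, 512)         # e.g. from a frozen text encoder
loss = nn.functional.mse_loss(model(frame_t, text_emb), frame_t1)
loss.backward()
```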

2

luaks1337 t1_j6clz9v wrote

Ah, I thought you meant that video and audio would be the next step for text mining.

I believe OpenAI confirmed they're already working on a text-to-video model. My guess would be that current algorithms could do it, but that it would be far too expensive to train on videos.

2

currentscurrents t1_j6btqta wrote

Frankly though, there's got to be a way to do it with less data. The typical human brain takes in maybe a million words of English and about 8,000 hrs of video per year of life (and that's assuming dreams somehow count as generative training data - halve that if you only get to count the waking world).

We need something beyond transformers. They were a great breakthrough in 2017, but we're not going to get to AGI just by scaling them up.

6

visarga t1_j6c1rmo wrote

Humans are harder to scale, and it took billions of years of evolution to get here, with enormous resource and energy usage. A brain shaped by evolution is already fit for the environmental niche it has to inhabit. An AI model has none of that - no evolution selecting an internal structure that's already near-optimal - so it has to compensate by learning these things from tons of raw data. We are great at some tasks that relate to our survival, but bad at others, even worse than other animals or AIs - we are not generally intelligent either.

Also, most AIs don't have real-time interaction with the world. They only have restricted text interfaces or APIs - no robotic bodies, no way to do interventions that would distinguish causal relations from correlations. When an AI has a feedback loop from the environment, it gets much better at solving tasks.

3

vivehelpme t1_j6cno58 wrote

22 hours of video content per day?

1

currentscurrents t1_j6e4get wrote

I rounded. Data collection is like astronomy: it's the order of magnitude that matters.
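Spelled out, the rounding is roughly:

```python
# Back-of-envelope behind the exchange above (order of magnitude only)
print(8000 / 365)   # ~21.9 hrs/day - the "22 hours a day" being questioned
print(24 * 365)     # 8,760 hrs in a year, so ~8,000 is just a round number
```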

1

MysteryInc152 t1_j6jkmus wrote

The human brain has trillions of synapses (the closest biological equivalent to parameters), is multimodal, and has been fine-tuned by evolution.

1

currentscurrents t1_j6m3ik5 wrote

We could make models with trillions of parameters, but we wouldn't have enough data to train them. Multimodality definitely allows some interesting things, but all existing multimodal models still require billions of training examples.
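For a rough sense of scale, using the ~20-tokens-per-parameter compute-optimal heuristic from the Chinchilla paper (my assumption here, not a figure from this thread):

```python
# Rough data requirement for a trillion-parameter model, assuming the
# ~20 tokens per parameter compute-optimal heuristic (Chinchilla).
params = 1e12
tokens_needed = 20 * params
print(f"{tokens_needed:.0e} tokens")   # ~2e13 tokens of training text
```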

More efficient architectures must be possible - evolution has probably discovered one of them.

1