genshiryoku t1_j6ahc38 wrote

I think the next 5 years will bring explosive AI progress, but sudden and rapid stagnation will follow, and then an AI winter.

The reason I think this is that we're rapidly running out of training data: bigger and bigger models are essentially trained on all the data available on the internet. Once that data is used up, there will be nothing new for larger models to train on.

Since hardware is already stagnating and data is running out, the only way to make progress will be breakthroughs on the AI architectural front, and that kind of progress tends to be linear again.

I'm a computer scientist by trade, and while I work with AI systems daily and keep up with AI papers, I'm not an AI expert, so I could be wrong on this front.

13

visarga t1_j6arwxp wrote

Generating data through RL, as in AlphaGo or "Evolution through Large Models" (ELM), seems to show a way out. Not all data is equally useful to the model; for example, problem- and task-solving data is more valuable than raw organic text.

Basically, use an LLM to generate candidates and another system to evaluate them, in order to filter for the useful data examples. A rough sketch of that loop is below.
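A minimal sketch of that generate-then-filter loop, assuming placeholder stubs (`generate_candidates` and `score` are hypothetical names, not a real ELM or AlphaGo API):

```python
import random

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Placeholder for sampling n candidate solutions from an LLM.
    return [f"{prompt} -> attempt {i}" for i in range(n)]

def score(candidate: str) -> float:
    # Placeholder for an independent evaluator: unit tests, a reward
    # model, or a simulator (the game engine, in AlphaGo's case).
    return random.random()

def make_training_data(prompts, n=8, threshold=0.9):
    """Keep only generated examples that the evaluator verifies."""
    dataset = []
    for prompt in prompts:
        for cand in generate_candidates(prompt, n):
            if score(cand) >= threshold:  # filter out low-value samples
                dataset.append((prompt, cand))
    return dataset

print(len(make_training_data(["sort a list", "reverse a string"])))
```

The key design point is that the evaluator is independent of the generator, so the kept examples add real signal instead of recycling the LLM's own biases.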

5

DarkCeldori t1_j6bc0b8 wrote

The brain can learn even from very little data. A baby who grows up in a mostly empty room hearing its parents' voices still becomes fully competent within a few years.

If AI begins to use brain-like algorithms, and given that it can undergo the equivalent of millions of years of training, data will not be a problem.

4

PreferenceIll5328 t1_j6d98c1 wrote

The brain is also pre-trained through billions of years of evolution. It isn't a completely blank slate.

4

DarkCeldori t1_j6db3zz wrote

IIRC only about 25 MB of design data for the brain lies in the genome, which is insufficient to specify ~100 trillion connections. Most of the brain, particularly the neocortex, appears to be a blank slate. Outside of prewiring such as the overall connectivity between areas, it appears the learning algorithms are the special sauce.
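As a back-of-envelope check on those two figures (using the comment's own numbers, nothing measured):

```python
# Back-of-envelope: can ~25 MB of genomic "design data" specify
# ~100 trillion synaptic connections? (Figures from the comment above.)

genome_bits = 25 * 1024**2 * 8   # ~25 MB expressed in bits, ~2.1e8
connections = 100e12             # ~100 trillion synapses

print(f"{genome_bits / connections:.2e} bits of genome per connection")
# => ~2.10e-06 bits/connection: roughly a million connections per genome
# bit, so the wiring detail must come from learning, not the genome.
```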

There are plenty of animals with just as much baked in, and they show very limited intelligence.

1

GoSouthYoungMan t1_j6c4zym wrote

But the brain appears to have massively more effective compute than even the largest AI systems. The Chinchilla scaling laws suggest we need much larger systems.
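For reference, a small sketch of the Chinchilla rule of thumb (roughly 20 training tokens per parameter, per Hoffmann et al. 2022; the model sizes below are arbitrary examples):

```python
# Chinchilla rule of thumb: compute-optimal training uses about
# 20 tokens per model parameter (Hoffmann et al., 2022).
TOKENS_PER_PARAM = 20

def optimal_tokens(params: float) -> float:
    """Approximate compute-optimal training tokens for a model size."""
    return TOKENS_PER_PARAM * params

for params in (70e9, 500e9, 1e12):
    print(f"{params:.0e} params -> ~{optimal_tokens(params):.1e} tokens")
# A 1-trillion-parameter model would want ~2e13 (20 trillion) tokens,
# which is why the data supply becomes the binding constraint.
```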

1

DarkCeldori t1_j6cxa7w wrote

I don't think the brain's prowess lies in more effective compute but rather in its more efficient algorithms.

IIRC mimicking the brain's sparsity allowed ANNs to get 10x to 100x more performance, and that is just one aspect of the brain's algorithms. https://youtu.be/XoP3dnvj4P0
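A toy illustration of k-winners-take-all activation sparsity in that spirit (my own sketch, not code from the linked talk; the real speedups require sparse kernels that skip the zeroed units entirely):

```python
import numpy as np

def topk_sparse(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest activations per row; zero the rest."""
    out = np.zeros_like(activations)
    idx = np.argpartition(activations, -k, axis=-1)[..., -k:]
    vals = np.take_along_axis(activations, idx, axis=-1)
    np.put_along_axis(out, idx, vals, axis=-1)
    return out

x = np.random.randn(2, 10)
print(topk_sparse(x, k=2))  # ~80% of units silent, as in cortical activity
```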

3

BehindThyCamel t1_j6aj21s wrote

Do you think that 5-year period of progress will include training models on audiovisual material (movies, documentaries, etc.)? Or are we too far, technologically, from the capacity required for that? Or is that not even a direction worth pursuing?

2