squareOfTwo t1_j7ooiz6 wrote

just no, the rate is still too damn slow for that. Most of the "progress" is just training on yet-unused data (human-written text for GPT, text-image pairs for the Stable Diffusions of this world, etc.). This will end soon once there is no high-quality data left to train on. The end of "scale" is near.
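A back-of-envelope version of that claim. Every number below is a made-up assumption for illustration (stock of high-quality text, tokens consumed per frontier run, growth rate), not a measurement:

```python
# Toy estimate: when does high-quality training text run out?
STOCK_TOKENS = 10e12     # assumed stock of high-quality human text (~10T tokens)
used = 0.5e12            # assumed tokens consumed by a 2022-era frontier model
GROWTH_PER_YEAR = 2.0    # assumed yearly growth in tokens per training run

year = 2022
while used < STOCK_TOKENS:
    year += 1
    used *= GROWTH_PER_YEAR

print(f"Under these assumptions, runs hit the data ceiling around {year}")
# -> 2027 with these toy numbers; change the assumptions and the date moves,
#    but exponential consumption of a fixed stock always ends quickly.
```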

11

zendonium t1_j7ovw92 wrote

But surely that's all it takes? The human brain is just a multimodal network that processes language, vision, audio, and a bunch of other stuff.
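For what it's worth, that "multimodal network" picture is easy to sketch. The encoders, feature sizes, and fusion scheme below are illustrative assumptions, not a model of the brain or of any real lab's system:

```python
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    """Toy fusion model: one encoder per sense, a shared trunk on top."""
    def __init__(self, d_text=768, d_vision=1024, d_audio=512, d=256, n_out=10):
        super().__init__()
        self.enc_text = nn.Linear(d_text, d)      # pre-extracted text features in
        self.enc_vision = nn.Linear(d_vision, d)  # pre-extracted image features in
        self.enc_audio = nn.Linear(d_audio, d)    # pre-extracted audio features in
        self.trunk = nn.Sequential(               # shared processing over all senses
            nn.ReLU(), nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, n_out))

    def forward(self, text, vision, audio):
        z = torch.cat([self.enc_text(text),
                       self.enc_vision(vision),
                       self.enc_audio(audio)], dim=-1)
        return self.trunk(z)

net = MultimodalNet()
logits = net(torch.randn(2, 768), torch.randn(2, 1024), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 10])
```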

Pay 10,000 Kenyans $2 a day to get more training data across more senses and train more networks. We'll have narrow AGIs in almost all areas. It just needs putting together with some clever insight from some genius.
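Taken literally, the arithmetic on that labeling operation (wage and headcount from the comment above; the per-person throughput is an assumption):

```python
workers = 10_000
wage_per_day = 2.00            # USD/day, the figure from the comment
items_per_worker_day = 1_000   # assumed labeling throughput per person

daily_cost = workers * wage_per_day           # $20,000/day
daily_items = workers * items_per_worker_day  # 10M labeled examples/day

print(f"${daily_cost:,.0f}/day for {daily_items:,} examples/day")
print(f"~${daily_cost * 365 / 1e6:.1f}M/year "
      f"for ~{daily_items * 365 / 1e9:.2f}B examples/year")
```

Roughly $7.3M a year for a few billion labeled examples, which is cheap by frontier-lab standards.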

6

Cryptizard t1_j7p26uc wrote

If that were true, we could just train a model on all the AI research we have and get a “narrow AGI” that makes AI models. Singularity next week. Unfortunately, that is not how it works.

4

visarga t1_j7q4313 wrote

If they make GPT-N much larger, it will take longer and cost more to train, so we can only afford a few trials. Whether those trials are selected by humans or by AI makes little difference; it's going to be a crapshoot anyway, since nobody knows which experiment is going to win. This slow experimentation loop is one reason even an AGI couldn't always speed things up.
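A toy version of the "few trials" math, using the common ~6 · params · tokens estimate for training FLOPs; the hardware price and budget are assumptions picked for illustration:

```python
# How the number of affordable full training runs shrinks as GPT-N grows.
USD_PER_PFLOPS_DAY = 400.0   # assumed all-in price of one petaFLOP/s-day
BUDGET = 100e6               # assumed experimentation budget, USD

def train_cost_usd(params, tokens):
    flops = 6 * params * tokens        # standard training-FLOPs estimate
    pflops_days = flops / (1e15 * 86_400)
    return pflops_days * USD_PER_PFLOPS_DAY

for params, tokens in [(175e9, 300e9), (1e12, 2e12), (10e12, 20e12)]:
    cost = train_cost_usd(params, tokens)
    print(f"{params/1e9:>6,.0f}B params: ~${cost/1e6:>8,.1f}M per run, "
          f"{BUDGET/cost:>6.2f} runs on a $100M budget")
# 175B: ~$1.5M/run    -> ~69 runs
# 1T:   ~$55.6M/run   -> ~1.8 runs
# 10T:  ~$5,555.6M/run -> ~0.02 runs (can't afford even one)
```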

2