
__ingeniare__ OP t1_izu5acw wrote

True, it depends on where you draw the line. On the other hand, even something that is merely smarter than the smartest human would lead to recursive self-improvement as it develops better versions of itself, so truly god-like intelligence may not be far off after that point.
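
A minimal sketch of why that compounding matters, assuming (purely for illustration) that each generation's improvement scales with its current capability rather than staying constant:

```python
# Toy model of recursive self-improvement (illustrative only; the
# growth rule and the numbers are assumptions, not measurements).

capability = 1.0  # hypothetical baseline: 1.0 = smartest human
for generation in range(1, 11):
    # Each version designs a successor whose improvement is
    # proportional to its current capability, so the growth
    # rate itself accelerates every generation.
    capability += 0.1 * capability ** 2
    print(f"gen {generation}: capability = {capability:.2f}")
```

Under a constant improvement rate this would just be linear growth; making the rate depend on capability is what turns "slightly smarter than us" into runaway growth in this toy model.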

11

Cryptizard t1_izu5jlk wrote

Sort of, but look at how long it takes to train these models. Even if it can self-improve, it still might take years to get anywhere.

1

__ingeniare__ OP t1_izu745z wrote

It's hard to tell how efficient training will be in the future, though. According to rumours, GPT-4 training has already started, and thanks to a different architecture it will cost significantly less than GPT-3 did. There will be a huge incentive to make the process both cheaper and faster as AI development speeds up, and many start-ups are developing specialized AI hardware that will come into use in the coming years. Overall, it's hard to predict how this will play out.

6

BadassGhost t1_izvcxeg wrote

This is really interesting. I think I agree.

But I don't think this necessarily results in a fast takeoff to civilization-shifting ASI. It might initially be smarter than the smartest humans in general, but I don't know if it will be smarter than the smartest humans in any particular field at first. Will the first AGI be better at AI research than the best AI researchers at DeepMind, OpenAI, etc.?

Side note: it's ironic that we're discussing an AGI that is more general than any human but not expert-level in particular topics. Kind of the reverse of the past 70 years of AI research lol

1