
Zermelane t1_izvvg1v wrote

I enjoy all the comments completely failing to get that OP wasn't making an argument from fast capability gain post-AGI.

FWIW, I don't really 100% agree with the argument myself. Integration and generalization have costs. If, for instance, you just want to generate random images of human faces, our best text-to-image diffusion models are much, much more expensive to train and run than an unconditional StyleGAN2 trained on FFHQ, and they still have a hard time matching how well it does at that one task. These costs might turn out to be very large once we're really trying to do AGI.

That said, you can take the fast capability gain argument and make it relevant here again: having an AGI should make it a lot easier to take all the research we've done into reaching superhuman capability in all sorts of narrow domains, and integrate it into one agent.

If nothing fancier, that might simply mean writing the glue code to, say, stand up an AlphaGo instance and call out to it whenever someone asks you to play Go, etc. That does indeed get you an agent that, as far as you can tell from the outside, is an AGI and is also superhuman at Go.
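That "call out to the specialist" idea is basically a dispatcher. A minimal sketch, where `GoEngine` and `GeneralAgent` are hypothetical stand-ins (not AlphaGo's real API) and the routing is a toy keyword check:

```python
class GoEngine:
    """Stand-in for a superhuman narrow system like AlphaGo."""
    def best_move(self, board_state: str) -> str:
        # A real engine would search the game tree; this is a placeholder.
        return "D4"

class GeneralAgent:
    """Looks general from the outside; delegates Go to the specialist."""
    def __init__(self):
        self.go = GoEngine()

    def handle(self, request: str) -> str:
        # Toy routing: a real agent would classify the task properly.
        if "go move" in request.lower():
            return self.go.best_move(request)
        return "I'll reason about that myself."  # fallback generalist path
```

From the outside, callers can't tell whether the answer came from the generalist or the narrow engine, which is exactly the point.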


__ingeniare__ OP t1_izwas4g wrote

Glad to see someone got it hahah. Yeah, that's something I thought about as well. It's a general "law" for any machine that the more specialized it is, the better it is at that task, and the more general it is, the worse it will be at any one given task, all else being equal.

I think the integration cost depends a lot on how interconnected the different capabilities must be. For example, it wouldn't be a huge leap to combine ChatGPT with Stable Diffusion or an image classifier since they use a common interface (language). But I don't know if that will be sufficient for AGI/ASI. I agree that it may turn out to be harder than expected and the performance would suffer as a consequence, good input!
