Submitted by Lesterpaintstheworld t3_11ccqjr in singularity
Lesterpaintstheworld OP t1_ja402r0 wrote
Reply to comment by AsheyDS in Raising AGIs - Human exposure by Lesterpaintstheworld
One of the difficulties has been stitching together different types of data (text & images, other percepts, or even lower levels). I wonder what approaches could be relevant.
AsheyDS t1_ja46bxe wrote
I'm not sure if you'll find anything useful, but you could look into DeepMind's Gato, which is 'multi-modal' and what some might consider 'Broad AI'. The problem with it, and what you're running into, is that there's no easy way to train it, and you'll still have issues with things like transfer learning. That's why we haven't reached AGI yet: we need a method for generalization.

Looking at humans, we can easily compare one unrelated thing to another because we can recognize one or more similarities. Those similarities are what we need to look for in everything, to find a root link we can use as the basis for a generalization method (patterns and shapes in the data, perhaps). It shouldn't be that hard for us to figure out, since we're limited by the types of data that can be input (through our senses) and what we can output (mostly just vocalizations, plus fine and gross motor control). The only thing that makes it more complex is how we combine those things into new structures. So I would stay focused on the basics of I/O to figure out generalization.
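To make the "recognize similarities across unrelated data types" idea concrete, here is a minimal sketch of one common approach: project each modality into a shared embedding space and compare with cosine similarity. This is not Gato's actual architecture; the dimensions, the linear projections standing in for real encoders, and the function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 64  # shared embedding size (arbitrary for this sketch)

# Hypothetical per-modality encoders: plain linear projections standing in
# for real text/image encoders, mapping each modality into one shared space.
W_text = rng.standard_normal((128, EMBED_DIM))   # text features -> shared space
W_image = rng.standard_normal((256, EMBED_DIM))  # image features -> shared space

def embed(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project modality-specific features into the shared space, L2-normalized."""
    z = features @ projection
    return z / np.linalg.norm(z)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two normalized embeddings, in [-1, 1]."""
    return float(a @ b)

# Stand-ins for encoded text and encoded image-patch features.
text_features = rng.standard_normal(128)
image_features = rng.standard_normal(256)

t = embed(text_features, W_text)
i = embed(image_features, W_image)
print(f"cross-modal similarity: {similarity(t, i):.3f}")
```

In a trained system (CLIP-style contrastive learning, for instance) the projections would be learned so that related inputs from different modalities land near each other; here they are random, so the score is only a demonstration of the comparison mechanism.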