an_oddbody t1_ixxijnr wrote

This is probably not what you were hoping to hear, but there is currently a lot of debate about what it would mean for an AI to have true "understanding" as we know it. I have used it here to loosely mean having a functional recognition of the laws of the world the AI is exposed to. That is, by observing the world around it, the AI could apply these laws to make reasonable predictions about the state of that world, draw connections between the states of various objects, and generally assess how systems governed by those laws operate.

Some people will say "Oh, but there's the Turing Test, right?" And yes, that's true. But the Turing Test only checks the degree of confidence people have that they are interacting with an understanding being. A program may carry on a 20-minute conversation about a variety of topics without truly understanding any of them. Just like how I can have a convincing 20-minute conversation with my in-laws about football, despite having no idea what any of the rules of football are. The program and I simply know which words to put together to seem natural.

Quanta Magazine has some great articles that touch on the complexity of this issue. If you have some time, I recommend checking them out.

What Does It Mean for AI to Understand?

Machines Beat Humans on a Reading Test. But Do They Understand?

And there are others, but these should be quite approachable.