
druffischnuffi t1_iylvkt0 wrote

I still do not get why people keep saying that AI is "not truly learning" or "not actually intelligent".

They always invent some arbitrary criteria that a "true AI" would need to satisfy, for example that it must learn affine transformations without being taught to, or that it must be immune to adversarial attacks.

If you think you are truly learning because your brain figured out affine transformations on its own, try reading a book upside down.
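And "reading a book upside down" is itself just an affine transformation of the input. A minimal NumPy sketch (the page corner coordinates are made up for illustration):

```python
# A 180-degree rotation (a book held upside down) is the affine map
# x -> A @ x + b with A = -I and a translation b back onto the page.
import numpy as np

A = np.array([[-1.0, 0.0],
              [0.0, -1.0]])  # linear part: rotate by 180 degrees
b = np.array([1.0, 1.0])     # translation: shift back into the unit page

corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
flipped = corners @ A.T + b  # each corner maps to the opposite corner
print(flipped)
```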

5

Difficult-Race-1188 OP t1_iylz2aw wrote

What people mean when they say AI is not truly learning is that the most impressive results often come from extremely big models. For example, almost all of the top AI scientists take digs at large language models, because we don't know whether they actually learned something or just memorized all the possible combinations. The reason people believe AI is not truly learning is that there are papers showing AI was unable to generalize even simple mathematical equations.

For instance, AI was unable to generalize to the simple equation x³ + xy² + y (mod 97).

https://medium.com/aiguys/paper-review-grokking-generalization-and-over-fitting-9dbbec1055ae

https://arxiv.org/abs/2201.02177
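To make the setup concrete, here is roughly what that task looks like as data. A sketch; the 50/50 split is my own choice for illustration, while the paper sweeps over different training fractions:

```python
# Sketch of the modular-arithmetic task: the model is shown pairs (x, y)
# with their labels f(x, y), and "generalizing" means predicting the
# label for pairs it never saw during training.
import itertools
import random

P = 97

def f(x, y):
    return (x**3 + x * y**2 + y) % P

pairs = list(itertools.product(range(P), repeat=2))  # all 97 * 97 inputs
random.seed(0)
random.shuffle(pairs)

cut = len(pairs) // 2
train = [((x, y), f(x, y)) for x, y in pairs[:cut]]  # can be memorized
test = [((x, y), f(x, y)) for x, y in pairs[cut:]]   # requires the rule
print(len(train), len(test))  # 4704 4705
```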

−1

druffischnuffi t1_iym6mjv wrote

I agree. That is very unsatisfactory. I also think that NNs are often overestimated.

However, I think what is lacking in that line of reasoning is a positive definition of true learning: a test that an AI must pass if it is truly learning.

I myself would not consider myself capable of generalizing from a set of samples to the above equation. So does that mean I cannot learn?
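The closest operational stand-in I can think of is "high accuracy on inputs the model never saw, not just on the training set". A sketch; the helper names and the 0.9 threshold are my own invention, not an established criterion:

```python
# "Truly learning" as held-out generalization: memorization alone gives
# high train accuracy, but only inferring the underlying rule also gives
# high accuracy on unseen inputs. `model` is any callable input -> label.
def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def passes_generalization_test(model, train, test, threshold=0.9):
    return (accuracy(model, train) >= threshold
            and accuracy(model, test) >= threshold)
```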

2