economy_programmer_ t1_j098oz2 wrote

I strongly disagree.
First of all, you should define the "philosophical sense of flying", and second of all, try to imagine a perfect robotic replica of a bird's anatomy: why should that not be considered flying? And if it is considered flying, what's the line that divides an airplane, a robotic bird replica, and a real bird? I think you are reducing a philosophical problem to a mechanical problem.

−2

Nameless1995 t1_j09c3f0 wrote

It was a satire.

15

economy_programmer_ t1_j09cozr wrote

I don't think so

−7

Nameless1995 t1_j09eifz wrote

/u/mocny-chlapik thinks the OP's paper is suggesting that LLMs don't understand by pointing out differences between how humans understand and how LLMs "understand". /u/mocny-chlapik is criticizing this point by showing that it is similar to saying aeroplanes don't fly (which they obviously do, under standard convention) just because they fly in a different manner than birds do. Since that form of argument fails in the latter case, we should be cautious about applying the same form to the former. That is their point. If you think it is not a satire meant to criticize the OP, why do you think a comment in r/machinelearning, on a post about LLMs and understanding, is talking about flying?

13

Pikalima t1_j0az5t9 wrote

I don’t know who first used the bird flight analogy, but it’s a somewhat common refutation in the philosophy of AI. That’s just to say, it’s been used before.

1