billy_of_baskerville t1_ivqbeaj wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Thanks for posting!
Just in case people in the community are interested, I also wrote a blog post recently on a related subject, namely whether large language models can be said to "understand" language and how we'd know: https://seantrott.substack.com/p/how-could-we-know-if-large-language
There are at least two opposing perspectives on the question, and one of them (the "axiomatic rejection view") basically adopts the Searle position; the other (the "duck test view") adopts a more functionalist position.
billy_of_baskerville t1_ivqb6e1 wrote
Reply to comment by waffles2go2 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>I think the biggest problem with CRA and even Dneprov's game is that it's not clear what the "positive conception" (Searle probably elaborates in some other books or papers) of understanding should be. They are just quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality and so on so forth" but doesn't elaborate what they think exactly possessing understanding and intentionality is like so that we can evaluate if that's missing.
Well put, I agree.
billy_of_baskerville t1_ivqppb2 wrote
Reply to comment by timscarfe in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Thanks!