billy_of_baskerville t1_ivqbeaj wrote

Thanks for posting!

Just in case people in the community are interested, I also wrote a blog post recently on a related subject, namely whether large language models can be said to "understand" language and how we'd know: https://seantrott.substack.com/p/how-could-we-know-if-large-language

There are at least two opposing perspectives on the question: one of them (the "axiomatic rejection view") essentially adopts Searle's position, while the other (the "duck test view") takes a more functionalist stance.

2

billy_of_baskerville t1_ivqb6e1 wrote

>I think the biggest problem with CRA and even Dneprov's game is that it's not clear what the "positive conception" of understanding should be (Searle probably elaborates in some other books or papers). They are quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality, and so on," but they don't spell out what they think possessing understanding and intentionality actually involves, so we can't evaluate whether it's missing.

Well put, I agree.

1