seventyducks t1_j1z3w3a wrote

I think there's a fallacy here - it may be the case that language involves all manner of symbol manipulation in its extended manifestation, but there is still considerable evidence that LLMs are not fully capable of what we mean when we talk about language. Many capacities are still missing in even the most powerful LLMs. It may be the case that more data, more scale, and some clever tricks will resolve these issues (though I am skeptical), but from what I have seen, LLMs thus far demonstrate a very limited capacity for 'symbol manipulation.' Namely, they show a capacity for generating statistically plausible sequences of letters, but fail in obvious ways on other, more sophisticated forms of symbolic manipulation and reasoning.

I'd be curious to hear whether you agree, or perhaps whether you think the current limitations in symbol manipulation will be overcome with more scale on the same architectures. This was a core question in the AGI Debate hosted by Montreal AI last week, and experts on the subject seem quite divided.
