alsuhr t1_j1zftk4 wrote
Reply to comment by comefromspace in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
Language is an action we take to achieve some short- or long-term intent by affecting others' actions. It just so happens that text data is (mostly) symbolic, so language modeling looks like a pure symbol-manipulation problem. The text these models are trained on consists of observations of language production, where utterances are generated from intent (e.g., wanting to convince someone of an argument, or to sell them something) and context (e.g., what you know about your interlocutor). And this doesn't even cover spoken or signed communication, which is far more continuous.
Intent and context are not purely symbolic. Sure, with infinite observations, that generative structure would be perfectly recoverable. But we are nowhere near that, and humans are perfectly capable of modeling that generative process from very little data and from continuous input (which we learn to discretize).
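To make that distinction concrete, here's a minimal toy sketch (entirely my own framing for illustration; the vocabulary, weights, and variable names are all invented, not anyone's actual model). Intent and context live in a continuous latent space; utterances are the only symbolic output, and a text-only corpus records just those, marginalizing the latents away:

```python
# Toy generative process: continuous latent intent and context emit
# symbolic utterances. A text corpus observes only the utterances.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["buy", "please", "evidence", "therefore", "cheap", "trust"]

# Hypothetical emission weights: how intent and context shape word choice.
W_intent = rng.normal(size=(len(VOCAB), 2))
W_context = rng.normal(size=(len(VOCAB), 2))

def sample_utterance(intent: np.ndarray, context: np.ndarray) -> list[str]:
    """Emit symbolic tokens from continuous latents via a softmax."""
    logits = W_intent @ intent + W_context @ context
    probs = np.exp(logits) / np.exp(logits).sum()
    return list(rng.choice(VOCAB, size=3, p=probs))

# "Persuade" vs. "sell" are points in a continuous space, not symbols.
corpus = [
    sample_utterance(intent=rng.normal(size=2), context=rng.normal(size=2))
    for _ in range(5)
]
print(corpus)  # this marginal over utterances is all a text-only model sees
```

The point of the sketch is just that fitting p(utterance) from the printed corpus is a different problem from recovering the latent intent and context that produced it, which is what the infinite-data argument above is about.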