
Meg0510 t1_ixyiagc wrote

The lack of independence you're referring to comes from the fact that the AI models dominating the current discussion (the GPT series, DALL-E, etc.) are all statistical models that rely entirely on the data they're fed. They don't have built-in mechanisms that would let them generate outputs (whether pictures, sentences, etc.) that go beyond the distribution of their training data.

(As you say, things get more complicated--there are statistical models that require built-in initial biases (e.g. Bayesian models with priors), but I'm putting those aside.)

Critics of modern AI (Noam Chomsky, Gary Marcus, etc.) therefore argue that these approaches will never achieve human-level intelligence, because human minds aren't blank slates that rely entirely on external data--they come with innate mechanisms that let them generate outputs even without exposure to the relevant data.

For example, research has shown that kids growing up with severely impoverished linguistic input--e.g. exposed only to a rudimentary pidgin (look up "pidgin" and "creole" languages, if you're interested)--will effectively invent their own full language, one that ends up having the same kind of underlying structure as established languages like English. (This is a version of the argument from the "Poverty of the Stimulus"--the data available in a child's environment are too sparse to explain how the child learns the language of their community, so there must be an innate language faculty--dubbed "Universal Grammar"--that makes the learning possible.)

Notice this is simply impossible for statistical models--a statistical model doesn't do anything if it isn't fed data; it doesn't generate anything by itself (much less a full-fledged human language). But proponents of modern AI approaches (Yann LeCun, Yoshua Bengio, Demis Hassabis, etc.) argue that statistical models can achieve human-level intelligence, and most argue that scaling is all you need--i.e. you just need more data. Their views stem from the empiricist tradition: the mind is the product of the external data it's exposed to, no innate mechanisms needed.
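To make the data-dependence point concrete, here's a toy sketch in Python--a bigram language model of my own invention for illustration, not any of the systems above. Every token it emits comes from its training corpus, and with an empty corpus it can't produce anything beyond the seed word:

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count which token follows which across a list of tokenized sentences."""
    table = defaultdict(list)
    for sentence in corpus:
        for prev, nxt in zip(sentence, sentence[1:]):
            table[prev].append(nxt)
    return table

def generate(table, seed, max_len=10):
    """Sample a continuation; every emitted token was seen in training."""
    out = [seed]
    for _ in range(max_len):
        followers = table.get(out[-1])
        if not followers:  # no data for this context -> nothing to say
            break
        out.append(random.choice(followers))
    return out

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
print(generate(train_bigram(corpus), "the"))  # e.g. ['the', 'dog', 'barks']
print(generate(train_bigram([]), "the"))      # ['the'] -- no data, no generation
```

The real systems are vastly more sophisticated, but the same basic dependence on training data applies.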

So yes, your concern is very much debated--though most people in the modern AI community are on the data-centric side. The empiricists (the data-centric camp) argue that more data is all you need to achieve a fully generative system. The nativists (those who argue for innate mechanisms) counter that to achieve the generative capacity of a human being--which, as you say, largely seems to be "independent" of external inputs--you first need to flesh out the built-in mechanisms human beings seem to have. So I'd look into those discussions if you want to probe further.
