Meg0510 t1_ixyiagc wrote

The lack of independence you're referring to comes from the fact that the AI models dominating the current discussion (the GPT series, DALL-E, etc) are all statistical models that rely entirely on the data they're fed. They don't have built-in mechanisms that would let them generate outputs (whether pictures, sentences, etc) that go beyond the data they were trained on.
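To make that concrete, here's a minimal sketch (toy Python, with a made-up two-sentence corpus) of the simplest possible statistical language model--a bigram model. It can only ever emit words it has already seen, which is exactly the sense in which these models can't go beyond their training data:

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which
# in a (made-up) training corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follow = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev].append(nxt)

def generate(start="the", length=6):
    word, out = start, [start]
    for _ in range(length - 1):
        # The model can only pick a continuation it has already seen;
        # a word absent from the corpus can never be produced.
        word = random.choice(follow[word]) if follow[word] else word
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug"
```

GPT-style models are vastly more sophisticated (they generalize across contexts rather than memorizing pairs), but the dependence on training data is the same in kind.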

(As you say, things get more complicated--there are statistical models that build in initial biases (e.g. Bayesian models with their priors), but I'm putting those aside.)

Critics of modern AI (Noam Chomsky, Gary Marcus, etc) therefore argue that these approaches will never achieve human-level intelligence, because human minds aren't blank slates relying totally on external data--they come with innate mechanisms that let them generate outputs even when they've never been exposed to the relevant data.

For example, research has shown that kids who grow up with severely impoverished linguistic input--e.g. children exposed only to a rudimentary "pidgin"--will effectively invent their own full language (a "creole") that ends up having the same kind of underlying structure as established languages like English; look those terms up if you're interested. (This is a case of the "Poverty of the Stimulus" argument--which states that there aren't enough data in a child's environment to account for the language they end up acquiring, so there must be an innate language faculty--dubbed "Universal Grammar"--that allows them to learn it.)

Notice this is simply impossible for statistical models--a statistical model does nothing if it isn't fed data; it generates nothing by itself (much less a full-fledged human language). But proponents of modern AI approaches (Yann LeCun, Yoshua Bengio, Demis Hassabis, etc) argue that statistical models can achieve human-level intelligence, and many argue that scaling is all you need--i.e. you just need more data. Their views stem from the empiricist tradition, on which the mind is a product of the external data it's exposed to--no innate mechanisms needed.

So yes, your concern is very much debated--though most people in the modern AI community are on the data-centric side. The empiricist (i.e. data-centric) camp argues that more data is all you need to get a fully generative system. The nativist camp (i.e. those who posit innate mechanisms) argues that to achieve the generative capacity of a human being--which, as you say, seems largely "independent" of external inputs--you first need to flesh out the built-in mechanisms humans seem to come with. So I'd look into those discussions if you want to probe further.

3

Meg0510 t1_iwgb5cd wrote

Not an expert (just a humble physics bachelor's) but the problem isn't whether you can "fit" one into the other: the open interval (0,1) is in one-to-one correspondence with the whole real line, for example. The problem is that no infinite system can be simulated by us mortal finite beings.
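(If you want an explicit example of such a correspondence, the standard textbook one is the tangent map:

```latex
f : (0,1) \to \mathbb{R}, \qquad f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right)
```

which stretches the open interval bijectively over the entire real line.)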

For a slightly more boring situation: the Church-Turing-Deutsch principle states that if every physical process can be completely described by quantum mechanics, then every finitely realizable physical system can be simulated by a universal quantum computer. So if you don't mind a reasonable finite approximation to whatever physical system you're interested in, and you have the means to build a machine that can simulate it, there you have it.
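As a toy illustration of what simulating a finitely realizable system looks like--here a classical sketch using numpy/scipy, with a two-level Hamiltonian and coupling strength made up for the example:

```python
import numpy as np
from scipy.linalg import expm

# Toy finite quantum system: a single qubit driven by a Pauli-X
# Hamiltonian (hbar = 1, coupling strength chosen arbitrarily).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sigma_x
psi0 = np.array([1, 0], dtype=complex)  # start in |0>

for t in np.linspace(0, np.pi, 5):
    # Schrodinger evolution: |psi(t)> = exp(-i H t) |psi(0)>
    psi = expm(-1j * H * t) @ psi0
    p1 = abs(psi[1]) ** 2  # probability of measuring |1>
    print(f"t = {t:.2f}  P(|1>) = {p1:.3f}")
```

The catch is that this classical approach scales exponentially with the number of particles, which is part of why a universal *quantum* computer is the natural machine for the job.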

Something else I heard in passing: apparently a string theorist found what look like error-correcting codes in his equations (https://www.space.com/32543-universe-a-simulation-asimov-debate.html). Now I don't know what that means, cuz I know 0 string theory. But maybe it shows our universe itself is simulated (wherever the damn thing is running), who knows lol

3

Meg0510 t1_itvx3ya wrote

> what sort of "future-proof" field(s) should I be looking into as a way to maintain (for lack of a better term) viability?

Yes--hence an answer to the question posed in the title

3

Meg0510 t1_ituupxu wrote

Chess I think is a great example. It's been 25 years since Kasparov was defeated by a program--but did human chess players get replaced by digital ones?

No--in fact, chess is livelier than ever. And I think this extends to other areas--we're never going to watch F1 raced by self-driving cars, 100-meter dashes run by super-running bots, Jeopardy played by super search engines, etc.

(One can ofc envision a future where robots have their own sports--maybe a 100-mile race run by super-running robots could be interesting, idk. But we value competition because the players are human beings, and we're impressed by their performance precisely because it's hard for other human beings to match--otherwise Magnus Carlsen would be of no interest to the world.)

So human-vs-human competition seems to be at least one domain that's irreplaceable by machines, human nature being such that we value competing with other people.

Edit: spelling

6