Submitted by NiahBoahCoah t3_z5p0oe in Futurology

Disclaimer: I am still a high school student and don't know much about this subject. This is just a thought I had.

I don't think we have achieved artificial intelligence. Alexa, Google, or any of these algorithm-dependent programs don't seem like true AI to me. Is Siri really AI? What it is basically doing is hearing your command and putting it into a Google search bar. I don't think that is AI. If you ask it a question, it will most likely refer you to another website. If this is AI, then me going to Safari and looking up YouTube is also AI. Even advanced NASA robots don't actually have intelligence. They are just presented with a scenario, which is run through a complex algorithm that tells them what to do. (I'm sure it's way more complicated than this, but you get the point.)

I think AI really should stand for artificial independence: a system that can learn on its own without a fixed algorithm. It shouldn't need to rely on the internet, and it should be able to think, learn, and feel emotion on its own. I know this might not even be possible, but I don't think we can call what we have now true AI.

0

Comments


Shoban_Gunzeye t1_ixxaz4e wrote

AI is not done building the blocks to become fully capable on its own. What you are trying to explain is a very feasible idea, but once that Pandora's box is open, what will happen to humanity?

1

NiahBoahCoah OP t1_ixxbp3d wrote

That is a problem I came across too. If they become smarter than us and can have emotions, some of them will do bad things. We could restrict them from doing so, but I don't think it is true AI if something is restricting it from thinking a certain way. So this leads to a more philosophical problem that will be really hard to solve.

1

clusterlizard99 t1_ixxe4cd wrote

You are saying that we don't have strong AI with consciousness, which is true. But the field of AI also includes various algorithmic tasks such as pattern, speech, and image recognition, among many others. We can't currently mimic the complete human brain in software, but we can replicate many of the complex tasks it performs.
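To give a flavour of what "algorithmic task" means here, below is a toy pattern-recognition sketch in Python: it classifies 2D points by distance to the average of some labelled examples. Real speech and image recognizers are vastly more complex; the cluster names and data points are made up purely for illustration.

```python
from math import dist

# Invented labelled examples: two clusters of 2D points.
training = {
    "cluster_a": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "cluster_b": [(4.0, 4.2), (3.9, 3.8), (4.2, 4.1)],
}

# "Learn" one centroid (average point) per labelled cluster.
centroids = {
    label: (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
    for label, pts in training.items()
}

def classify(point):
    """Return the label whose centroid is closest to the point."""
    return min(centroids, key=lambda label: dist(point, centroids[label]))

print(classify((1.0, 1.0)))  # -> cluster_a
print(classify((4.0, 4.0)))  # -> cluster_b
```

No consciousness involved, but it learns a pattern from examples and applies it to new inputs, which is the sense of "AI" being used.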

6

MrCat-1223 t1_ixxyg0a wrote

Even scientific calculators are AI, in a sense. It's just that as we advanced, we stopped considering them to be. And yeah, the real AI is not yet on this planet.

0

iNstein t1_ixya89p wrote

Go onto YouTube and search for GPT-3. After your mind is blown away by that, search for LaMDA. Then search for DALL-E 2, Stable Diffusion, and Midjourney. Finally, consider that GPT-4 is expected to be released in the next few months and is rumoured to be very significantly better than GPT-3.

2

NiahBoahCoah OP t1_ixyadm9 wrote

I have seen AI-generated art and it is certainly impressive, but it seems more like you give them a prompt, and they go onto the internet, find the most related pictures, and morph them into one picture.

2

Meg0510 t1_ixyiagc wrote

The lack of independence you're referring to comes from the fact that the AI models dominating the current discussion (the GPT series, DALL-E, etc) are all statistical models that rely on the data they're fed. So they don't have the built-in mechanisms that would allow them to generate outputs (whether pictures, sentences, etc) that go beyond the dataset they were exposed to.

(As you say, things get more complicated--there are statistical models that require inputting initial biases (e.g. Bayesian models, etc), but I'm putting those aside.)
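To make that data dependence concrete, here's a toy sketch: a word-bigram sampler in Python. GPT-scale models are enormously more sophisticated, but the reliance on exposed data is the same in kind. The corpus is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny invented corpus; stands in for a model's training data.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": record which word has been observed following which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    """Sample a continuation using only observed word pairs."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no data for this word: the model is stuck
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))    # e.g. "the cat sat on the mat the"
print(generate("zebra"))  # just "zebra" -- never seen, nothing to say
```

Notice the sampler can only ever emit words it has already seen, and it produces nothing at all for an unseen word, which is exactly the point about statistical models made below.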

Critics of modern AI (Noam Chomsky, Gary Marcus, etc) therefore argue that modern AI approaches will never achieve human-level intelligence because human minds aren't blank slates that rely totally on external data--there are lots of innate built-in mechanisms that allow them to generate outputs, even if they've never been exposed to the relevant data.

For example, research has shown that kids growing up in an environment where there's no linguistic input (look up "pidgin" and "creole" languages, if you're interested) will simply invent their own language which ends up having the same underlying structure as other languages (like English). (This is a case of the argument by "Poverty of the Stimulus"--which states that there aren't enough data in a child's environment to learn the language of his community, so there must be a built-in innate language faculty--dubbed "Universal Grammar"--that allows him to learn the language.)

Notice this is simply impossible for statistical models--a statistical model doesn't do anything if it's not fed any data; it doesn't generate anything by itself (much less a full-fledged human language). But proponents of modern AI approaches (Yann LeCun, Yoshua Bengio, Demis Hassabis, etc) argue that statistical models can achieve human-level intelligence, and most argue that scaling is all you need--i.e. you just need more data. Their views stem from the empiricist tradition that the mind is the product of the external data it is exposed to--no innate mechanisms needed.

So yes, your concern is very much debated--though most people in the modern AI community are on the data-centric side. The empiricist (i.e. data-centric) people argue more data is all you need to achieve a fully generative system. The nativist (i.e. in support of innate mechanisms) people argue that in order to achieve the generative capacity of a human being--which largely seems to be "independent" of external inputs as you say--you need to first flesh out the built-in innate mechanisms that human beings seem to have. So I'd look into those discussions if you want to probe further.

3

nocofoconopro t1_iy2q97j wrote

How different is a human from a computer? People arrive with code. It's all code.

1

nocofoconopro t1_iy4g2my wrote

Parts of human DNA (genes) hold the code. RNA deciphers and translates the code. Your OG human cells knew what to do because of those instructions, the code, which told them how to build the proteins that are you. Many will debate the facets of this question and bring human error into the conclusion. Humans are self-building. Humans are an intelligence.
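As a toy illustration of that "code" analogy, here is a sketch of the translation step in Python. Only four of the 64 real codons are included, and real cellular machinery is of course far richer than a lookup table.

```python
# Map a few RNA codons (three-letter "words") to amino acids.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna):
    """Read codons left to right until a stop codon or end of string."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("AUGUUUGGCUAA"))  # -> Met-Phe-Gly
```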

1