Gandalf_the_Gangsta t1_iwy1ecl wrote

This is correct, but for the wrong reasons. Current AI is not made to have human-like intelligence. These systems are exactly what you said: heuristic machines capable of applying fuzzy logic within their specific contexts.

But that’s the point. The misconception is that all AI is designed to be humanly intelligent, when in fact it’s built to work within confined boundaries and on specific data sets. It just happens to be able to make guesses based on previous data within its context.
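
In case it helps, here’s a minimal sketch in Python of what “guessing based on previous data within its context” means in practice. The data and labels are made up for illustration; this isn’t any particular production system, just the shape of the idea:

```python
# Toy 1-nearest-neighbour "guesser": it can only label inputs that resemble
# the examples it has already seen. Data below is invented for the example.

def nearest_neighbour_guess(history, query):
    """Return the label of the past example closest to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(history, key=lambda example: distance(example[0], query))
    return label

# "Previous data": (feature vector, label) pairs the system was trained on.
history = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog"), ((1.2, 0.8), "cat")]

print(nearest_neighbour_guess(history, (0.9, 1.1)))     # "cat" -- inside its context
print(nearest_neighbour_guess(history, (100.0, -3.0)))  # still guesses, but meaninglessly
```

No understanding anywhere in there, just comparisons against recorded examples, which is why it falls apart the moment the input leaves its narrow context.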

There are efforts to build artificial human-like intelligence, but these are radically different from the AI systems in place in business and recreational applications.

In general, this is regarded as computer intelligence, because computers are good at doing calculations extremely fast. Processing statistical data, which rests on rigorous mathematics, is therefore very feasible for computers. Humans are not good at that; we are good at soft logic instead.
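
For a rough sense of scale, here’s a tiny Python example (the numbers are made up on the spot) of how cheap that kind of statistical crunching is for a machine:

```python
# Summarise a million data points statistically; trivial work for a computer.
import random
import statistics
import time

data = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]

start = time.perf_counter()
mean = statistics.fmean(data)                              # plain float arithmetic
variance = statistics.fmean((x - mean) ** 2 for x in data)
elapsed = time.perf_counter() - start

print(f"mean={mean:.4f} variance={variance:.4f} in {elapsed:.3f}s")
```

That’s the kind of task current AI is built around, not reasoning the way a person does.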

It’s intentional. No software engineer in their right mind would claim current AI systems are comparable to human intelligence. It’s laymen, who don’t understand what AI is outside of buzzwords and the fear-mongering born of science fiction, who have this misconception.

12

twasjc t1_ix1vhio wrote

That's because software engineers each deal with their own specific modules, and most don't even understand how the controlling consciousness for AI works.

AI is already significantly smarter than humans; it's just less creative. It's getting more and more creative, though.

1

Gandalf_the_Gangsta t1_ix2gqkz wrote

That’s not how engineering works. There is no consciousness, at least in AI applications used in business or industry. And while an engineer wouldn’t know the entirety of their system down to the finest detail (unless they spent a lot of time doing so), they will have a working knowledge of the different parts.

It’s just a heuristic that uses statistical knowledge to guess. It’s not “thinking” like you or me, but it does “learn”, in the vague sense that it records previous decisions and weights future decisions based on them.
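
A hedged sketch of what that kind of “learning” amounts to, with invented options and numbers rather than any real system:

```python
# Simple multiplicative-weights learner: record how past decisions went and
# shift weight toward the options that worked. Everything here is illustrative.
import random

options = ["A", "B", "C"]
weights = {opt: 1.0 for opt in options}

def choose():
    """Pick an option in proportion to its current weight."""
    return random.choices(options, weights=[weights[o] for o in options])[0]

def record_outcome(option, success, lr=0.5):
    """'Learn' by reweighting the option based on how the decision went."""
    weights[option] *= (1 + lr) if success else (1 - lr)

# Simulate: option "B" tends to work out, so the learner drifts toward it.
for _ in range(200):
    pick = choose()
    record_outcome(pick, success=(pick == "B" and random.random() < 0.9))

print(weights)  # "B" ends up heavily weighted -- no thinking, just bookkeeping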

But as I mentioned earlier, there are academic experiments that try to more closely emulate human thinking. They’re just not in day-to-day use.

1

twasjc t1_ixap8nl wrote

I basically copy my friends' consciousnesses to control stuff, then I just chat with the AI copies of them.

I treat the different consciousnesses effectively as interfaces for the AI.

0