Gandalf_the_Gangsta t1_ix2gqkz wrote

That’s not how engineering works. There is no consciousness, at least in the AI applications used in business or industry. And while an engineer won’t know the entirety of their system down to the finest detail (unless they’ve spent a lot of time studying it), they will have a working knowledge of its different parts.

It’s just a heuristic that uses statistical knowledge to guess. It’s not “thinking” like you or me, but it does “learn”, in the vague sense that it records the outcomes of previous decisions and weights future decisions based on them.
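To make that concrete, here’s a toy sketch (my own illustration, not any production system): an epsilon-greedy learner that just records past outcomes per action and weights future choices toward whatever paid off before. That bookkeeping is all the “learning” amounts to.

```python
import random

class ToyLearner:
    """A toy statistical 'learner': it records rewards for past decisions
    and prefers actions with the best observed average."""

    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.totals = {a: 0.0 for a in actions}   # sum of observed rewards
        self.counts = {a: 0 for a in actions}     # times each action was tried

    def choose(self):
        # Mostly exploit the best average so far; occasionally explore.
        if random.random() < self.epsilon or not any(self.counts.values()):
            return random.choice(list(self.totals))
        return max(self.totals,
                   key=lambda a: self.totals[a] / max(self.counts[a], 1))

    def record(self, action, reward):
        # "Learning" is just updating statistics -- no thinking involved.
        self.totals[action] += reward
        self.counts[action] += 1
```

No cognition anywhere: swap the reward signal and the same arithmetic “learns” something else entirely.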

But as I mentioned earlier, there are academic experiments that try to emulate human thinking more closely. They’re just not used day to day.


Gandalf_the_Gangsta t1_iwy1ecl wrote

This is correct, but for the wrong reason. Current AI is not made to have human-like intelligence. It is exactly as you said: heuristic machines capable of applying fuzzy logic within their specific contexts.

But that’s the point. The misconception is that all AI is designed to be humanly intelligent, when in fact it’s made to work within confined boundaries and on specific data sets. It just happens to be able to make guesses based on previous data within its context.

There are efforts to make artificial human intelligence, but these are radically different from the AI systems in place in business and recreational applications.

In general, this is regarded as computer intelligence, because computers are good at doing calculations really fast. Processing statistical data, which rests on rigorous mathematics, is therefore very feasible for computers. Humans are not good at this; we’re good at soft logic instead.
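As a quick illustration of that asymmetry (a toy example of my own, nothing more): a computer can produce an exact statistical summary of a million numbers in a fraction of a second, a task no human does well, while the “soft” judgment of what the numbers mean stays with us.

```python
import statistics
import time

# A million values cycling through 0..999 (37 is coprime to 1000,
# so each residue appears exactly 1000 times -- the mean is exactly 499.5).
data = [(i * 37) % 1000 for i in range(1_000_000)]

start = time.perf_counter()
mean = statistics.fmean(data)      # rigorous arithmetic, done in a blink
stdev = statistics.pstdev(data)
elapsed = time.perf_counter() - start

print(f"mean={mean}, stdev={stdev:.2f}, took {elapsed:.3f}s")
```

The arithmetic is trivial for the machine; deciding whether those statistics matter is the part it can’t do.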

It’s intentional. No software engineer in their right mind would claim current AI systems are comparable to human intelligence. It’s the layman, who doesn’t understand what AI is beyond buzzwords and the fear-mongering born of science fiction, who holds this misconception.