
DukkyDrake t1_j6ut4m2 wrote

0

CertainMiddle2382 t1_j6vwbrd wrote

Well, we don’t actually know what “thinking” is.

And as the most abstract of human productions, language seems like a great place to find out…

4

purepersistence OP t1_j6w2xl7 wrote

Starting with language is a great way to SIMULATE intelligence or understanding by grabbing stuff from a bag of similar text that humans have uttered in the past.

The result will easily make people think we're further ahead than we really are.
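For concreteness, here's a minimal sketch of that "bag of similar text" idea (my own toy illustration in Python, not anything from the thread): a bigram model that continues a prompt purely by sampling which words followed which in its corpus, with no grammar or meaning anywhere.

```python
import random
from collections import defaultdict

# Toy "bag of similar text": continue a prompt by sampling, for each
# word, whatever words followed it in past utterances. Pure statistics.
corpus = "the duck swims and the duck quacks and the dog barks".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:          # dead end: this word was never followed
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("the"))      # e.g. "the duck quacks and the dog"
```

Real LLMs condition on far more context than one word, but the principle is the same: the output looks fluent because the training text was fluent, not because anything was understood.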

2

CertainMiddle2382 t1_j6wwyvp wrote

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

In all honesty, I don’t really know if I’m really thinking/aware, or just a biological neural network interpreting itself :-)

2

purepersistence OP t1_j6x005a wrote

>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

The problem is that people believe that. With ChatGPT it just ain't so. I've given it lots of coding problems, and it frequently generates bugs. I point out the bugs and sometimes it corrects them; the reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often or more, it agrees with me about the bug, but its next change fucks up the code even more. It has no idea what it's doing. But it's still able to give you a very satisfying answer to lots and lots of queries.
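To illustrate the failure mode (a hypothetical sketch of my own, with binary search as the stand-in problem, not verbatim model output):

```python
def binary_search_v1(items, target):
    """First attempt: crashes when target is larger than every element."""
    lo, hi = 0, len(items)          # bug: hi should start at len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:    # IndexError when mid == len(items)
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_v2(items, target):
    """The 'fix' after the crash is pointed out: no more IndexError, but
    now it can skip a valid match (e.g. target 1 in [1, 2, 3])."""
    lo, hi = 0, len(items)
    while lo < hi:                  # loop condition changed...
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1            # ...but this update wasn't, so the
    return -1                       # search can jump past the answer
```

Each version looks plausible in isolation. The second answer reads like a confident correction, yet it trades one bug for another, which is exactly the pattern I keep running into.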

1