# **DragonForg**
t1_jdkb8w9 wrote

Reply to comment by **Maleficent_Refuse_11** in **[D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments** by **QQII**

This is fundamentally false. Here is why.

In order to prove something and then prove it incorrect, you need distinct guidelines. Take gravity: there are plenty of equations, plenty of experiments, etc. We know what it looks like, what it is mathematically, and so on. So if we take a computational version of gravity, we have a reliable comparison method. Someone can say this game's gravity doesn't match ours because we have distinct proofs for why it doesn't.

However, what we are trying to prove/disprove is something we have ZERO BASIS for. We barely understand the brain, or consciousness, or why things emerge the way they do. We are nowhere near close enough to make strict definitions of theory of mind or creativity. The only comparison is whether it mimics ours the most.

Stating "it doesn't follow my version of theory of mind" is ridiculous. It's the same as saying "my God is real and yours isn't": your basis for why we have creativity is not grounded in a distinct, proven definition but rather in an interpretation of your experiences studying/learning it.

Basically, our mind is a black box too: we only know what comes out, not what happens inside. If both machine and human get the same input and produce the same output, it legitimately doesn't matter what happens inside. Unless we can PROVE how the brain works, down to exact definitions, input and output data is sufficient for a proof. Otherwise AI will literally kill us because we keep obsessing over these definitive answers.

It's like arguing over whether nukes can do this or that instead of focusing on the fact that a nuclear weapon can destroy all of humanity. The power of these tools, just like nuclear weapons, shouldn't be understated because of semantics.
