
gahblahblah t1_j8c9oc4 wrote

Okay. You have rejected this theory of mind test. How about rephrasing/replacing this test with your own version that doesn't suffer the flaws you describe?

I ask this because I have a theory that whenever someone posts a test result like this, there will always be people who look for an excuse to say that nothing has been shown.

11

pitchdrift t1_j8cjr86 wrote

I think rephrasing so that they speak more naturally would help - who would really just say, "Great!" in this scenario? It's an immediate flag that this is not an exchange of sincere emotion, even without all the other exposition.

3

Fit-Meet1359 OP t1_j8cqpdb wrote

Happy to re-run the test with any wording you can come up with. I know my question wasn’t perfect and didn’t expect the post to blow up this much.

2

vtjohnhurt t1_j8d9k88 wrote

> Immediate flag that this is not an exchange of sincere emotion

Only if your AI is trained on a bunch of Romance Novels. The emotion behind 'Great!' depends on the tone. The word can express a range of emotions. Real people say 'Great' and mean it sincerely, sarcastically, or some other way.

1

ArgentStonecutter t1_j8dc7n9 wrote

> Real people say 'Great' and mean it literally.

This too. And it's over a phone call. There's no body language, he may have been busy and she's interrupting him, and he may have been driving as well.

1

vtjohnhurt t1_j8daii7 wrote

I don't think we can confirm that the AI has thoughts or a mind from this kind of test, no matter how well written. What the AI writes in response may convince you, or it may not.

1

gahblahblah t1_j8eyn2a wrote

This is what I was checking - that no matter how well it replied, and no matter how complex or nuanced the question, you would not find anything proved. That is what I thought.

1

ArgentStonecutter t1_j8dbyvi wrote

> How about rephrasing/replacing this test with your own version that doesn't suffer the flaws you describe?

How about not asking people to do work for free?

−1

gahblahblah t1_j8fc3fk wrote

The point of my reply was to get the critiquer to admit that there was actually no form of prompt that would satisfy them - which it did.

1

ArgentStonecutter t1_j8fcm2z wrote

They are not the only people involved in this discussion.

1

gahblahblah t1_j8fj7hb wrote

When you state something that, in and of itself, is obviously true and already known by everyone, it seems like a waste of text, time, and energy for you to write, and for anyone to read.

1

ArgentStonecutter t1_j8fjh6a wrote

But that's not what happened.

1

gahblahblah t1_j8fv6xu wrote

You have not understood my reply. I was describing your reply as useless - it didn't explain anything in a helpful way.

1

ArgentStonecutter t1_j8fyg26 wrote

You came in with this ambiguous scenario, crowing about how it showed a text generator had a theory of mind, because just by chance the text generator generated the text you wanted, and you want us to go "oh, wow, a theory of mind". But all it's doing is generating statistically interesting text.

And when someone pointed that out, you go into this passive-aggressive "oh let's see you do better" with someone who doesn't believe it's possible. That's not a valid or even useful argument. It's a stupid debate-club trick to score points.

And now you're pulling more stupid passive-aggressive tricks when you're called on it.

1

gahblahblah t1_j8gcb5d wrote

Thank you for clarifying your beliefs and assumptions.

>And when someone pointed that out, you go into this passive aggressive "oh let's see you do better" to someone who doesn't believe it's possible. That's not a valid or even useful argument. It's a stupid debate club trick to score points.

Wrong, in many ways. Their criticism was of the particulars of the test - so it appeared there was some form of the test they would judge satisfactory. It was only after I challenged them to produce such a form that they explained that, actually, no form would satisfy them. So you have it backwards: my challenge yielded the useful result of demonstrating that the initial criticism was disingenuous, since everything they criticised could have been different and they still wouldn't have changed their view.

I wasn't being passive-aggressive in asking someone to validate their position with more information - rather, I was soliciting the information needed to determine whether their critique was valid.

Asking for information is not 'a trick to score points', rather, it is the process of determining what is real.

>You came in with this ambiguous scenario and crowing about how it showed a text generator had a theory of mind, because just by chance the text generator generated the text you wanted, and you want us to go "oh, wow, a theory of mind". But all its doing is generating statistically interesting text.

This is a fascinating take that you have. You label this scenario as ambiguous - is there a way to make it not ambiguous to you?

To clarify: if I were to ask the bot a very, very hard, complex, nuanced, subtle question, and it answered with a long-form, coherent, on-point, correct reply - would you still judge this as ambiguous and only a demonstration of 'statistically interesting text', or is there a point where your view changes?

1