
red75prime t1_izxf1q2 wrote

> This is weird.

The model doesn't know what it can and cannot do, so it bullshits its way out. It's not that weird.

7

Ghostglitch07 t1_izy5qmb wrote

It's weird because of how quickly it claims to be unable to do things. In their attempt to make it safer, they severely limited its usability. They drilled in the boilerplate text of "as a large language model trained by OpenAI I can't..." so hard that it throws it out far too often.

9

LetMeGuessYourAlts t1_j035ugy wrote

And if you carry a similar prompt over to the playground and run it on a davinci-003 model, it will still attempt to answer your question instead of just giving up like that. So it's likely something outside the model itself producing that response and then having the model complete the error message. I was wondering whether, when confidence is low, it just defaults to an "I'm sorry..." and then lets the model produce the rest of the error.
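For what it's worth, here is a minimal sketch of the kind of guardrail being guessed at above: a wrapper that scores the model's own confidence (say, by mean token log-probability) and, below some cutoff, starts the stock apology and lets the model finish it. Everything here is hypothetical; `generate_with_logprobs`, the threshold, and the refusal prefix are stand-ins, not anything confirmed about OpenAI's setup.

```python
from typing import List, Tuple

REFUSAL_PREFIX = "I'm sorry, but as a large language model trained by OpenAI I can't "
CONFIDENCE_THRESHOLD = -1.5  # made-up cutoff on mean token log-probability


def generate_with_logprobs(prompt: str) -> Tuple[str, List[float]]:
    """Dummy stand-in for the real completion call so the sketch runs on its own.

    A real version would return the generated text and its per-token log-probs.
    """
    return "a placeholder completion", [-0.3, -2.1, -1.8]


def answer(prompt: str) -> str:
    text, logprobs = generate_with_logprobs(prompt)
    mean_logprob = sum(logprobs) / max(len(logprobs), 1)
    if mean_logprob < CONFIDENCE_THRESHOLD:
        # Low confidence: emit the canned refusal and let the model
        # complete the rest of the error message.
        completion, _ = generate_with_logprobs(prompt + "\n" + REFUSAL_PREFIX)
        return REFUSAL_PREFIX + completion
    return text


print(answer("Can you remember the first thing I said?"))
```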

1

Acceptable-Cress-374 t1_izxfjr3 wrote

It's weird because it worked for me. I've explained above how I got it to expand on previous points.

1

red75prime t1_izxgjcg wrote

It's not weird that it worked, either. The model only has access to roughly the last 3,000 words of the conversation, so it can "remember" recent text. But the model doesn't know that it has that ability, so it can't reliably answer whether it can do it.

If you tell the model that it did just remember the first thing you said, it will probably flip around and apologize for the misinformation. And then, down the line, once that part of the conversation has fallen out of its input buffer, it will make the same error again.
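To illustrate the rolling window: only the most recent part of the conversation is fed back in, and anything older is simply gone. This is a rough sketch that uses a word count in place of the real token limit, and `build_prompt` is an illustrative name, not anything from OpenAI.

```python
CONTEXT_WORDS = 3000  # rough stand-in for the real token budget


def build_prompt(turns):
    """Keep only the most recent words of the conversation that fit the budget."""
    words = " ".join(turns).split()
    return " ".join(words[-CONTEXT_WORDS:])


history = [
    "User: remember the word 'pineapple'",
    "Assistant: Okay, noted.",
    "User: " + "filler " * 4000,  # a long exchange pushes earlier turns out
]
prompt = build_prompt(history)
print("'pineapple' still in context?", "pineapple" in prompt)  # False once over budget
```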

1

Toxhax t1_j00a098 wrote

This is how real humans handle everyday life. The machines really are getting good.

1