Viewing a single comment thread. View all comments

Brilliant_Lychee_86 t1_j1nrlud wrote

I asked it to help me find a sewing pattern for a jumpsuit that would be useful in an apocalypse.

I got the "inappropriate" auto response.

17

GravySquad t1_j1nsc0i wrote

Ugh. So annoying. Make sure to leave negative feedback when it denies a prompt that is not offensive or against TOS. GPT-3 beta playground did not ever do this!

7

jomo666 t1_j1o3dkn wrote

Was there a window of time where everything was available, so long as it was indexable by the AI? Or are you saying the beta was better at parsing queries that were truly inappropriate from those like OP’s, where the ‘inappropriate’ word is contextually not so?

1

GravySquad t1_j1o4ry5 wrote

as i remember (and the beta playground should still be available to test) it would give you a legitimate response to anything that you typed but it would flag you with a TOS warning.

i remember having it generate rather disgusting alien smut (for science) and it just kept going with heavy details. but a red label would appear at the top basically telling me to knock it off.

1

m0fugga t1_j1oowrb wrote

Oof, I should probably read the TOS...

1

jtotal t1_j1o7s56 wrote

Dude, I asked it to give me a bad version of macaroni and cheese, it told me it would be wrong to give out such instructions, followed by a recipe for macaroni and cheese.

I just wanted a quick laugh. I tried variations of the same prompt. Now the AI thinks I'm obsessed with macaroni and cheese.

3

djaybe t1_j1pz7pq wrote

frame the question for theater and ask for the main character in the dramatic movie or play. there are many “jailbreaks” people have discovered. some have been “patched”.

1