neelankatan t1_j54yo4b wrote

Is this ChatGPT just copying jokes from its huge corpus of training data, or actually making these jokes up? If it's the latter, that's fucking amazing

26

Trevor_GoodchiId t1_j5536ux wrote

It's more complex than just copying - it can pick up contexts in which word sequences occur from multiple examples, even if those weren't arranged that specific way originally.

That said, it has no understanding of what meerkats, cards or jokes are - just that this text, in this order, is statistically likely to occur in relation to the user's query.

This works well for narrative content, because there are no strict flow requirements and the result is error tolerant - we give it leeway as readers.
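To make the "word sequences from multiple examples" idea concrete, here's a toy sketch (my own illustration, nothing like ChatGPT's actual transformer architecture - the corpus and function names are made up): a model that only counts which word follows which can still emit sequences it never saw verbatim.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus"
corpus = [
    "the meerkat plays poker",
    "the pelican plays cards",
    "the meerkat holds cards",
]

# Learn, for each word, which words ever followed it
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=4):
    """Generate by repeatedly sampling a statistically plausible next word."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Can produce e.g. "the pelican plays poker" - a sentence that never
# appears in the corpus, recombined purely from observed word pairs.
print(generate("the"))
```

Every adjacent word pair in the output was seen in training, but the full sentence may never have been - which is the "more complex than copying" part, without any understanding of meerkats involved.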

11

attofreak t1_j554sc5 wrote

That's it, and that is why I don't get the mania or paranoia around it. Technically, it's a great achievement and a step in the right direction toward creating a machine better able to understand context. It is really good at ascertaining the right context and use of language. The only "disruptive" event on the horizon is that you can now go to ChatGPT for queries rather than Google search. Already, other software can identify when a student just copies a ChatGPT response for an assignment, so it hasn't really negatively affected academia. It would be something if Google could match that context-awareness in its search algo. It's already quite good, but sometimes it struggles, especially with technical searches.

6

KamikaziAvalanche t1_j555e88 wrote

No, that software can only tell if a text is 100% grammatically correct and error-free. It has a HUGE problem with false positives and is just media hype at the moment.

2

Wolfe114M t1_j556jor wrote

Not just media hype, but hype funded by billionaire investors. There are a lot of fake accounts promoting AI and posting about it.

There's a reason people make accounts to farm karma and sell.

And they will downvote your posts and award the positive ones

0

Talkat t1_j555kru wrote

I certainly wouldn't say it has no idea what a meerkat or cards are.

DALL-E has the same structure as ChatGPT. With DALL-E you can ask for the back of a meerkat, or a stack of cards in the shape of a meerkat.

It deeply understands what the concept is and how to 'draw' it.

So ChatGPT certainly would have a conceptual understanding of ideas.

1

Voctus t1_j553egk wrote

When I’ve asked generically for a joke, it seems like I get an existing joke. But if you give it some specific parameters (“tell me an Ole and Lena joke about flying a kite”) then you get something structured like a joke, but the punchline isn’t funny. The program doesn’t understand humor; it’s just stringing together words that are a “likely” response to your prompt.

9

Themasterofcomedy209 t1_j553pmu wrote

It’s literally just copying jokes. I vividly remember reading the meerkat joke but with a different animal, then telling it to someone years ago.

Someone else in this thread even asked “tell me a joke why aren’t pelicans invited to play poker” and chatgpt just replies “because they always keep their cards close to their chest”

You can argue it’s what humans do, but ChatGPT is not thinking up jokes.

9

flopflipbeats t1_j55rm2t wrote

Sometimes it is; sometimes it’s predicting language that it believes makes sense as a response to your prompt. With the right prompting, you can get it to create completely unique jokes.

1

Archinatic t1_j551b36 wrote

Arguably not too different from the way humans would. It doesn't have a literal library of jokes. It is trained on jokes, and based on that training its network forms a certain logic that is then able to produce jokes on its own.

4

say592 t1_j553em2 wrote

The whole point of ChatGPT is that it's not just showing you information from other sources - everything is "original". It's trying to tell you what it thinks you want to hear based on what it has "observed" in the wild. So it has probably seen the response "It holds its cards too close to its chest" and decided that is a response that would make sense. As iterations go on and it receives feedback, it should get a better idea of how these responses work and whether people like them or not, and it will get better.

Even just a couple of years ago, if you asked a chatbot to write you a poem, you might use one line out of ten, then ask it again, use another line, etc., until you had collected enough responses that made sense or were good. ChatGPT, on the other hand, tends to yield responses that are good enough the first time and can piece together a cohesive poem, story, article, etc.

1

fiftythreefiftyfive t1_j55424h wrote

Chatgpt has the ability to connect concepts (which is what makes it great at essays). It probably has some knowledge about poker, some knowledge about meerkats, and connects the two in a manner that is normal for human joke form.

1

GlassAmazing4219 OP t1_j554mig wrote

It’s not copying - think more along the lines of text prediction, but instead of using your chat history as the model, it uses the internet.
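The text-prediction analogy can be sketched in a few lines (a deliberately simplified illustration with made-up data - real models score tokens with a neural network, not raw counts): given everything seen before, pick the most likely next word.

```python
from collections import Counter

# Stand-in for "training data" - your phone keyboard uses your chat
# history; a large language model uses a huge web-scale corpus.
history = [
    "tell me a joke",
    "tell me a story",
    "tell me a joke",
]

def predict_next(prefix):
    """Return the most frequently observed word following `prefix`."""
    counts = Counter()
    prefix_words = prefix.split()
    for phrase in history:
        words = phrase.split()
        if words[:len(prefix_words)] == prefix_words and len(words) > len(prefix_words):
            counts[words[len(prefix_words)]] += 1
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("tell me a"))  # -> "joke" (seen twice vs "story" once)
```

Scale the history up to a large slice of the internet and the "prediction" starts looking like answers, essays, and jokes.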

1

Techmite t1_j556qzp wrote

It's not connected to the internet to learn from, for good reason (mainly to avoid biased information). It's given datasets from pre-made groups that are carefully chosen by humans.

1

voyyful t1_j554qis wrote

Funny thing is it remembers previous questions and answers, so you can actually ask it if its answer is novel. I asked it to invent a recipe for cookies. It sounded too good to be true, so I asked it where it got it from. It could be lying, though, which would be really scary.

1