
User1539 t1_jcyase7 wrote

Did you read the article, though?

"quote from content created by ChatGPT in their essays"

They're allowed to use it as a source, not to write an entire essay.

20

SirEblingMis t1_jcybhf3 wrote

Yes, but that's still wild to me, since ChatGPT can make shit up and won't itself cite where it came from. It's a language model based on internet data.

Where it gets the data behind what it cites is the issue, and something I can see presenting a problem.

When we read other papers or articles, there's always a bibliography you can use to go check what they based their thoughts on.

19

User1539 t1_jcyh91j wrote

I think it can cite sources if you ask it to, or at least it can find supporting data to back up its claims.

That said, my personal experience with ChatGPT was like working with a student who's highly motivated and very fast, but is only copying other people's work without any real understanding.

So, for instance, I'd ask it to code something ... and the code would compile and be 90% right, but ChatGPT would confidently state 'I'm opening port 80', even though the code was clearly opening port 8080, which is extremely common in example code.
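Roughly this kind of thing (a made-up sketch to show the mismatch, not the actual code it gave me):

```python
# Hypothetical illustration: the model's summary says port 80,
# but the code it wrote actually binds port 8080, the port that
# shows up in most tutorial/example code.
import socket

def start_server():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # ChatGPT's explanation: "I'm opening port 80"
    server.bind(("0.0.0.0", 8080))  # ...but the code opens 8080
    server.listen(5)
    return server

if __name__ == "__main__":
    s = start_server()
    print("Listening on port", s.getsockname()[1])  # prints 8080, not 80
```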

So, you could tell it was copying a common pattern, without really understanding what it was doing.

It's still useful, but it's not 'intelligent', so yeah ... you'd better check those sources before you believe anything ChatGPT says.

3

ErikaFoxelot t1_jcyjuvn wrote

GPT4 is a little better about this, but where it excels the most is when used as a partner, rather than a replacement. You still have to know what you're doing to effectively use what it gives you.

8

User1539 t1_jcz0uft wrote

Yeah, I've definitely found that in coding. It does work at the level of a very fast and reasonably competent junior coder. But it doesn't 'understand' what it's doing; it's as if it's just copying what looks right off Stack Overflow and gluing it all together.

Which, if I need a straightforward function written, might be useful, but it's not going to design applications you'd want to work with in its current state.

Of course, in a few weeks we'll be talking about GPT5 and who even knows what that'll look like?

4

magnets-are-magic t1_jczs8oe wrote

It makes up sources even when you explicitly tell it not to. I've tried a variety of approaches and it's unavoidable in my experience. It will make up authors, book/article/paper titles, dates, statistics, content, etc., and will confidently tell you they're real and accurate.

2

User1539 t1_jczslss wrote

Yeah, that reminds me of when it confidently told me what the code it produced did ... but it wasn't right.

It's kind of weird when you can't say, 'No, can't you read what you just produced? That's not what that does at all!'

1

visarga t1_jd0akyj wrote

This is an artefact of RLHF. The model comes out well calibrated after pre-training, but the final stage of training breaks that calibration.

https://i.imgur.com/zlXRnB6.png

Explained by one of the lead authors of GPT4, Ilya Sutskever - https://www.youtube.com/watch?v=SjhIlw3Iffs&t=1072s

Ilya invites us to "find out" whether we can quickly get past the hallucination phase; maybe this year we'll see his work pan out.
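For reference, "calibrated" here just means the model's stated confidence matches how often it's actually right. A minimal sketch of how that's usually measured (the numbers below are made up for illustration, not taken from the GPT-4 report):

```python
# Expected calibration error (ECE): bucket predictions by confidence
# and compare each bucket's average confidence to its actual accuracy.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece, total = 0.0, len(confidences)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated model at 0.9 confidence is right about 90% of the time.
print(expected_calibration_error([0.9, 0.9, 0.6, 0.6, 0.6], [1, 1, 1, 0, 1]))
```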

1

Ricky_Rollin t1_jczr3hg wrote

In many ways, it's just advanced Google. I'm in a specialty field and have published something that was repeated word for word, as I wrote it, when I asked ChatGPT about the topic.

1

User1539 t1_jczs306 wrote

Yeah, in terms of how people can use it, that's definitely a good description, and I've been asking Google straight-up questions for years already.

I do think it's changing the game for a lot of things, like how customer service bots are going to be actually good now.

1

ground__contro1 t1_jd17jet wrote

Btw, it's a terrible source. It can easily be wrong about established facts. Last week it tried to tell me Thomas Digges posited the existence of alien life. Digges was a pretty early astronomer, working when the church was dominant, so that really surprised me. When I questioned it again, it "corrected" itself and apologized… which, great, but if I hadn't already known enough about Digges to be suspicious, I would have accepted it along with all the other (correct) information.

ChatGPT is awesome, but it's no more a source than Wikipedia. In fact, it's potentially worse, because no one is fact-checking what ChatGPT says to you in real time, whereas there's a chance others will have corrected a wiki page by the time you read it.

1

User1539 t1_jd2la65 wrote

Oh, yeah, I've played with it for coding and it told me it did things it did not do, and couldn't read the code it produced afterwards, so there's no good way to 'correct' it.

It spits out lots of 'work', but it's not always accurate, and people who are used to computers always being correct are going to have to get used to the fact that this is really more like having a personal assistant.

Sure, they're reasonably bright and eager, but sometimes wrong.

I don't think GPT is leading directly to AGI, or anything, but a tool like this, even when sometimes wrong, is still going to be an extremely powerful tool.

When you see GPT passing law exams and things like that, you can see it's not getting perfect scores, but it's still probably more likely to get you the right example of case law than a first-year paralegal, and it does it instantly.

Also, in 4 months it's improved on things like the bar exam about as much as you'd expect a human to improve over 4 years of study.

It's a different kind of computing platform, and people don't know quite how to take it yet. Especially people used to the idea that computers never make mistakes.

2