JaggedMetalOs t1_iwwuox3 wrote

I really have no idea why anyone would think an AI language model trained on scientific papers would do anything other than make up fake scientific papers.

203

kolitics t1_iwxf9ie wrote

Perhaps the real test was whether they would be identifiable as fake.

51

HumanSeeing t1_iwyn6tu wrote

Or.. perhaps.. the real test was the friends we made along the way!

21

gbot1234 t1_ix00g4o wrote

I even found a significant other (p<0.05)!

1

stage_directions t1_iwz7o73 wrote

Yes, they would, when people tried to reproduce or build upon the science and shit didn’t work. That’s how science works.

3

juxtoppose t1_ix0otqu wrote

That’s the way it should work, but a published paper is no guarantee of accuracy. Actually, that’s wrong: it IS the way science works, but scientists are people, and people are corrupt and often wrong. AI is just as likely to be wrong, so far...

2

stage_directions t1_ix0pp34 wrote

I’m a scientist. Depending on the field, it’s not that hard to tell.

1

twasjc t1_ix1unuz wrote

I think it's the wrong idea to have it write papers.

Rather, it should strip the fluff, like gematrix.org but for science papers.

Then start grouping associated data points for processing, and have the AI try to connect the dots between related data points.

Basically, treat the stripped data points as fractals and test in-between points to see if anything checks out. With a proper variance rate, this could be something that improves rapidly.
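
For what it's worth, here's a minimal sketch of what the "connect the dots" step might look like, assuming papers have already been reduced to numeric data points. The sample data, the grouping key, and the naive linear interpolation are all illustrative assumptions, not anything the comment specifies:

```python
# Rough sketch: group "stripped" data points by topic, then interpolate
# between neighboring points to propose in-between values to test.
# All names and values here are hypothetical placeholders.

points = [
    ("conductivity", 300.0, 1.2),   # hypothetical (quantity, x, y) triples
    ("conductivity", 350.0, 1.9),   # extracted from different papers
    ("conductivity", 400.0, 3.1),
]

def midpoints(series):
    """Yield in-between points to test, per the 'connect the dots' idea."""
    series = sorted(series)
    for (x1, y1), (x2, y2) in zip(series, series[1:]):
        yield (x1 + x2) / 2, (y1 + y2) / 2  # naive linear guess

# Group associated data points (here, simply by quantity name).
groups = {}
for name, x, y in points:
    groups.setdefault(name, []).append((x, y))

for name, series in groups.items():
    for x, y in midpoints(series):
        print(f"test {name} at x={x}: predicted y ~= {y}")
```

A real system would need a far better grouping signal than a shared name (embeddings, units, context) and a model-based predictor instead of a linear guess, but the shape of the loop is the same: group, interpolate, flag candidates for verification.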

2

frequenttimetraveler t1_iwy516m wrote

Nobody said they wouldn't.

The galactica.org website had a prominent disclaimer on every page that the content is INACCURATE. But some scientists are so stupid they can't read.

−21