JaggedMetalOs t1_iwwuox3 wrote

I really have no idea why anyone would think an AI language model trained on scientific papers would do anything other than make up fake scientific papers.


kolitics t1_iwxf9ie wrote

Perhaps the real test was whether they would be identifiable as fake.


HumanSeeing t1_iwyn6tu wrote

Or.. perhaps.. the real test was the friends we made along the way!


gbot1234 t1_ix00g4o wrote

I even found a significant other (p<0.05)!


stage_directions t1_iwz7o73 wrote

Yes, they would, when people tried to reproduce or build upon the science and shit didn’t work. That’s how science works.


juxtoppose t1_ix0otqu wrote

That’s the way it should work, but a published paper is no guarantee of accuracy. In fact, that’s wrong: it IS the way science works, but scientists are people, and people are corrupt and often wrong. AI is just as likely to be wrong, so far...


stage_directions t1_ix0pp34 wrote

I’m a scientist. Depending on the field, it’s not that hard to tell.


twasjc t1_ix1unuz wrote

I think it's the wrong idea to have it write papers.

Rather, it should strip the fluff out of science papers.

Then start grouping associated data points for processing, and have the AI try to connect the dots between related ones.

Basically, treat the stripped data points as fractals and test in-between points to see if anything checks out. With a proper variance rate, this is something that could improve rapidly.


frequenttimetraveler t1_iwy516m wrote

nobody said they wouldn't

The website had a prominent disclaimer on every page that the content is INACCURATE. But some scientists are so stupid they can't read.