gudamor t1_j1n40eq wrote

"we tested 20 things, but only this one was significant at p<0.05"?

199

chance909 t1_j1oiyl9 wrote

From the statistics textbook:

"With 20 tests being considered at a p-value of 0.05, we have a 64% chance of one test being significant even if all tests are actually non-significant"

If you have multiple testing, you need a correction like Bonferroni or FDR to avoid this issue.
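A quick sketch of that arithmetic (assuming 20 independent tests, each at α = 0.05): the chance of at least one false positive is 1 − (1 − α)^m, and Bonferroni simply shrinks the per-test threshold to α/m.

```python
# Family-wise error rate for m independent tests at significance level alpha:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha, m = 0.05, 20
fwer = 1 - (1 - alpha) ** m
print(f"{fwer:.2f}")  # 0.64 -- the textbook's 64% figure

# Bonferroni correction: test each hypothesis at alpha/m instead of alpha.
bonferroni_threshold = alpha / m
print(bonferroni_threshold)  # 0.0025

# A p-value of .02, like the one in the paper, does not survive this threshold.
print(0.02 < bonferroni_threshold)  # False
```

Bonferroni controls the chance of *any* false positive and is conservative; FDR methods (e.g. Benjamini–Hochberg) instead control the expected fraction of false discoveries and keep more power when many tests are run.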

72

potatoaster t1_j1o4xis wrote

Wow, you're right. I read the paper fully expecting them to write something about multiple comparisons but nope, there's nothing at all. And the p value that made it through that barrier of probability was .02. Naturally.

70

popplesan t1_j1pegew wrote

It’s an MDPI paper, can’t expect much. I read a paper in one of their journals that explicitly described p-hacking in the methods. The reviewer’s comments were public, except they had no comments. It was mind boggling. Then I saw that the advisor had 100+ papers in sketchy journals, was at a fairly weak university and decided to Google if MDPI was a conglomerate of fake/predatory journals. They’re in a gray area for sure. Some of their journals are laughable, and some are pretty decent. But as a rule of thumb, if I see MDPI I bust out the fine-toothed comb even from their good journals.

12

B_lintu t1_j1qo7gn wrote

Wow, that's so effed up. I was recently reading one of their economics journals with a 3.9 impact factor, and I would never have thought such things would slip into that kind of journal. I thought it was quite decent.

1

cantdecide23 t1_j1ozg76 wrote

As a volleyball player, this kind of confuses me too, in the sense that they say it improves attacking, but then proceed to list off all the components that make up an attack (running to position, jumping, hitting, etc.) as things that are unaffected.

9