
shumpitostick t1_ivlg54a wrote

My take on the replication crisis is that it is something like 60% bad incentives, 35% bad statistics, and 5% malice. The bad incentives are the whole journal system, which rewards getting positive results but does not deeply scrutinize methodology or source data, the lack of incentives for preregistration, the existence of poor-quality journals, etc.

Bad statistics is mostly the fact that people interpret p<0.05 as true and p>0.05 as worthless, and use that threshold as a gate for publishing, rather than treating the p-value as the crude statistical tool it really is. Add to that a generally poor understanding of statistics among most social scientists. I'm currently doing research in causal inference, developing methodology that can be used in social science, and it's embarrassing how slow social scientists are to adopt tools from causal inference. In economics, applications usually lag the research by 10-20 years, but in psychology, for example, researchers often don't even attempt any kind of causal identification and then suggest that their studies somehow show causality.
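To make the p<0.05 point concrete, here's a quick simulation sketch (my own illustration with made-up numbers, not from any real study): when studies are underpowered and only the "significant" ones get through, the published effect sizes come out badly inflated even though the test itself is behaving exactly as designed.

```python
# Illustrative sketch: why filtering on p < 0.05 distorts the literature.
# All parameters below (effect size, sample size, study count) are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2     # small true effect, in standard-deviation units
n_per_group = 30      # underpowered sample size
n_studies = 10_000    # many hypothetical studies of the same effect

effects, pvals = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    effects.append(treated.mean() - control.mean())
    pvals.append(p)

effects, pvals = np.array(effects), np.array(pvals)
sig = pvals < 0.05  # the only studies that would "count" under the threshold

print(f"share reaching p < 0.05:        {sig.mean():.2%}")
print(f"mean effect, all studies:       {effects.mean():.2f}")
print(f"mean effect, significant only:  {effects[sig].mean():.2f}  (inflated)")
```

With these made-up settings, only a small fraction of studies clear p < 0.05, and the ones that do report an average effect around three times the true value. That's the selection problem baked into using significance as a publishing filter.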

Malice is scientists outright faking data or cherry-picking, but even that is tied to the incentive structure. We should normalize publishing negative results.


OceanoNox t1_ivmhma8 wrote

Thank you for your insight. I am in materials engineering, and I emphasize having representative data, but I have heard at conferences that the results shown are sometimes the top outliers, well outside the average.

I completely agree about publishing negative results. Many times I have wondered how many people have tried the same idea, only to find it didn't work, and then did not or could not publish that outcome. And so another team will spend effort and money because nothing was ever reported.
