A few years ago we submitted a paper based on a long, arduous, and painstaking clinical research project. The finding was somewhat unexpected and, we thought, very important. When it was reviewed, one of the reviewers said "this just can't be true," and for that reason the study was rejected. After that it became clear to me that in my field there were findings that were acceptable a priori, while other findings would be rejected. Modern social science has the appearance of the pursuit of truth, but the reality is something far more human.
@victor_venema 6 years ago
If you have a theory and expect a relationship, then finding "no result" is also interesting.
@epickiller30 1 year ago
This seems to ignore the more nefarious, and honestly more realistic, reason why publication bias exists, which is to not publish studies that show outcomes undesirable relative to the consensus.
@gavinmc5285 4 years ago
Maybe there's a gap in the market for a 'Journal of Null Hypotheses'
@_human_1946 1 year ago
Isn't that something you can use arXiv or bioRxiv for?
@oceanwayne7296 1 year ago
Part of the problem has been identified as peer review rings with a vested interest in promoting each other's research, regardless of whether the concept is valid or not, true or not true. The root cause of this is a lack of moral values within academia.
@schottilie 3 years ago
Please check your slide at 6:50: specify whether "hypothesis" means the null or the alternative hypothesis. And depending on your choice, you should probably switch the two illustrations.
@russellhawkins940 1 year ago
"Online only" publication for less exciting news.
@yesmsg429 7 years ago
Question: How do you know only 1 in 400 have the disease? The test would give you 4 people, so there has to be a second test to prove 3/4 were false positives, correct? Wouldn't that therefore become the test, or a two-pronged test, to prove 1:400? I know you are only making a simplistic point, but do you understand the question?
@carlbergstrom9071 7 years ago
If I understand your question correctly: in medicine, tests of this sort will typically be initial screenings. There are often more accurate (and expensive or invasive) follow-up tests that can be used for a more accurate diagnosis. These are used to determine prevalence.
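For readers who want the arithmetic behind the exchange above, here is a minimal sketch of the base-rate calculation. Only the 1-in-400 prevalence comes from the comments; the perfect sensitivity and the false-positive rate (picked so that roughly 4 people per 400 test positive) are illustrative assumptions, not figures from the video.

```python
# Base-rate / false-positive sketch (illustrative numbers).
prevalence = 1 / 400           # P(disease), from the comment above
sensitivity = 1.0              # P(positive | disease), assumed
false_positive_rate = 3 / 399  # P(positive | no disease), assumed so that
                               # about 4 people in 400 test positive

# Bayes' rule: P(disease | positive test)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(f"Expected positives per 400 people tested: {400 * p_positive:.1f}")
print(f"P(disease | positive test):               {ppv:.2f}")
```

With these assumed numbers, 4 of 400 people test positive but only 1 of them actually has the disease, so a positive result carries only a 25% chance of disease, which is the simplistic point the original question refers to.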
@schottilie 3 years ago
Haha, this lecture is a nice piece of b...sh..! It starts with the question of why so many scientific findings can't be replicated, and then answers a completely different question: why it's virtually impossible to infer the truth from a whole body of publications. Replicability is a problem of individual studies, while publication bias is about the total (or a relevant share) of publications addressing a specific topic. Ioannidis' argument explains low replicability; that's the true culprit here (together with sloppily executed or reported studies, etc.)!
@zuhaz3393 2 years ago
Doesn't the connection make sense? Positive studies are more likely to be published in journals, and a lot of them don't replicate, because many of them tend to be flukes that got published due to publication bias... Or maybe you mean that the problem is that all the negative replications don't get published?
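One way to make that connection concrete is an Ioannidis-style back-of-the-envelope calculation. The sketch below uses purely illustrative values for the prior probability that a tested hypothesis is true, the statistical power, and the significance threshold; none of these numbers come from the lecture.

```python
# Ioannidis-style calculation: among significant ("positive") results,
# what fraction reflect a true effect? All numbers are assumptions.
prior_true = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80        # P(significant result | hypothesis true)
alpha = 0.05        # P(significant result | hypothesis false)

true_positives = power * prior_true
false_positives = alpha * (1 - prior_true)
ppv = true_positives / (true_positives + false_positives)

print(f"Share of positive findings that are true: {ppv:.2f}")
# ~0.64 with these assumptions; with longer prior odds or lower power,
# most positives are flukes.
```

If journals preferentially publish the significant results, readers see mostly this mixed pool of true effects and flukes, and the flukes are exactly the published findings that later fail to replicate, which is how publication bias and low replicability end up tied together.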