This topic is very underrated in the sciences at the moment. Probably one of the most important, as it's the whole point of science.
@loganv0410 6 years ago
As a minor history note: the claim that carrots improve night vision was a WWII cover story for the deployment of radar.
@BugRib 6 years ago
Honestly, I think this has been an open secret in the social sciences for quite some time. Even a layperson like me can often read about studies done in psychology, for instance, and see that the methodology used was sorely lacking, especially compared to the certainty expressed about the results.
@tobiasvanbeekhuizen9018 6 years ago
Hey, I have almost finished my major in methodology and statistics in behavioral and medical sciences... and indeed, methodology is often lacking in social science research. However, let me assure you that statistical knowledge among biomedical and biology students, to name a few, is often even worse (I was surprised to find out). Luckily, statistical training is starting to become increasingly important at most universities, at least in the Netherlands. And I think most progress will start with improved education.
@TommoCarroll 6 years ago
I agree - the more repetitions we can feasibly do, the better. It's the basis of good science in my opinion! Anyone else?
@TommoCarroll 6 years ago
Flaming Basketball Club Yeah for sure! That's what my channel is all about! Well, science mainly! Why do you ask?
@AbCDef-zs6uj 5 years ago
I disagree. If a study cannot be reproduced, it means that the original was extra lucky -- and that isn't something you should take lightly and not appreciate. Studies with irreproducible results are like diamonds: everything gains value on the basis of it being rare and special. Replicating scientific results would be like turning your previous diamonds into cruddy lumps of coal, and you wouldn't want that, would you?
@km1dash6 6 years ago
A big problem with cutting the alpha level from 0.05 to 0.005 is that it makes a type II error (false negative) more likely.
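This trade-off can be checked with a quick simulation (an illustrative sketch, not from the video: the sample size of 20, true effect of 0.5, known-variance z-test, and critical values 1.96/2.81 are all assumptions):

```python
import random
import statistics

random.seed(0)

def rejects_null(sample, z_crit):
    """Two-sided z-test with known sigma = 1: reject H0 (mean 0)
    when |z| exceeds the critical value."""
    n = len(sample)
    z = statistics.mean(sample) / (1.0 / n ** 0.5)
    return abs(z) > z_crit

Z_05, Z_005 = 1.96, 2.81   # critical values for alpha = 0.05 and 0.005

# A true effect exists: mean 0.5, sigma 1, n = 20 per study.
trials = 2000
hits_05 = hits_005 = 0
for _ in range(trials):
    sample = [random.gauss(0.5, 1.0) for _ in range(20)]
    hits_05 += rejects_null(sample, Z_05)
    hits_005 += rejects_null(sample, Z_005)

power_05 = hits_05 / trials    # fraction of real effects detected at 0.05
power_005 = hits_005 / trials  # noticeably lower: more type II errors
```

The stricter threshold roughly halves the detection rate in this setup, which is exactly the false-negative cost the comment describes.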
@Petch85 6 years ago
Could you add links in the description to the sources for the studies you reference in the videos? This would make it much easier to find them. Also, it would be good practice :-)
@THEMithrandir09 6 years ago
If every respectable journal required authors to reproduce another study before their own is considered, this problem would go away quickly.
@deebmonkey23 6 years ago
Possibly one of the biggest and bestest Crash Course episodes of all time. Dabs and applause.
@frankschneider6156 6 years ago
The prime problem is publish or perish. If you only pay a scientist (= prolong the grant) when he publishes, so that he has to publish to not starve, then he will publish no matter what... and that's exactly as intended. Unfortunately, scientific discoveries can't be forced and are often just luck, but the scientist has to publish something... anything... to not end up as a hobo, so he will find something... anything... to publish. If not, he'll make something up if necessary. Bending the truth is bad... being without a place to live is worse, so guess what people do? What do you expect? Ruining your life just to be honest? Seriously... just drop the measurement that doesn't fit the intended explanation and everything will be fine. That's not even lying or faking data, it's giving people what they want.

The same, but even worse, is true for PhD students. Without a PhD you are academically dead meat and can look forward to driving a taxi for Uber or working as a callboy to not starve. To avoid these unpleasant alternatives you'll do anything to find something, somehow, that's publishable. Whether it is reproducible or relevant is completely irrelevant, because you need the money to not starve and pay the rent. In the end it works exactly as intended: financial incentives (= not starving) lead to an increase in the number of papers (= formally, scientific progress) and everyone is happy. The scientists don't starve, the people funding it get their papers, so everything is dandy... well, until they find out 20 years later that 90% of all the stuff published ain't worth the paper it used to be printed on, because it is scientific junk (and that's not only true for pseudo-"sciences" like social "science", but also for hard natural sciences). In economics, such an unintended outcome due to wrongly set incentives is called the cobra effect. The real problem here is that scientific progress can't be forced.

If one tries to force people to do something that they can't really influence, they'll find another way to somehow comply with the requirements... and the result is scientific junk. As long as the incentives reward mass production of papers and not real scientific research, this won't change. As a result, every scientist knows that a newly published paper ain't worth anything until it has been reproduced several times by independent (= not affiliated) labs. The major problem here is not science... the problem is people who don't understand science ruining it by thinking that something unknown and never done before can be solved by just throwing money at the problem and putting pressure on people. That's NOT how science works. That's how paper production works.
@AdamantRecluse 5 years ago
Frank Schneider Any chance this is something you've studied in some depth or perhaps firsthand? I'm currently working on a research project around this whole mess (from how we ended up here to the human nature/incentives side of it) and found your comment aligned well with my present take on the matter while providing new insights and angles I hadn't considered.
@bakunaut6255 6 years ago
It's also important to point out that when this is studied, different areas (of say psychology) have different replicabilities. So cognitive psychology does substantially better than social psychology, for example.
@Mister.Psychology 4 years ago
I was just whining about her explaining a stat with that slow-walking experiment in a previous video. And here she is disputing the study she used as an example. Good.
@teunvandenbrand1324 6 years ago
A point missed in this video is that of publication bias. Rejections of null hypotheses are more likely to get published than failure to reject null hypotheses. Finding no statistically significant effects is not very interesting reading material, but might help prevent other researchers wasting time and money on the same fruitless hypothesis.
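The selection effect this comment describes can be simulated (a hypothetical sketch: the sample size, number of studies, and z-test approximation are all assumptions, not from the video). Even when the true effect is exactly zero, the studies that clear the significance bar report inflated effects:

```python
import random

random.seed(1)

def run_null_study(n=30):
    """One study of a truly nonexistent effect (true mean difference 0).
    Returns the observed effect and whether it reached p < .05."""
    mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    z = mean / (1.0 / n ** 0.5)
    return mean, abs(z) > 1.96

all_effects, published = [], []
for _ in range(5000):
    effect, significant = run_null_study()
    all_effects.append(abs(effect))
    if significant:            # journals print the "positive" findings
        published.append(abs(effect))

avg_all = sum(all_effects) / len(all_effects)
avg_published = sum(published) / len(published)
# avg_published is several times avg_all, even though every study
# measured a zero effect: the literature overstates what the labs saw.
```

Roughly 5% of the null studies get "published", and their average reported effect is far larger than the average across all studies, which is the publication-bias distortion in miniature.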
@jiholee5237 4 years ago
She is by far the best at presenting a seemingly boring subject to the widest audience. Thank you for making this freely available!
@mariksen 1 year ago
THANK YOU SO MUCH for explaining such a complicated topic in a simple and understandable manner and in less than a quarter!! 😘😘😘😘😘
@Brainstorm69 6 years ago
Great topic choice. In neuroscience and psychology a lot of good things are happening to increase reproducibility. But it's important to remember that a lack of replicability does not immediately mean something is wrong. It's hard to do everything exactly the same way.
@Desrtfox71 6 years ago
It does mean something is wrong. Reproducibility is a core element of the scientific process. Yes, it's hard to do everything the same; that's why all data should be open, and the methodology has to be very accurately described and recorded.
@irwainnornossa4605 6 years ago
I see it a little bit differently. The problem is how people think about published results. A published result should mean something like "Hmm, there might be something here." And only those studies which have been replicated, re-tested, and analyzed to death would get published. Just like software development: when you first create something, it's buggy.
@MusicalRaichu 5 years ago
The problem is that journals don't like replication studies because they're not novel. A published paper is, supposedly, new research, not something done before.
@4Goalkeepers 6 years ago
An absolute gem. Thank you for this true Education!
@jessephillips1233 6 years ago
I wish there was a journal dedicated to re-running experiments. It would only accept papers that were expressly attempting to reproduce papers that had already been published somewhere. Ideally there would even be funding grants awarded to researchers reproducing results.
@unleashingpotential-psycho9433 6 years ago
I am going to start using the power pose at work and see what happens.
@abcabc-uv6ce 6 years ago
The importance of understanding math should be pointed out a lot more and be taken more seriously.
@victorjozwicki8179 6 years ago
12:22 Did the animation just dab?
@JEOGRAPHYSongs 6 years ago
♫ We need to replicate ♫ We need to deliberate! ♫ We see data repeat ♫ It is such a treat! ♫ To see science at work ♫ That is just one perk ♫ Of replication ♫ I feel elation! ♫♫
@lystic9392 5 years ago
"No single study is going to show us the way the world REALLY is. But that study and the studies that follow it that do and don't find the same relationships will get us closer and closer." Well said. Though if there is a political or intellectual dogma/homogeneity in universities and work environments it could be that doing more studies will only lead to further misdirection. I do think looking critically at the level of diversity of thought and political interest is going to be part of a solution to improve the accuracy of studies. I also see a point raised repeatedly in comments here and elsewhere, that it doesn't matter what you publish as a scientist as long as you can keep publishing. I have no experience with this, but if true, that doesn't sound like a system that inspires honesty in science. Maybe there is something to be gained by taking a closer look at how scientists keep their jobs. However it may be, this was a well put together, thought inspiring video.
@learninglockit 6 years ago
Is the replication crisis something that occurs intentionally or more than likely from simple human error? Also, one may consider whether or not a replication crisis is subconsciously the result of a scientist attempting to achieve results that they already have perceived as true.
@Hahahahaaahaahaa 6 years ago
One major issue that has always been present and widely known is that it is much easier to publish positive results than negative results. If 20 different scientists run the same study and one of them gets results by random chance, all of a sudden their published results look conclusive. But especially for something like psychology, a .05 confidence level could be fine. There's also the pressure not to try to replicate your own findings, because then you're basically losing publications for yourself.
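That "1 in 20 labs gets lucky" intuition checks out numerically (a toy simulation; the 20 labs, n = 30 per study, and two-sided z-test at alpha = 0.05 are illustrative assumptions):

```python
import random

random.seed(2)

def false_positive(n=30):
    """One lab tests a truly null effect; returns True if it happens
    to reach significance at alpha = 0.05 (two-sided z-test)."""
    mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    return abs(mean * n ** 0.5) > 1.96  # z = mean / (1/sqrt(n))

LABS, REPS = 20, 2000
lucky = sum(any(false_positive() for _ in range(LABS)) for _ in range(REPS))
observed = lucky / REPS

# Analytically: P(at least one of 20 labs hits a false positive)
expected = 1 - 0.95 ** 20   # about 0.64
```

So even with nothing real to find, twenty independent labs produce at least one "publishable" result almost two times out of three; if only that one gets published, the record looks conclusive.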
@stevieinselby 6 years ago
So the survey found that lots of people thought there was a problem with replication. But when they ran the survey again, they couldn't get it to show there was a problem. Meh. ;-)
@dijonmoutard6647 4 years ago
New survey indicates that surveys are meaningless.
@KommentarSpaltenKrieger 5 years ago
If we are speaking about the "Study finds: People who drink beer are smarter than people who drink wine" type of research, then I'm not baffled. But still, this is a huge problem. Maybe there needs to be a different vetting process or some sort of re-shaped peer review in which reproducibility is required. This would vet out all of those studies which everybody seems to buzz about. Quality could win against quantity.
@simplisummarizify 6 years ago
You define replicability and reproducibility clearly in the start, but from minute 5 on, you use "reproducible" when you mean (according to your own definition) "replicable".
@drmcgene 6 years ago
I think it's not intentional and, although simple human error does occur, it is not simply a mistake. The dogma we live by is "publish or perish". To be tenured or promoted or get funding you have to have a steady output of papers. So, in many labs there is the concept of the LPU, or least publishable unit. The smallest amount of data for a publication is probably just a couple of p-values less than 0.05. Also, in animal studies, there is a commitment to using the smallest number of animals that provides the statistical power necessary to generate a statistically significant result. To demand that all animal studies be replicated will require a societal consensus that we are probably going to triple the number of animals used in experiments. In other fields, funding increases will be required.
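The animal-numbers point can be made concrete with the standard normal-approximation sample-size formula for a two-group comparison (a textbook sketch; the quantiles 1.96 and 0.84 correspond to two-sided alpha = 0.05 and 80% power, and the effect sizes chosen are illustrative, not from the comment):

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-group comparison:
    n = 2 * ((z_alpha + z_beta) / d)^2, where d is the standardized
    effect size (Cohen's d)."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n_medium = n_per_group(0.5)    # a "medium" effect already needs 63 per group
n_small = n_per_group(0.25)    # halving d roughly quadruples the requirement

# Requiring two independent replications of a two-group study roughly
# triples total animal use, as the comment argues.
total_with_replications = 3 * 2 * n_medium
```

The quadratic dependence on effect size is why labs sized "just big enough" for significance leave no slack, and why mandated replication has a real cost in animals and funding.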
@frankschneider6156 6 years ago
Exactly... if you force your PhD students to make a decision: taxi driving for Uber for life, or optimizing the data so that they look better... guess what they'll do.
@valchonikolov266 6 years ago
Hey CrashCourse, I am a fan of your channel and I had one idea. I would really love it if you could add recommended literature for the courses. Or at least for Statistics.
@Ms1robby 5 years ago
Peer review and subsequent publication also need to be buttressed. Many poorly constructed studies with indifferent support are nonetheless positively reviewed and published simply because they seem topical, shocking and, in short, make great leading articles. Having read many poor papers, I've come to believe that evidentiary standards should be most stringent for new and unexpected results. One bad paper can be the pillar for 100 even worse ones. Finally, fewer bad papers would be written in the first place if we remember Karl Popper. Theories can only be DIS-PROVEN! Theories should be accepted only after two conditions are met: the theory can be disproven in principle and has survived all attempted and conceived falsifications. We need fewer and better papers.
@flamenco108 6 years ago
Thank you for this Crash Course. It was a really mind-opening piece of statistics ;-)
@aNytmare 6 years ago
DFTBQA! Thank you Adriene! You are awesome!
@AndroidOO3 6 years ago
How about contested studies? Two researchers do the same test in isolation at the same time, then present their findings and debate each other to come up with a third, combined test.
@AndroidOO3 6 years ago
NoisyCricket42 It attempts to fix the analytical problems caused by the wide diversity of subjective analysis through a debate toward that goal, creating a third test that is superior to the other two. It doesn't fix flukes, but it gives us a higher grade of analysis through reflection and interaction.
@rich8304 5 years ago
The real issue is the political use and abuse of this problem. A brief study of how our early sailing explorers were able to find their longitude is a great read and should make everyone sceptical of anything they hear, read or even witness first hand.
@AndroidOO3 6 years ago
Not all scientists are good scientists, a lesson that nobody learns.
@TommoCarroll 6 years ago
Unfortunately it's something that people should be more aware of. I've just got a degree in science; I don't practise science (just communicate it nowadays via the y'tubezzz), but the number of times I see people talking about adjusting the experiment to get the results they want or need in order to qualify for something infuriates me! _getting 'bad' results is part of GOOD science, collect data on everything_
@acetate909 5 years ago
This is really a psychological study problem. The various hard science fields aren't having this issue to this degree.
@acetate909 5 years ago
@@TommoCarroll I agree though. Bad science containing false data is worse than no science. Document all studies, even the ones that seem to fail, and utilize the entire spectrum of research data to make scientific progress.
@pozzowon 5 years ago
What is DFTBAQ???? Don't forget to be asking questions?
@Vliegendehuiskat 6 years ago
DFTBAQ? I don't know that one.... Anyone here to enlighten me?
@Tuckems 6 years ago
Vliegendehuiskat Don’t forget to be asking questions, or something like that
@tenou213 6 years ago
Yes, statistical reliability instead of significance needs to be emphasized. Luckily we can now ask these questions even with the knowledge that we will get a lot of pushback. First off: Vitamin C does nothing for your cold.
@lorettap.925 4 years ago
The heck is even "power posing"?
@anakagung7613 4 years ago
Kenneth Rothman already dismissed using p-values.
@6thwilbury2331 6 years ago
My favorite scientist makes a cameo at 3:08.
@robberyyyyy 4 years ago
How come their browser is in Dutch?
@lofatmo 6 years ago
@Crashcourse where is the data on the red cards study? I've been looking for more data on this for years!
@brianh2771 6 years ago
“Power Poses”? Geez, the bar on what can be called “Science” is pretty damn low these days.
@frankschneider6156 6 years ago
Brian H At least that has a proper solid foundation in behavioral biology. Body language IS an important factor in social behavior. But it gets a lot worse when people talk about "power ties". Seriously.
@EMES365 6 years ago
I wonder if people take into account that darker skin brings more attention. It makes one look stronger, more aggressive and toned. Hence bodybuilders tanning to look more cut and draw attention to their body definition. My, the things you think about when you're bored...
@armorsmith43 6 years ago
Carrots only help you see Messerschmitts.
@seaxneat9154 6 years ago
I promise u power posing is BS
@Eirikr430428 6 years ago
Stand up straight with your shoulders back.
@mikewilliams6025 4 years ago
The major reason that psychology, in particular, is having a reproducibility problem is because psychology, in particular, isn't science.
@TriaMaxwell 6 years ago
I thought the title was "The Republican Crisis"
@lordshell 5 years ago
Let's be accurate here: we're talking about government-funded, academic science having a replication crisis.
@marcopolo2395 6 years ago
this series goes awful
@peterbard 6 years ago
It seems like you're conflating statistics and science. I don't see them as equivalent.
@heris6091 6 years ago
It's related. Psychology and other social sciences use statistics to measure the effect of behavior or other things on humans, or whatever is related to that :v. If you don't believe me, you can read journals on ScienceDirect or other social science journals, and you'll find many of them use statistics to justify their research results.