It would be great if there were a 'null journal' that only publishes negative results. It might make publishing those studies more acceptable.
@amrojjeh Жыл бұрын
I like that idea. But journals should still try to be more specialized, since even negative studies should be reviewed as normal
@Ganntrey Жыл бұрын
I cannot agree more. NEGATIVE RESULTS ARE RESULTS!!!
@SimplyWondering Жыл бұрын
@@Richrichy Except journals already deal with the quality of studies. Null results aren't bad just because they're null, and if a journal highlights good studies that have null results, it would be an interesting read.
@Coz131 Жыл бұрын
@@Richrichy Journals can choose good quality studies you know....
@haldanesghost Жыл бұрын
I had an inside joke with some people in my lab back in the day that I wanted to start up a journal called “The Journal of Failed Experiments and Bad Ideas”.
@markbayer1683 Жыл бұрын
One of the things we teach in Organizational Ethics is that you (managers) shouldn't put your people in situations where they are incentivized and/or tempted to cheat. If the situation is strong enough, they will behave unethically. The situation that top level academics have been in - for decades - is giving them strong, strong incentives to perform research fraudulently. The rewards for publication are far too tempting, and the checks and balances on how they produce data are far too weak. Given the situation academic leaders have put them in, we should not be surprised that these unethical behaviors are happening. And they are undoubtedly still happening. Gino is merely the highest status tip of what is probably a large iceberg. Probably.
@DerekCroxtonWestphalia Жыл бұрын
That is not, however, a realistic approach to academia. Pro athletes also have a huge incentive to cheat because the better you are, the more you get paid (and, really, even more so amateur athletes, because just getting to the pros is the biggest pay bump you will ever get). Unless we make academia a low-status profession with small salaries, or stop rewarding the professors who produce the best results, the incentives to cheat will be there. What we need is a way to catch them or preempt the cheating.
@meneldal Жыл бұрын
I wouldn't say that the rewards are strong; more like, if you don't have a good paper every x months, you'd better have tenure or you get fired. And when you've been stuck in academia and have a skillset not very useful for work outside, I get cheating to save your own job.
@masterdecats6418 Жыл бұрын
They should’ve had everyone in the field sign at the top of the page for honesty's sake when they publish lmao. Fake field. Fake results. Don’t treat your employees like shit, and they won’t rebel against you.
@TheThreatenedSwan Жыл бұрын
Where is such a mechanism? And if it were enforced, why would the semantics of "cheat" not just change?
@JavaScripting64 Жыл бұрын
“Probably”
@runningwithsimon Жыл бұрын
PhD in biomedical research here, but I left academia some time ago. There is hope, but let's not deny that there is a huge problem across all fields that goes well beyond Gino. I have tons of respect for some of my peers who stayed, but they themselves would be the first to admit it's a minefield. It's usually recognized that ~30% of publications will have major inconsistencies (i.e., something that can't be replicated independently, or even in the same lab by a different researcher). That may seem like a lot, but I'm sure it's similar in other fields; the biggest difference is that in biomedicine it's easier to replicate the exact same thing, and therefore to find the inconsistencies (vs., for example, behavioral science; not attacking it, just saying that not spotting the errors doesn't mean they aren't there). One would think that big publications from famous labs in prestigious journals would be immune to this, but it's the opposite. Why would you lie to publish in the Icelandic Journal of Whatever, vs. publishing in Nature or Science? It's NOT all fraud, however, but it'd be naïve to think fraud is not a major factor. I think anyone working in a big lab has seen some suspicious postdoc with results that are just TOO clean; you can't prove anything, but you suspect something is off and avoid collaborating with them. IMO there are two big issues in academia. The first is the strong, pervasive incentives (if you want to stay in academia, you NEED that high impact factor to have grants and a chair, etc.). Even PhD candidates who want out need to publish, otherwise they'll stay forever. That doesn't necessarily mean fraud, but it can mean cutting corners and sloppy research. I think the incentive issue is biggest in fields with limited career options beyond academia, and bigger still in smaller fields where nobody can or will replicate the exact same experiment. The second biggest issue is the frequent lack of supervision, and that most research is done by basically noobs.
How often did my professor come down to the lab to teach me during my PhD? ... Never! Who designed, ran, analyzed, and wrote the papers? ... Me. Sure, we discussed, but at the end of the day you drive your research mostly independently. How many years of research experience did I have when I started being independent? ... Not much. When you need to learn a new lab technique, you'll be directed to a lab mate who has at most 2-4 years of experience and is most certainly not an expert in that technique. Heck, I've been labelled an "expert" in techniques I had barely done twice. Experiments can be poorly designed and/or poorly executed despite good intentions. And peer review? ... Please, give me a break! I have yet to receive a good critique of my work through it; lab discussions with labmates and my advisor were 1000x more helpful, though they have their limitations. Plus, come on, who here hasn't been asked to peer review a paper on behalf of someone else?
@whycantiremainanonymous8091 Жыл бұрын
And that postdoc with *too* clean results ends up becoming a bigshot professor, while those who stayed away are out of academia... About fields where there's no employment market outside academia, well, much depends on the specifics of the field. You don't find data fraud in the humanities, because there's no data analysis. There's plenty of plain old BS, and of professors using peer review to push rubbish work by their cronies and block rival schools of thought, but that's not fraud, really.
@philkim8297 Жыл бұрын
The whole system sounds so flawed and in need of a major revamp.
@whycantiremainanonymous8091 Жыл бұрын
You know, I keep coming back to that megastudy, and it keeps irking me. It reminds me of all the times I read a paper in Behavioral Science (henceforth referred to by the acronym BS), and had a strong feeling that in this field, somebody decided that quantitative methods should _replace,_ not supplement, good common sense. In this very comment section, ordinary people with plain common sense pointed out several potential problems with the design: the test of messages with humor might have had no effect because the joke is lame (and unclear too); the message "Your vaccine is waiting for you" might have been effective not because of the implied ownership, but because of the implication that this is a time-limited invitation. And if I know anything about BS literature, all such objections will be flatly ignored, and the "findings" reported as scientific truth. So, in the end, the main thing we "gained" from this megastudy is PR for Walmart. This is just a bastardization of true science. Fraud is the least of your problems, folks.
@mxvega1097 Жыл бұрын
Exactly. Without open data, it is just a very big closed study. How to apply the replication / reproducibility function to it? Another researcher will have to do another study with 689,000 results? To what end? To study and learn, or to provide functional data for a health campaign which has already concluded its "right answer"?
@gaerekxenos Жыл бұрын
Agreed. The testing premise/methods are not good for a number of them. The logic just isn't there for a number of things. Humor that isn't appropriate for the situation isn't typically appreciated by certain people. Vaccinations are one of those things where the humor might not be appreciated; however, applications for something like University Deadlines are something where a bit of humor can actually be strongly appreciated. Hell, one of the reasons I am considering applying to a certain place for Graduate school that I ended up never finishing my application for Undergrad years ago was because they sent witty postcards with very punny reminders for completing my portfolio for submission back when I was applying for Undergrad. If you sent me a joke for vaccinations, I am either not going to take you very seriously or just treat it as if it were any other ordinary reminder. Some additional things with why "your vaccine is waiting for you" worked are things like the assumption that "oh, there was work that was done to make it simpler for me to go in and grab it and be done," or to guilt-trip people with "this is now labeled as yours and if you don't use it, then it will just sit there and be wasted." I didn't even think about the "time-limited invitation" aspect until you mentioned it
@brhelm Жыл бұрын
In molecular biology, there is an increasingly common requirement/standard to make the direct outputs of various collection devices directly available to the public (i.e. "raw data"), including sequences (DNA/RNA sequencing), microarray outputs, flow reports (FACS, etc), and qPCR raw data. Most of these are fairly standardized or follow just a couple of standards throughout the research community. If there is some kind of tampering suspected, then other researchers can go directly to the raw data and attempt to recreate the analysis from scratch. I'm surprised behavioral sciences haven't required that papers include data deposition of the equivalent raw data (surveys, brain scan data, etc). It doesn't completely eliminate the potential for fraud, but a lot of fraud is conducted in the "analysis" part of the science, especially where that raw data may be impossible to go back to and/or recreate because of various limitations in collecting the data.
@fallenangel8785 Жыл бұрын
In addition to pre-registration from the side of researchers, journals should base their initial acceptance of papers on this preregistration (i.e., the research idea and study design), not on the results.
@bram5683 Жыл бұрын
This is actually starting to become available; there are now quite a lot of journals in various fields that offer 'registered reports' - the study design gets reviewed first and the results will be published (in principle) regardless of outcome
@fallenangel8785 Жыл бұрын
@@bram5683 can you provide me with some examples?
@bram5683 Жыл бұрын
@@fallenangel8785 Ah sure; actually I just noticed even Nature has them now (see their editorial on preregistered reports from February 22nd of this year). But the Center for Open Science has a list of journals on their site. I don't think I can put a link here, but you'll find it if you search for registered reports cos / participating journals
@ohnenamen0992 Жыл бұрын
THIS! In Psychology and I would believe in other fields as well, there is a huge publication bias. This could only be solved if the journals accept papers based on the design rather than the results.
@niekverlaan7227 Жыл бұрын
I came to the comment section to say exactly this. I mean, if you're working for a popular glossy magazine, I understand that you like to publish the most spectacular (sounding) articles. But in science, the result shouldn't count for whether the article is published or not.
@luszczi Жыл бұрын
"Have you heard the one about the flu? Don't spread it around!". I think the effectiveness of this joke might have been moderated by its funniness.
@whycantiremainanonymous8091 Жыл бұрын
On multiple authors: recall that Gino's retracted studies had quite a few authors. In one case, at least two different fraudulent studies (by Gino and by Ariely) appear to have been included in one paper. Multiple sets of eyes, on multiple sets of fraudulent data...
@whycantiremainanonymous8091 Жыл бұрын
On megastudies, don't we run a strong risk of false positives with these? If you test for 40 hypotheses, on average two will give "significant" results (at p < 0.05) just by chance.
@clankb2o5 Жыл бұрын
That's why they needed a massive sample size. They took it into account.
@whycantiremainanonymous8091 Жыл бұрын
@@clankb2o5 Isn't p < 0.05 the same cutoff regardless of sample size, though?
@clankb2o5 Жыл бұрын
@@whycantiremainanonymous8091 I should have been more clear. I do not believe that an absolutely huge team of researchers would forget something as basic as a Bonferroni correction. They must have ensured the statistical validity. My (dare I say reasonable) assumption is that the lower p-values that they used required a larger sample, and that is why she brings up the extraordinarily large sample size. Because of course the effect size doesn't change. And no, p-values do depend on sample size for a given effect size.
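The worry in this thread can be checked with a quick simulation. This is a sketch, not the megastudy's actual analysis plan: it exploits the fact that under a true null hypothesis a p-value is uniform on [0, 1], and compares the chance of at least one false positive across 40 tests with and without a Bonferroni correction.

```python
import random

random.seed(0)
ALPHA, N_TESTS, N_SIMS = 0.05, 40, 10_000

def family_wise_error_rate(threshold):
    """Fraction of simulations in which at least one of 40 true-null
    tests comes out 'significant' at the given threshold."""
    hits = 0
    for _ in range(N_SIMS):
        # Under a true null, each p-value is uniform on [0, 1].
        pvals = [random.random() for _ in range(N_TESTS)]
        if any(p < threshold for p in pvals):
            hits += 1
    return hits / N_SIMS

print(family_wise_error_rate(ALPHA))            # uncorrected: roughly 1 - 0.95**40, i.e. ~0.87
print(family_wise_error_rate(ALPHA / N_TESTS))  # Bonferroni-corrected: back near 0.05
```

So with 40 uncorrected tests, some "finding" is almost guaranteed; dividing the threshold by the number of tests restores the intended error rate, at the cost of needing a larger sample to reach the stricter cutoff, which matches the point about sample size above.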
@billscott1601 Жыл бұрын
Aren’t all papers peer reviewed? When my wife, an MD, published her papers, they were all peer reviewed. She frequently reviews papers published by others in her field.
@8cyl6speed Жыл бұрын
I guess not well enough
@Heyu7her3 Жыл бұрын
@@HerrRotfuchs That, plus if you're prominent in the field or your field is novel, your peers are your friends, and it can be easier to identify a paper even in a double-blind process.
@TomJakobW Жыл бұрын
@@HerrRotfuchs And even if you get access to the original data (which is happening more and more), you can only check it. In essence: peer review is desk work, not lab work, so it’s not the end-all-be-all. Actual “knowledge” is only formed through a rigorous, arduous process involving research, review, discussion, replication, model forming, predictions, more research, and so on. People are naive and somewhat gullible about novel research, but we also aren’t wizards or machines; if we want to fix it, we need something practical and feasible; trust will always play a role in human systems. Solutions that keep the status quo financing and predatory hierarchy, that are expensive, that take time and effort, or that ignore the human element won’t work!
@falrus Жыл бұрын
In my field, I once requested the original source code to verify that the graphs were indeed correct. The request was denied, and our lab simply refused to review the article. That doesn't mean the article won't be reviewed by somebody else.
@markjoseph2801 Жыл бұрын
Add a Consumers Union / Consumer Reports-style entity to validate the data and report fraud. In the end, the public pays the price as these universities suck up federal funds on inane and rigged studies. Crowd-sourced review with big data analytics.
@guard13007 Жыл бұрын
I hate that the solution is saying a company should stick its fingers into science.
@champagne.future5248 Жыл бұрын
My takeaway is that behavioural science has some creepy ramifications in that it’s used by governments to refine their propaganda techniques
@joaoneves4150 Жыл бұрын
When it says "waiting for you", it makes it seem that it won't wait forever, so either I get it now or I lose my chance.
@updatingresearch Жыл бұрын
Very wary of this "hope". I am sure megastudies can be defeated by those with enough malign motivation. Real validation is not peer review; it is repeatability, and repeated studies by independent researchers.
@masterdecats6418 Жыл бұрын
Money. All it takes to fck it up is money.
@vparez4363 Жыл бұрын
No, this is completely wrong. We do not need more working principles from industry in academia, we need fewer! The greed ported from industry with stupid measures like citation counts and the h-index is what causes academia to behave as it does. If it weren't for the incentive to commit wrongdoing, there would be no need for security.
@masterdecats6418 Жыл бұрын
1) Take away incentives 2) Stop copyrighting data so no one can see it 3) Make the data available for scrutiny 4) Maybe have a programmer create a program that scrutinizes data statistically, and keep it away from the people trying to publish so they can’t try to “game” the software
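Tools in the spirit of point 4 already exist. One real example is the GRIM test (Brown and Heathers' granularity check): with integer survey responses, the true mean must be (sum of integers)/n, so many reported means are simply impossible for the stated sample size. A minimal sketch, assuming means reported to two decimals and the modest sample sizes typical of lab studies (the function name is mine):

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test sketch: can a mean reported to `decimals` places arise
    from `n` integer responses? A consistent integer total must sit
    within 1 of reported_mean * n for n up to ~200, so checking the
    neighboring totals suffices at that scale."""
    target = round(reported_mean, decimals)
    implied_total = reported_mean * n
    for total in range(int(implied_total) - 1, int(implied_total) + 2):
        if round(total / n, decimals) == target:
            return True
    return False

print(grim_consistent(5.18, 28))  # possible: 145/28 rounds to 5.18
print(grim_consistent(5.19, 28))  # impossible: no integer total over 28 gives 5.19
```

A screening pass like this over a paper's summary tables is cheap, needs no raw data, and is exactly the kind of check that is hard to "game" once the reported statistics are fixed in print.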
@estern001 Жыл бұрын
What does it mean that a study "failed?" Layperson here. I understand that science is about collecting data. I was told that all data is important. Don't we learn something even when we don't get the expected result? Shouldn't we value that research just as much as data that supports a hypothesis?
@drbachimanchi Жыл бұрын
As an undergraduate I was part of a data collection team... I carefully copied data from my friend with minor modifications to save time for biking... it is still cited as a groundbreaking study to this day.
@internetmovieguy Жыл бұрын
Hot take: pier review is just academic circle jerk. If we want the field of research (for all subjects) to grow then the pier review system needs a complete overhall.
@OptimalOwl Жыл бұрын
Isn't it really weird that no one has ever done a really thorough systematic review on the efficacy of the journal and peer review system? Researchers basically donate their work to journals for free, and then the journals turn around and sell that work for exorbitant prices. That's how you get those stories about various journals clearing 30+% profit margins. I don't think it's unreasonable for society to demand some quality assurance in return for that privilege.
@TheAlison1456 Жыл бұрын
this isn't hot at all it is just obvious
@blujaebird Жыл бұрын
Peer review, not pier
@blujaebird Жыл бұрын
Also...*overhaul
@1789Bastille Жыл бұрын
It is actually quite surprising how most scientists are clueless about data. I wish there were something like a never-ending peer review as part of an everlasting metastudy.
@masterdecats6418 Жыл бұрын
Cool. Who’s gonna pay for it? Science and capitalism only mix when capitalists want it to.
@mxvega1097 Жыл бұрын
I absolutely disagree that a centralized data and oversight system is going to solve more problems than it creates. C'mon, this is game theory and institutional economics 101. When researchers come to rely on a centralized system, the inputs will fit the parameters and methods of the system, and the outputs will invariably be force-normed. Participants will not internalize the methodological and epistemic solutions and express them in better studies, they will likely do bog standard research and claim verification based on acceptance by the central system. Call it the Ministry of Scientific Accuracy approach. A better system would be more transparency, more challenge, acceptance of audit at any stage, incentivized replication, and ownership by the researcher of the integrity of the process. Integrity can't be outsourced. It can be reinforced, including through a well-designed whistleblower function, an ombudsman, etc. Sounds burdensome? Not really, if the alternative is competitive lawsuits and even lawfare. Try defending a lawsuit for years and maintaining focus, funding, and prospects. [interesting that Pete is working in internal audit - my field is mechanism design and risk management, incl in large banks]
@splatsma Жыл бұрын
I wonder if there are any attempts to critically analyze the validity of whole fields. I got a couple of years into my chosen field (international studies), only to realize it's entirely dependent on opinion. Yet it presents itself as a clinical, fact-based critique. Which is far from reality.
@masterdecats6418 Жыл бұрын
Can’t go after psychology. How else could businesses falsify studies and publish them as fact to chase profits while harming people.
@jota5044 Жыл бұрын
4:30 I find it bold to assume that a bank can store the data. The original source of the data can and most likely will have a personal interest in the outcome of any study using its data.
@davidBTAS Жыл бұрын
Have you had a chance to watch, or are you aware of, the video by YouTuber Quant stating that Dan Ariely may actually be a fraud?
@PeteJudo1 Жыл бұрын
I’ve seen the video. Have something in the works, can’t say too much right now.
@JEBavido Жыл бұрын
Wild to hear about the pharmacy/vaccine encouragement wording today because I just got one of those exact messages from CVS. They said MY vaccine awaited me.
@yemiojo2265 Жыл бұрын
Even if you choose to put a stamp of authenticity on papers, crooks will still devise other means to measure up to get that stamped! It is like getting the "Organic" or "Green" badge on food products.
@cipaisone Жыл бұрын
While in academia I was, like many, frustrated by the number of papers of dubious validity, especially those in “high impact” journals. This, together with the sheer number of papers published, many of little or no relevance, convinces me academia will collapse within a few decades at most, unless something changes. I believe it is about time that states invest in “parallel” institutions alongside research centers, whose aim is not to do research but to try to replicate available research studies, so as to check at least the fraction of studies becoming popular and potentially relevant (i.e., worth preserving for the future, as it is unlikely that most of the “science” will survive the decades or even centuries to come). I think checking available research data is becoming as relevant as, or in fact more relevant than, doing new research, and states should support such activities. It would also be a way to give work to some of the many capable researchers who cannot continue in the very competitive market of academia (where the extreme competitiveness and lack of control of outcomes is, I believe, the main source of fraud in academia).
@salganik Жыл бұрын
1. Science existed for hundreds of years and suddenly, all of a sudden, it will collapse. Sounds legit. 2. The vast majority of publications simply get no attention, so the state does not need to do anything to learn that such researchers are doing a bad job. 3. How can a parallel institution, while not having experts in most niche fields, check or replicate anything, when it comes to state-of-the-art equipment, computations, or theoretical complexity? 4. The state hires researchers for many reasons, including producing independent thinkers who can lead research in academia or industry. And how would revealing the 1% of cheating researchers significantly help the research or the state?
@cipaisone Жыл бұрын
@@salganik 1) How many people were doing research 2-3 hundred years ago, compared to the last 20-30? How many publications were produced per year back then, compared to now? My man, things in humanity changed exponentially lately, I do not know where you have been… 2) The vast majority of publications get no attention from people, but they do from search engines, so what happens when you search for a trivial spectroscopic feature today, or the composition of some industrially well-known coating, is a never-ending list of garbage. I do not know about you, but I do not think this is a useful way of managing knowledge. 3) It's not even clear what you mean. 4) The “state” (I do not know which state you refer to, but very broadly, most states) invests very little in research, and that little spent on science is to a large extent spent on exotic “hot topics” and cryptic nonsense, with only a small fraction leading to innovation in science or industry… I think your 1/100 estimate of unreliability in science is way lower than the reality (and by the way, where did you get that statistic? Or is it just BS? Just curious…). I think there is an old Veritasium video on YouTube making a better estimate of how much published data is wrong, go check it out.
@salganik Жыл бұрын
@@cipaisone My third point is very simple: if an institution wants to replicate a fraction of studies as you suggested, it would need funding comparable to all the universities combined, and employees with similar qualifications. And even this would not eliminate cheating, as not all studies are based on data you can reproduce. This includes theoretical studies, heavy simulations, and observations of nature. And, of course, funding an institution with a budget comparable to the universities and institutes is unaffordable for most countries. Norway spends around 8% of its budget on education, a substantial fraction of which goes to universities, and this doesn't include governmental research institutes. And the Veritasium video was not at all about the fraction of research that is falsified, but about studies that make statements not supported by data. The fraction of retracted papers is way less than 1%; there are a number of papers about it. And there are many anonymous questionnaires in which researchers were asked if they ever cheated with their results. There is a range of numbers, but on average something close to 1%.
@Planetoid52 Жыл бұрын
Great interview. It's a happier world when people of integrity are doing the research and are designing processes and systems to reduce fraud and also to incentivize studies that may not produce 'wow' results but that still contribute to mega-studies. Love your channel.
@surajsajjala2857 Жыл бұрын
Harvard is a big L.
@AhmetEfendioğlu1 Жыл бұрын
What does L stand for?
@dengesizd Жыл бұрын
Liar?
@lisleigfried4660 Жыл бұрын
@@AhmetEfendioğlu1 L = loss
@TomJakobW Жыл бұрын
@@AhmetEfendioğlu1 Internet lingo. “W” means winner or win, “L” means loser or loss.
@AhmetEfendioğlu1 Жыл бұрын
@@lisleigfried4660 thx man
@GutsofEclipse Жыл бұрын
7:25 It's ironic that he's talking about doing exactly the kind of thing that's making people view academia as a left wing partisan machine that's abandoned all of its principles without any disclaimers. He didn't have any other examples?
@Armz69 Жыл бұрын
Can you do one on social desirability bias in behavioral studies and strategies to overcome that?
@haroldbridges515 Жыл бұрын
Actually, he has no basis to be sanguine about the extent of data fraud, since scrutiny of the type that exposed Gino is rare.
@AbbaKovner-gg9zp Жыл бұрын
the reaction shots of you nodding like a goon while she's talking were top notch keep it up
@parrotraiser6541 Жыл бұрын
Studies of failure may be boring and unpublishable by themselves, but they are valuable and should be seen, to avoid future mistakes. Engineers study failures for that very reason. Mega studies make that possible, by including the failed hypotheses.
@lisleigfried4660 Жыл бұрын
2:16 bro's acting like a stock footage actor
@123-ig9vf Жыл бұрын
What about funding systems? There is more harm to science in how the funding agencies operate. Why are proposals not reviewed in a double-blinded mechanism?
@benjaminkuhn2878 Жыл бұрын
Okay, so you just want to throw tech at the issue. Let's hope there is valid training data for the AI.
@MadsterV Жыл бұрын
Studies on how to manipulate people. Neat.
@TripImmigration Жыл бұрын
None of this avoids the ghost-people problem, and megastudies are only available to influential academics. It's good, but the measures are still very naive given the reality.
@gaerekxenos Жыл бұрын
Funny enough, the prompt for Vaccination of "Waiting for you" or "Reserved for you" isn't actually just 'ownership' -- it's a way of guilt tripping people. "We've gone out of our way to make a reservation for you," "this is a resource that is going to be wasted if you do not take it," etc. Another thing related to that is "We have taken the work out for you" or "we've made it easier for you to complete this task" - basically the removal of barriers to make it simpler and easier to access, which is implied if they have 'reserved' the vaccination for you as there would be an assumption that whatever complicated paperwork or coordination effort for securing it has already been done and that there would be less of a wait time to go through with the vaccination (there isn't all that much of a complicated process in the first place as far as I am aware, but the illusion that whatever might be there now isn't can be a motivator)
@Xgjigzigzyixiy Жыл бұрын
None of this has hope. This guy's very biased and extremely shy about approaching real, more substantial academic fraud. You're picking socially safe and easy topics lol.
@masterdecats6418 Жыл бұрын
He chose a cringe af career path. Now that the openly corrupt field is now even more openly corrupt, they have to triple down to justify their degrees and semi-wasted time.
@RemotHuman Жыл бұрын
Do we want humanity to know how to manipulate humans with things like ownership language?
@TheAlison1456 Жыл бұрын
what? yes
@andrewmiller3055 Жыл бұрын
First, Prof. Milkman is saying better safeguards to minimize fraud are necessary (aka let's not deny the obvious: cheating scandals require reform beyond colleagues chastising each other behind closed doors or commiserating over coffee). Professor Milkman shows a lot of poise and leadership in moving quickly toward solving a huge problem while not taking a potshot at anyone, e.g., "the solution will sideline more bad actors." Unfortunately she doesn't make any waves in terms of highlighting some egregious bad actors that need to be dismissed. I'm glad Pete Judo does this for everyone, aka cutting through bad-faith arguments and pointing out the field has a problem without rushing to the solution end. He's done a really good job of handling the dumpster fire affecting the field rather than avoiding it, and has even said that it's important to take a second look at references so that only behavioral science that is correctly vetted is merited. I am also glad Pete Judo squarely puts the onus on the people involved AND the incentives, rather than merely the incentives. That's the right thing to do, because at the end of the day people are still responsible for their work, no matter what that means in terms of professional consequences. Anyways, thanks for looking at all the dimensions and going where Professor Milkman can't, but also giving Professor Milkman a chance to express what will make a difference, both for better science and a better profession beyond the scandal.
@masterdecats6418 Жыл бұрын
Universities and labs are still businesses. Of course they're going to be predatory toward everyone involved.
@antsmith739 Жыл бұрын
Having questionnaire results published directly to a blockchain may help.
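The core of the blockchain suggestion is an append-only, tamper-evident log; a real deployment would add timestamping and some consensus mechanism, but the tamper-evidence alone can be sketched as a simple hash chain (function names here are mine, purely illustrative):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a survey record to a toy hash chain. Each block commits to
    the previous block's hash, so silently rewriting an earlier record
    breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"prev": prev, "record": record, "hash": digest})

def verify(chain):
    """Recompute every link; returns True only if no record was altered."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"respondent": 1, "answer": 4})
add_block(chain, {"respondent": 2, "answer": 5})
```

If responses are appended as they arrive, later "cleaning" of inconvenient rows becomes detectable by anyone holding an earlier copy of the chain head; it does not, of course, stop fake data from being entered in the first place.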
@charlesdarwin5185 Жыл бұрын
A raw data set has to be sealed and sequestered in a repository with the IRB or equivalent before analysis is done.
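The "sealing" step this commenter describes can be as lightweight as recording a cryptographic fingerprint of the raw file with the IRB before analysis begins; at publication time, re-hashing the file proves it was not edited in the interim. A minimal sketch (the function name is mine):

```python
import datetime
import hashlib

def seal_dataset(path):
    """Fingerprint a raw data file with SHA-256 before any analysis.
    Re-hashing the file later reveals whether even a single byte changed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so arbitrarily large files fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "file": path,
        "sha256": digest.hexdigest(),
        "sealed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

The seal record (not the data itself) could be deposited with the IRB or a journal; as the reply below notes, this only proves the data wasn't altered after sealing, not that it was honestly collected.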
@whycantiremainanonymous8091 Жыл бұрын
Sure. But what if it's already fake?
@Sheikdaddy Жыл бұрын
We have a system of academia where one can create a spreadsheet with any data, and as long as it looks legit, nobody verifies that the research was done? If you want to fix every soccer game, you don't need to bribe entire teams. You just need one goalie. You only need a couple of fabricators in a system of a lot of people to be able to fabricate anything you want. How do you keep your faith in a data-driven world when any data could be fudged? When, in the future, there will be scandals of published studies turning out to be ChatGPT-created?
@garyquinn8014 Жыл бұрын
One thing which really strikes me about this whole episode is the amateurishness of it all. From the original experiment, to the simple types of data collected, to the data fakery itself, it's all so trivial and basic. This is (was?) a highly regarded professor at Harvard, earning over $1m a year, and all she can think of is an extremely simple experiment involving where to sign a document? Even the alleged fake data is so, so simple; not some sophisticated exercise in subtle data manipulation, just some basic data juggling. I really worry about the future of US academia.
@niekverlaan7227 Жыл бұрын
I love this comment section! It's full of like-minded people who make critical remarks in a mostly positive way. It really adds to the video itself. Thanks all! And to add something to the discussion: I've always learned that one example is no example. You always need a few examples to understand the essence of what they show. That same principle might apply to studies too. You need more than one study to prove a hypothesis.
@zxdc Жыл бұрын
@1:30 which website is that?
@PeteJudo1 Жыл бұрын
Ground News! Use my link in the description for a discount :)
@Dragoon91786 Жыл бұрын
Maybe I'm a tad absurd, but providing researchers with the means to test (as you said) "absurdly large sample sizes" seems to me to be what *_should_* be the norm. I realize why it isn't (and there are a *statistically significant number of reasons why 🤣), but when setting goals for a planet's worth of people, larger sample sizes can help even out all the craziness that is the human condition. There are so many variables that smaller sample sizes seem absurd to me, unless one is trying to figure out how to model the study: pre-studies to help improve the actual study's design.
This might have beneficial effects on these so-called "mega-studies" by giving them the opportunity to control for legitimate variables that would otherwise skew results away from a more accurate model or description of reality. While this will certainly limit the scope of a study - say, to people with a particular genotype - those specifications can then be clearly stated. Basically, controlling for variables and stating those variables so that more information about some aspect of reality can be parsed: "When we controlled for ambient temperature we saw greater results than when testing during inclement weather." Say the study in question has results from people in regions where a massive heat wave or cold front was occurring. When new data sets were tested accounting for weather, you could see how weather affected responses to the text messages reminding people to get vaccinated. Would cold days in a given region have a greater impact on subjects' tendency to attend their flu shot when reminded?
Or, say, accounting for ADHD. What happens if the sample had an unusually high number of people with executive function impairment? Compared with a control group and a group with stronger executive function, how might the results differ? What could be meaningfully said about controlling for these variables, etc.? It would be nice to see more studies get the opportunity to control for more variables, and to have all of this data, as well as the pre-study (or pre-studies), registered/verified along with the main study. The cherry-picking of data, as opposed to transparently controlling for variables, is so notorious.
@blujaebird Жыл бұрын
It's interesting to me that this video has such low views compared to the other ones.
@falrus Жыл бұрын
Megastudies should be secure enough even against Gino-type data manipulation.
@giovannigiorgio42069 Жыл бұрын
I have an idea that uses the blockchain to validate the data used for research; however, I am unsure of how viable it is, as I am not very knowledgeable about the inner workings of blockchain. Would a system which uploads raw data from a study or field research at predefined intervals, alongside the date and time of each upload, potentially reduce the likelihood of a tampered dataset?
@bigboi1004 Жыл бұрын
Blockchains are just a worse version of the good old append-only database, which would be sufficient to implement your idea. A realistic/easy implementation is that raw data is pushed to a version control system (think GitHub), and the researcher has no permission to rebase (meaning to alter the past). This allows researchers to modify data, which can be used to anonymize it or correct mistakes, but any changes would be visible to an auditor. Auditors would see timestamps along with what data was changed and exactly how.
This, however, doesn't prevent a malicious researcher from tampering with the data *before* it hits the database. That problem alone renders the idea pretty much a non-starter. It's not a problem that can be solved with software at all, and I say this as a computer science student. People get extremely clever when they're motivated, and "fading into obscurity because you aren't publishing groundbreaking research" is unfortunately a strong motivator for some to cheat. A smart enough researcher will bypass the anti-fraud mechanism, and can then claim that their data is legitimate *because* it made its way through the system (imagine someone responding to "Do you have the key to that door?" with "Well, I'm inside, aren't I?").
I think the problem is ultimately incentive. There are strong reasons to cheat and, as things stand, it can take years to get caught. I'll admit that I don't have a real solution in mind because the scope of the problem is too large, but I'm certain that it isn't a software solution.
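To make the append-only idea concrete, here is a minimal hypothetical sketch in plain Python (standard library only, not any real repository system): each entry stores a hash of the previous one, so a retroactive edit breaks the chain and is visible to an auditor - while, as the comment above notes, nothing stops fake data from being appended honestly in the first place.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_record(log, record):
    """Append a record to a hash-chained log; each entry commits to the one before it."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    entry = {
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    log.append(entry)
    return entry

def verify_chain(log):
    """Re-derive every hash; returns False if any past entry was altered."""
    prev_hash = GENESIS
    for entry in log:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash}, sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"participant": 1, "response": 4})
append_record(log, {"participant": 2, "response": 5})
print(verify_chain(log))           # True: untouched chain verifies
log[0]["record"]["response"] = 1   # retroactive tampering
print(verify_chain(log))           # False: the edit breaks the chain
```

This is essentially what both a blockchain and an auditable version-control history give you: tamper *evidence* after ingestion, not tamper *prevention* before it.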
@caglayanozdemir348 Жыл бұрын
Awesome work
@d3202s Жыл бұрын
Behavioral "science." Please.
@Dragoon91786 Жыл бұрын
Did they sort this data for ADHDers? Cuz we'll majorly throw off your stats if not accounted for in the "reminder" department! 😅
@JennaHartDemon Жыл бұрын
It's interesting. This is all great. With deepfakes we are going to have to have recordings cryptographically signed on the collection hardware to verify their authenticity. It's good to see all these other branches of STEM focusing on authentication of data as well.
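A rough sketch of the signing idea, purely hypothetical: real collection hardware would use an asymmetric key in a secure element, but symmetric HMAC from Python's standard library shows the verification logic - any alteration of the data after signing makes the tag fail to check.

```python
import hmac
import hashlib

# Hypothetical: in practice this key would live in tamper-resistant hardware.
DEVICE_KEY = b"secret-key-burned-into-collection-hardware"

def sign(data: bytes) -> str:
    """Produce a tag only a holder of the device key could generate."""
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Constant-time check; fails if data or tag was altered after signing."""
    return hmac.compare_digest(sign(data), tag)

recording = b"raw sensor dump ..."
tag = sign(recording)
print(verify(recording, tag))           # True: untouched recording verifies
print(verify(recording + b"x", tag))    # False: edited recording fails
```

`hmac.compare_digest` is used instead of `==` so verification time doesn't leak information about how much of the tag matched.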
@Veptis6 ай бұрын
Clinical trials do preregistration: you register the study and research question before you run the trial and publish. But it's not enforced everywhere. Does it really defeat p-hacking? Who is to blame? The journals not doing their due diligence - abusing free labour from reviewers, editors and authors and still getting paid for it, while only providing prestige and reputation that they don't really care for. In my field of language model research the landscape is horrible: everything is preprint. But at least we don't have journals - we have conferences. In astrophysics, you often split into two or three completely disjoint teams who get the exact same data and then produce a result using different methods. And quite frequently they do arrive at different findings... But there data is shared with everyone because it's very much external. There still can be various biases in selections, though.
@Ganntrey Жыл бұрын
It seems to me that "mega-studies" are just pre-emptive meta-analyses. This is definitely good, but it's not inherently new. It's just academic responsibility preempted.
@whycantiremainanonymous8091 Жыл бұрын
No. Meta-analyses cover many studies testing the same hypothesis. Mega-studies cover many hypotheses in one study. That's much more methodologically questionable.
@byronhunter6893 Жыл бұрын
idk about ownership 🤔 I'd think most people would be more attracted to hospitality for a vaccination, something a bit distant from the ironically cold mechanisms of a hospital. A bit anecdotal perhaps, but I've never known of anyone that's entirely comfortable with a vaccine "for them".
@FinnBrownc Жыл бұрын
You need git-based change tracking for data. Tech has been doing this for literally decades.
@ArturEjsmont Жыл бұрын
For the behavioural science community not to push for a change in incentives is surprising. Control and bureaucracy is a losing battle.
@morgengabe1 Жыл бұрын
The recurrent reproduction crisis in psychology was never a threat to academia.
@jloiben128 ай бұрын
So a mega-study is basically a super meta-analysis
@killa3x Жыл бұрын
Has he done a video on Dan Ariely? That dude's a straight fraud, no?
@Heyu7her3 Жыл бұрын
Thank you for providing strategies to use in qualitative research!
@plugplagiate1564 Жыл бұрын
... and to comment on the megastudy topic: why is a survey of 680,000 people unreliable? If they used the data of the NSA, it would become a rather humble number.
@meneldal Жыл бұрын
@@JS-oh2dpNot to mention a bunch of studies are actually megastudies in disguise, they just remove the questions that didn't lead to any interesting results
@opheliaelesse Жыл бұрын
Who cares about millions ! of wasted, tortured animals? Few.
@sacman3001 Жыл бұрын
Just nudging ain't science
@rubberduck2078 Жыл бұрын
the "ownership language" sounds a lot like a lie
@stanleyklein524 Жыл бұрын
Katy Milkman is not a scientist (behavioral science is a conceptual oxymoron -- unless you think a discipline that violates two of the most basic criteria for X to be considered a science still merits the status of "science").
@MadocComadrin Жыл бұрын
While I have serious concerns about behavioral science programs in business schools (due to weird and misplaced incentives), any field that uses the scientific method is a science.
@luszczi Жыл бұрын
Hey it's the pretentious "professor" and his insider knowledge again. 😂 Is there any other type of oxymoron than a conceptual oxymoron? And what are those criteria you're referring to? You speak of things nobody has heard of before, educate us! 🤣
@TheAlison1456 Жыл бұрын
why do you get to decide what is science?
@masterdecats6418 Жыл бұрын
@@MadocComadrinYeah but what if that “science” is routinely bastardized by fake results. Your hypothesis and results all mean shi* if you’re going to fake it.
@stanleyklein524 Жыл бұрын
@@MadocComadrin You are confusing a necessary condition with a sufficient condition.
@stephmaccormick3195 Жыл бұрын
Didn't one of them Trumps go to Wharton? 🤣🤣
@l.w.paradis21085 ай бұрын
Non-medical people have no business finding ways to "nudge" people into any medical treatment. Nothing she mentioned has anything to do with determining whether a particular person should take a particular drug or vaccine. This reminds me of the Gorgias. 😂
@Ganntrey Жыл бұрын
I've left similar comments on every video in this series. PEER REVIEW!!!!! IF A FINDING IS REPEATABLE, THEN IT IS VALID; IF NOT, INVESTIGATE THE ORIGINAL PUBLICATION!!!! The whole scientific method is subject to and validated by the process of peer review and repeatability.
@masterdecats6418 Жыл бұрын
Unless the PR makes these businesses $1 Million+, they won’t pay for it.
@masterdecats6418 Жыл бұрын
Imo always trust a neurologist or an endocrinologist before you believe a psychologist..
@erandeser5830 Жыл бұрын
In universities "professors" walk free, teaching that there is no difference between men and women. Go after their publications.
@lukasbormann4830 Жыл бұрын
Harvard is done I’d say
@TomJakobW Жыл бұрын
unlikely
@saraluvsyuo Жыл бұрын
it will never be lmao no one will care
@andrewmiller3055 Жыл бұрын
Ha. If I were given a dollar over the years for every time someone said that Harvard was done. Harvard's outliving all of us, our descendants and theirs too.
@brownieboiii Жыл бұрын
Penn > Harvard frfr
@MadocComadrin Жыл бұрын
I agree, but Penn (and especially Wharton) is also filled to the brim with rich kids so out of touch with the rest of us that they couldn't tell you the rough price of a banana.
@markwest19639 ай бұрын
Penn ✊
@TekilaTheKilla Жыл бұрын
Damn... the megastudy reminds me of a fundamental concept in free-market capitalism: competition. Having multiple competing ways of evaluating the same phenomena leads to a clearer and more concise picture of what actually helps or not. Maybe the solution is to increase competition between different researchers and methods to find the ones that most accurately describe reality.
@redoktopus3047 Жыл бұрын
>wharton
@zhenyaka13 Жыл бұрын
Love it! So... how do we know that your guest or you aren't lying? Isn't what you practice a lie? Just another way to manipulate humans into doing what you think they should? What happened to persuasion with truth?
@MarkMackenzievortism Жыл бұрын
en.wikipedia.org/wiki/Grievance_studies_affair
@Heyu7her3 Жыл бұрын
😮 That's about as bad as Mindy Kaling's brother's med school acceptance...
@TomJakobW Жыл бұрын
We are all just humans; if someone wants to be a "gender researcher", it just is a reality that it attracts... well, you know which kinds of people. And those people will have strong extra-scientific influences, like politics, peer pressure/gaining the respect of your peers, biases and so on. This inevitably will seep into the research - which is an immense problem, and it's especially transparent with these fields you referred to, which is why they are so openly criticized! Also an issue in journalism! We need to find actually viable, fair and "human" solutions to these "outer" problems that go beyond the populist (and also purely political) criticism that is prevalent in more right-wing media. Ironically, a solution is indeed "more diversity"! 😅 But maybe next time diversity in thought, and not in being a "gender minority"... We have a lot to lose!