People not used to the usual journal publication process often don't understand that review is not about gatekeeping. It's about improving the paper through the reviewers' suggestions. The CS conference system, with its hard accept/reject threshold, is so unproductive.
@hk2780 · 3 years ago
You would be right if this were 5 or 10 years ago. Nowadays it doesn't work that way. They try to reduce the acceptance percentage and say that this is how they keep up the quality of their conference or journal name.
@khurai111 · 3 years ago
@@hk2780 I know it still works that way because I publish in journals in other fields. Sure, selective journals will reject you, but they still have "major revision" so you can improve your paper.
@nauy · 3 years ago
It's people like you, who are used to the usual journal publication process, that don't understand that the review process always ends up being just gatekeeping. The review process, a form of curation, is supposed to improve the signal-to-noise ratio so that only high-quality papers get through the gate. But there is no objective mechanism to measure paper quality, much less principled mechanisms to adjust/optimize the review process. Without these, the review process is just capricious gatekeeping to keep the 'riffraff' out. Survivorship bias is a huge problem. Even in the face of evidence like this paper, you are still defending a shitty system. The whole process is ass-backwards. Publication should be open to ALL, THEN curation should be done by EVERYBODY, then optionally a conference or journal can highlight the top-quality ones. The quality of a paper should be measured objectively, by the number of citations from other papers weighted by their own quality. Authority should be measured by the number and quality of one's papers. This should sound familiar... That is at least one better way.
@khurai111 · 3 years ago
@@nauy Are we talking about removing peer review? Because I wasn't. Sorry you missed my point. For the record, I'm accustomed to both processes. The CS conference model works well when the number of papers is low. But now the volume has completely overloaded the system.
@nauy · 3 years ago
@@khurai111 Quite the contrary. If you just stop, listen to yourself, and really read what I wrote above, you should figure out what the problem really is, and its solution as well. The publication process is backwards in that it requires a small number of priests of knowledge to allow it to happen. This is non-scalable and non-optimal. I am proposing that publication should be done first and that everybody participates in the review process, which means reading the papers and, if they find them useful, citing them in their own work or contributing comments/suggestions. Then, after some time, we automatically know which papers are important work just by aggregating citations, comments, etc. I alluded to the PageRank algorithm used by Google in my comment above as an objective way of measuring quality using citations. I didn't use this example arbitrarily. What I want to highlight is its contemporary, Yahoo. Yahoo started out as a manually curated gateway to the web; look where they ended up. The fact is, 'peer review' is not really done by peers, but by a small priesthood of 'peers', and their decisions, influenced or incentivized by whatever biases or motives these guys have, determine the fate of publication. This places an artificial bottleneck or barrier on the spread of information. We need to expand the set of peers to everyone and invert the process: publish first and let everyone judge. Not only is this more scalable, it is a more objective way of measuring the value of papers.
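(A minimal sketch of the citation-weighted scoring idea described above, using a simplified PageRank-style iteration. The paper names, the citation graph, and the damping factor below are made-up illustrations, not data from the NeurIPS experiment or anything proposed in the video.)

```python
# Hypothetical citation graph: each paper lists the papers it cites.
citations = {
    "paper_a": ["paper_b", "paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": [],
    "paper_d": ["paper_a", "paper_c"],
}

def quality_scores(graph, damping=0.85, iterations=50):
    """Simplified PageRank over a citation graph: a citation from a
    highly scored paper is worth more than one from a low-scored paper."""
    papers = list(graph)
    score = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(papers) for p in papers}
        for citing, cited in graph.items():
            if not cited:  # papers citing nothing pass nothing on (simplification)
                continue
            share = damping * score[citing] / len(cited)
            for target in cited:
                new[target] += share
        score = new
    return score

print(quality_scores(citations))  # paper_c scores highest: it is cited by the most papers
```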
@bediosoro7786 · 3 years ago
The real problem is not that review is hard. It is that some AI department advisors require PhD students to publish a lot of papers in the top three conferences in order to graduate. That is where it can become a PhD nightmare for some students, because in some labs the students have to find their ideas and write alone, then add the name of an advisor who does nothing and doesn't want to see his name on a rejected paper.
@CharlesWeill · 3 years ago
Reminds me of the Big Tech interview "antiloop", where one group of interviewers would accept a candidate whereas another would reject the same candidate. The same interviewers would probably reject one another. And the only option for the candidate is to not take the rejection personally and apply one year later.
@Darkev77 · 2 years ago
100% agree, you’re echoing our opinion
@TechVizTheDataScienceGuy · 3 years ago
Can’t agree more. Well put 🙌🙌
@AIology2022 · 3 years ago
Completely agree. I personally don't care about conference and journal names; what is important is the content and its impact on the real world.
@hk2780 · 3 years ago
The sad thing is that many evaluation systems care about the names.
@junhwahur · 3 years ago
Among those 199/298 papers, many can be just borderline cases: 'not bad, but not good enough to accept either' or 'fine, but okay to reject'. I guess the randomness sadly comes in for those cases, and that results in the discrepancies between the two groups.
@minhphamhoang2894 · 3 years ago
I agree. I have read most of the rejected papers, and many were reproducible and actually verified their proposed hypotheses, rather than just being something that looks good.
@DeepGamingAI · 3 years ago
lmao that intro completely sold it
@JP-re3bc · 3 years ago
Peer review is a thing of the past IMHO. The interesting question is of course what could replace it. I don't know; there are various ideas around, but none really satisfying. Whatever the future review process is, it should avoid subjective criteria. I've seen papers rejected with (anonymous) comments that boil down to "I don't like the idea/author/institution/paradigm, hence reject". Peer review has become quite toxic, and I fully agree with Kilcher's video.
@jrrurrj · 3 years ago
I do not know what the problem is. The worst 60% are rejected. The top 10% are accepted. 30% of the papers are okay and determined by a coin toss. And in the long run, the h-index is what wins (although Vladlen Koltun had an interesting paper on the h-index not working well anymore).
@amaniarman460 · 3 years ago
If you put it that way, it makes sense 😅
@hk2780 · 3 years ago
The sad thing is that even a top-10% paper is often not that impactful in the real world.
@jaideepsingh662 · 3 years ago
"If I'm happy with it" is the real problem here. You can't argue with a random peer reviewer sitting on the toilet.
@realwingchun · 3 years ago
Thank you, Yannic, I totally agree.
@aleksandrazurawpathology · 3 years ago
Fantastic! Love the green screen behind you and this journal club style! Thanks a lot :)
@andytroo · 3 years ago
The good papers (the top 50% selected) are probably clear selects; you then have a large pool of "reasonable" papers. Selection can mean something, but rejection doesn't mean as much. Let's assume that all the papers selected by either committee are good enough for inclusion; conferences only have enough space for a certain number of presentations. The problem might be that there are more papers than slots.
@sabyasachighosh6252 · 3 years ago
Very nicely put. However, are the reviewers told that they have to accept or reject x fraction of papers? Maybe they can be free to accept as many as they want, and the overflow papers only go in the proceedings?
@samdirichlet7500 · 3 years ago
No one, and I mean no one, with any scientific literacy thinks peer review validates articles in any field. The peer review system has been broken for years. Peer-reviewed articles serve two purposes: 1) communicate results; and 2) serve as currency for buying promotion and grant money. The peer review system exists to keep that currency from suffering inflation. Here's a physics joke about peer review from the '60s: there are so many papers being published that if one were to stack them on top of each other as they are published, the top of the stack would be moving faster than the speed of light. This raises a paradox, because nothing can move faster than the speed of light. The paradox is resolved when we realize the top of the stack of articles is carrying no information.
@sinyud · 3 years ago
Beautiful joke
@JP-re3bc · 3 years ago
Hahaha brilliant!
@Wlaki89 · 3 years ago
LOL xD
@clubproject5483 · 2 years ago
You know, I think these papers get people so uptight and stressed about sounding like pretentious, pompous smarty-pants. The language really starts to not even make sense. Usage isn't always proper in a contextual, explanatory sense.
@sfarmapietre · 3 years ago
One's PhD advisor will always be biased. There needs to be some external, neutral evaluation of one's papers. Citations are not a good metric; e.g., you can get tons of citations if you publish a new dataset or a review paper, but that in itself cannot be a reason for granting a PhD. Peer review is broken, but it's unclear what can be done that is definitely better and would not slow the process significantly. I vote for: public reviews for all conferences in the style of ICLR, and mandatory engagement from all reviewers in the rebuttal dialogue.
@YvesQuemener · 3 years ago
I think the core problem is that decision makers love to have a metric to follow. Even a bad one makes their decisions less subjective. And it is hard to blame them: individuals are very bad at staying neutral! I have seen criticism of the publish-or-perish model everywhere in the scientific community. If someone were to make a metric that is not half as bad as publication counts and impact factors, I think it would be adopted very quickly. I would love to see some sort of reputation network emerge, in the style of liquid democracy. Imagine prestigious prize winners (who I hope are better vetted than conference papers) each being issued 1000 "reputation coins" and splitting them among teams that they feel are doing good work; each of these teams would then split theirs among people who have done work they use in their research, and so forth. Your final score is the total amount you received. Use colored coins to be able to dismiss cycles. Remove a few percent of the total at each step and/or over time.
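(A toy sketch of the reputation-coin propagation described above. All names, the seed amount, the split fractions, and the per-hop decay are invented for illustration, and the colored-coin trick for handling cycles is not modeled here.)

```python
def reputation_scores(endorsements, seeds, decay=0.05, rounds=3):
    """Total coins each participant receives after a few rounds of splitting.
    endorsements[giver] maps recipients to the fraction of the giver's coins passed on."""
    received = dict(seeds)   # seed holders (e.g. prize winners) count their initial coins
    frontier = dict(seeds)   # coins still being passed along this round
    for _ in range(rounds):
        next_frontier = {}
        for giver, coins in frontier.items():
            for recipient, fraction in endorsements.get(giver, {}).items():
                passed = coins * fraction * (1 - decay)  # burn a few percent per hop
                received[recipient] = received.get(recipient, 0.0) + passed
                next_frontier[recipient] = next_frontier.get(recipient, 0.0) + passed
        frontier = next_frontier
    return received

seeds = {"prize_winner": 1000.0}
endorsements = {
    "prize_winner": {"team_a": 0.6, "team_b": 0.4},  # winner splits coins across teams
    "team_a": {"researcher_x": 1.0},                 # team passes credit to work it built on
}
print(reputation_scores(endorsements, seeds, rounds=2))
```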
@DouwedeJong · 3 years ago
Thanks for making this video
@ssssssstssssssss · 3 years ago
It is an old-fashioned system not created in the information age. Now I think we should apply a more "agile" approach to getting results out. Also, the number of future citations is not a good objective. It overvalues popular topics. Overvaluing popular topics will get us into local maxima and the field will stagnate.
@sammay1540 · 3 years ago
This was probably the best introduction I’ve ever seen.
@Wlaki89 · 3 years ago
You are not wrong at all... On the contrary: I have a PhD in nuclear fusion (physics) and left academia for applied ML/DS. One of the reasons for leaving was the peer-review process and article manufacturing, i.e. quantity mattering instead of quality.
@timothy-ul9wp · 3 years ago
But don't grant and funding committees want a more objective metric than "my professor thinks I'm good"? Moreover, this would give professors a lot of power, which is not necessarily a good thing.
@pawelsubko7277 · 3 years ago
It's the other way around. The guys who control the conferences and journals are exactly that, a bunch of professors, and your paper is accepted or rejected based on their whim.
@cblackall21 · 3 years ago
Been there… Right on, brother!
@cedricvillani8502 · 3 years ago
Good thing peer review in private consortiums exists without any public access or bias.
@dennisestenson7820 · 3 years ago
Apparently, a peer review committee is too small a sample of the population of possible peer reviewers to represent that population without large variance.
@shivanshu6204 · 3 years ago
This is honestly so off-putting that I am now completely turned off by the idea of a PhD. I think I'll be better off working as a research engineer for 3 years after finishing my MS. At least with a big lab like OpenAI supporting my work, I can train a large enough model that the work becomes high quality just by the impact it causes, à la GPT-3. That paper earned a best paper award at NeurIPS last year despite GPT-3 being literally the same thing as GPT, just with 96 layers.
@SirSpinach · 3 years ago
As a PhD student, especially towards the tail end, you will likely have more freedom to choose your research agenda than as a research engineer at an industrial lab. But you likely won't have issues landing a PhD spot later after working at a good research lab for a few years. (I worked as an RE for about 3 years before grad school.)
@JP-re3bc · 3 years ago
Best guarantee to get your PhD and avoid wasting 5 years of your life: get a smart and respected advisor with lots of connections. Thank me later.
@kasuha · 3 years ago
I don't think anything can be done about it. Wherever you set the acceptance criteria, there will always be a gray zone of papers on the verge of being acceptable, and whether a paper in that zone gets accepted will always be random. Any tightening of the acceptance criteria reduces both the gray zone and the acceptance zone. The best we can ask for is to minimize the number of accepted papers that are wrong, even if that leads to rejecting some papers that are right.
@chadwick3593 · 2 years ago
Phones use true random number generators via ring oscillators. They are based on hardware that picks up random physical noise, and they are driven by the need to generate good secret keys for cryptography.
@Andrejcv98765 · 9 months ago
I find many reviewers cannot properly read a text. I do not mean they cannot read English, but they cannot make sense of a written text. With some training in text comprehension, I guess things would improve. I graduated in the sciences, but we never received training in text understanding or writing; that stopped in high school.
@markcarey67 · 2 years ago
I love that they called it a "confusion matrix"
@migkillerphantom · 2 years ago
That's just what it's called when you're counting how many instances a classifier got wrong, for example.
@zhandanning8503 · 1 year ago
It's interesting. I'm late to your videos, but I was recently told that peer review is not a correctness check but a recommendation on whether the paper gets published or not. Perhaps I'm coming to academia with a super negative attitude, considering I just started my PhD half a year ago, and maybe I am overthinking it.
@swagatochatterjee7104 · 3 years ago
Hey Yannic, during your PhD at ETH, did you have to face the same problems? How did you cope with them? I mean, I am having second thoughts now about leaving my cushy FAANG job and applying for a PhD, because I really want to study stuff in incredible detail.
@ulamss5 · 3 years ago
Current PhD student here. I'd suggest just doing that in your spare time and publishing to arXiv or something similar. Barely anybody in academia wants to be here; they're just here until they get an industry job. It would make no sense for you to do the reverse.
@brandomiranda6703 · 3 years ago
Interesting. I'm also a current PhD student and I have the reverse experience: I know more people who want to stay in academia than go to industry. Also, it's basically impossible to publish high-quality research while a second job is distracting you, unless you're Cal Newport.
@talha_anwar · 3 years ago
I would say try to get a publication or two first, to measure the depth of the water before going in.
@jrrurrj · 3 years ago
IMHO the system works. You need 2-3 papers spread out over 7-9 conference submissions, and this works because you get multiple trials. I have not seen any truly smart and dedicated PhD student fail because of randomness in the peer review system; most PhDs get their title eventually. Of course, motivation can take a big hit on rejection. Pursuing a PhD will give you skills you will not get anywhere else. For you, in the short term it means a financial hit; in the long term you will probably have a broader skill set and be more flexible overall, so there is a good chance it will pay off. Just make very sure you get a good PhD advisor who regularly publishes at top-tier conferences. And since you were already at a FAANG, getting back in will likely not be too hard, especially if you were promoted once. Btw: the FAANG interviewing system works exactly like the paper peer review system. People who are not good enough will fail the interview. A small number of people will consistently pass it (experience with math olympiads/programming olympiads helps). Most skilled and prepared people will pass ~30-50% of the time. Afterwards, interview scores are completely uncorrelated with future success within the company. Luck is highly underrated in this society.
@sabyasachighosh6252 · 3 years ago
Wouldn't recommend leaving a FAANG job for a PhD. If you already have a research bent of mind and critical thinking skills, a PhD would teach you nothing new that you can't learn on your own or through practical experience at work. Try to get into a more research-oriented team within your company instead. Or put in an extra 3-4 hours outside of work every day: pick papers from your field of interest and reproduce their results. Eventually, new ideas will start flowing. The only good reason to leave your job for a PhD is if you definitely want an academic job later. Source: left my Silicon Valley job for a PhD :)
@DanFrederiksen · 3 years ago
Yeah, I've long had little regard for 'peer' review. Maybe a surprising solution is to limit publication to the papers that are clear-cut good, with the reasoning that meh papers are not worth the community's burden to absorb. And there could be a second-tier publication venue for decent but not really impactful papers that you can rummage through for ideas.
@GrammatikFehler · 3 years ago
By assuming that both committees just randomly send out accepts/rejects with a given probability, you would still expect to see 11 papers being accepted by both groups due to chance. Meaning that in the end only 22 - 11 = 11 papers were deemed to be on such a high level that the random number generator had to be turned off for them.
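(A rough sanity check of that by-chance overlap figure, assuming both committees accept independently at random. The N of roughly 300 dual-reviewed papers and the acceptance probability of roughly 0.19 below are assumptions chosen only to reproduce the commenter's estimate, not numbers taken from the paper.)

$$\mathbb{E}[\text{accepted by both}] = N\,p^{2} \approx 300 \times 0.19^{2} \approx 11$$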
@bwan03 · 3 years ago
No, most of the submissions fall into the category of straight rejects no matter which reviewer group you assign them to. It's not random for most of the papers.
@clubproject5483 · 2 years ago
I love playing with data; cryptography makes everything, every day, a puzzle to put together. When you see the glint of light... every moment is then uncharted, unplotted, so you stop having to put together pieces of your past. And face front, cuz now the whole story has played out, and everybody is saying "yeah, it's new to me too..."
@lucfitt · 8 months ago
"Catch errors and do quality control": it does, though. There are many papers everyone agreed should not get in, which is the definition of quality control. It doesn't guarantee that all papers which should get in do get in; it just keeps the 60% that is trash out.
@simonhaddow5052 · 3 years ago
Cool rant!
@TimScarfe · 3 years ago
Amazing video, nice one 😎😎
@BboyDschafar · 3 years ago
The scientific community should decide whether or not a given piece of science is useful. We don't need no gatekeepers.
@martinschulze5399 · 3 years ago
Wrong. Science should never be a democratic voting system. That leads to herd thinking and to valuable new findings being outlawed because gatekeepers don't like them, whether for ego reasons or for lack of understanding.
@paulkirkland9860 · 3 years ago
This is for conference publications though; what about journals? They appear to have a slightly better (if only because it's longer) review process. I'd be interested to see how a big-name IEEE journal would fare in this. Where I totally agree with you is on impact-factor chasing vs. actually getting citations: the fewer publications a venue publishes, the more artificially inflated its impact factor can be. Citations and invitations to keynote would be a better metric.
@004307ec · 3 years ago
I do not think journals are different. I met at least one irresponsible and irrational reviewer in every one of my 5 different submissions.
@paulkirkland9860 · 3 years ago
@@004307ec Yeah, I mean, reviewer 2 will always exist. You'd hope journals are better, but I still see mass emails looking for people to review articles, which gives me the fear. I typically try to publish where I think I'll get traction and citations, especially since I'm in neuromorphic engineering and both the DL and neuroscience communities don't like our work.
@vadim0x60 · 3 years ago
So... the mutual information between these 2 committees is 1.06 (it would be zero if their decisions were completely uncorrelated and 2 if they always agreed). I am interpreting that (please correct me if I shouldn't) as a 1:1 signal-to-noise ratio, which... could be worse? Anyhow, this is much better than random.
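(For reference, the quantity being discussed is the mutual information between the two committees' accept/reject decisions, treated as binary random variables $X$ and $Y$; the 1.06 figure is the commenter's own estimate and is not re-derived here.)

$$I(X;Y) = \sum_{x,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}$$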
@odysseashlap · 3 years ago
My own suggestion would be to actually pay the reviewers and also to meta-review the reviewers. I haven't thought about how we could review the reviewers in an unbiased way, but I still think it's unfair to ask someone to do hard work (because reviewing is hard) and spend valuable time without getting paid for it, while others get rich from exactly this work. This is awful and undemocratic. Even we mere students have to pay a small fortune to attend a big event, but the big companies that get advertised there? Who do they pay?
@MubashirullahD · 3 years ago
What an intro!
@MrMIB983 · 3 years ago
Great video
@automatescellulaires8543 · 2 years ago
Man, what's next? Ask big game studios to build fun games, and Disney to hire good writers? This is the real world, mate.
@swagatochatterjee7104 · 3 years ago
The reviewers can be used as an oracle for solving the Halting Problem. Lol 😂😂😂😂
@marwaeldiwiny · 3 years ago
I completely agree with you, Yannic! Your words were on point. I think it comes down to hypocrisy, and I hope your words are being heard.
@swagatochatterjee7104 · 3 years ago
How the fuck is impact factor relevant for funding, or tenure, or getting a PhD? Jeez, think of developing a beautiful technique like Dropout only to be denied a PhD/tenure/funding because "your paper isn't impactful enough", just because it didn't get published at NeurIPS or CVPR.
@robindebreuil · 3 years ago
In that case, like in all cases, dropout is a reasonable option ;)
@AndreyKurenkov · 3 years ago
Generally fair w.r.t. NeurIPS, but I will say I doubt this is the case at smaller conferences (CoRL, RSS). I've received fantastic reviews from some conferences. Yes, peer review in AI is severely flawed, but on the whole I still think it's better than nothing; people post to arXiv anyway.
@finlayl2505 · 3 years ago
Got me with that beginning gag 😭
@PotatoKaboom · 3 years ago
well done
@LucasDimoveo · 3 years ago
I wonder if this process could be partially automated
@soumyasarkar4100 · 3 years ago
What about journal publications?
@BillyViBritannia · 1 year ago
I don't understand what the fuss is about. Depending on whether you want to prioritize rejecting bad papers or accepting good ones, you get different results; you can't have both. Assuming we want to keep the bad ones out of academia so as not to confuse and derail future research, a 50% chance of identifying a good paper is perfectly fine. You can increase that number only by also increasing the chance of accepting garbage papers, and I don't think we should do that. If you really believe your paper is good and just got unlucky, submit again next year.
@ChocolateMilkCultLeader · 3 years ago
Jesus, bro, that intro was vicious.
@Andrejcv98765 · 9 months ago
I agree that ideally one should assess people better, but the system is so inflated with PhDs and people who want to go into academia that a random solution scales better. At least all the rhetoric about great achievements, great papers and great reviews could be spared us.
@andrewowens5653 · 3 years ago
Yeah!
@nizarouarti1312 · 3 years ago
What is wrong with bypassing peer review? I mean, that was the way science was spread for centuries: someone decided to write something on papyrus or paper, and after some years their community decided whether the work was worth it or not. I think many ideas and papers are wasted because of peer review and all the biases attached to it. I thank arXiv for providing a way back to the former system, which can promote original ideas.
@AV-mb8lv · 3 years ago
Hey what about developing an AI reviewer instead? :)
@talha_anwar · 3 years ago
The only solution is to increase the number of reviewers; this will reduce the randomness.
@MegaNightdude · 5 months ago
Put out as many papers as you can that are barely not crap and throw them at the random number generator 😂😂😂
@usr604 · 3 months ago
Peer review in AI is complete BS.
@MegaNightdude · 5 months ago
Yannic 😂😂😂😂. Ooh, ooh, not enough experiments. 😂😂😂
@al8-.W · 3 years ago
I want to get my PhD from a research DAO. Who wants to be my advisor (and co-founder of the DAO, I guess 😂)?
@wafflescripter9051 · 3 years ago
I have a solution. Send me all the papers, and I will use machine learning AI to rate them all. I 100% guarantee this process will be completely accurate, and will ofc not be sharing my algorithm as it is intellectual property.
@q44444q · 3 years ago
I think the path forward is clear. We need to limit submissions to these conferences by large corporations, so that independent researchers can have a chance, and we need to accept only the papers which are accepted by both committees. There are too many papers at the big conferences anyway. And if this makes it hard to get PhDs, so be it. It *should* be very hard to get a PhD; this is a good thing. Nowadays anyone can get a PhD, and that's bad for society. Most of these people should just be getting a master's degree. Their research can go on arXiv where it belongs, and we can all finally process the papers that get into NeurIPS every year because there are far fewer of them. And then, to fix the problem of some papers just not getting in, every year we can ask reviewers to vote on, say, ten papers (which have not previously been accepted at NeurIPS) from 5 years ago or older, to honor at NeurIPS as an "honorary accept". This way, those papers join the ranks of other esteemed papers if they have actually made an impact over the past 5 years.
@fyodorminakov6092 · 3 years ago
Really funny.
@xinyuyang3451 · 3 years ago
The review is subjective.
@TheThunderSpirit · 3 years ago
It always happens: noob peers and a noob committee.
@sapito169 · 3 years ago
Mmm, I'm starting to believe the conspiracy theory that it is done by design in order to benefit big tech research.
@ionmosnoi · 3 years ago
It is not a lottery: if you are good, you will get accepted; if you are bad, you will get rejected; if you are in the middle, give up or level up. Mediocrity does not help anyone.
@mariomariovitiviti · 2 years ago
Peer review (clap clap)
@talha_anwar · 3 years ago
Look at citations. What a joke. I know people who manipulate them a lot.
@thntk · 3 years ago
Although I agree that the current peer review process has problems, I totally don't agree with your proposal to equate a paper's value with online popularity. Science is not a reality TV show.
@shadmanrohan6932 · 3 years ago
Peer review may have its problems but it's the best we have. There is no better alternative.
@nauy · 3 years ago
Nonsense! Yannic proposed a solution a while back. It makes much more sense than the current peer review process.
@khurai111 · 3 years ago
"conference peer review" is garbage. Not peer review altogether.
@nauy · 3 years ago
@@khurai111 No, all peer review is only as good as the reviewers and the process. I'd say they all suck because there is no control by the community over the reviewers and the process. With NeurIPS in this paper, we just get a look inside this normally nontransparent process. Why not do away with it, open the publication process up to the public, and track and assign credit via citations, as Yannic suggested? Citations are the true measure of the value of the ideas; Google uses this idea for their search engine, and look where it got them. Select the top-cited papers and celebrate them at the conferences. This should be done for all scientific publications and conferences. Duck the gate keepers.
@visuality2541 · 3 years ago
@@nauy I think that alone also has a problem, since it will be biased toward the authors' affiliations and names. Also, popularity doesn't fully determine the true potential impact of a work. Personally, I think what the community needs to focus on is the integrity of the work and of the reviewers.
@nauy · 3 years ago
@@visuality2541 If being used by the most people doesn't define value, what does? Hate to tell you, but even science is popularity-driven. The important thing is the efficiency of surfacing things that one finds useful. You are welcome to propose something that works better; in fact, that's the point. The more people are involved, the better. Let the good ideas bubble to the top organically. With objective metrics, you can always come up with strategies to deal with popularity if it is indeed undesirable.
@dinoscheidt · 3 years ago
😅 haha what a pun setup
3 years ago
ahahah
@tshev · 3 years ago
Why would we bother about an average paper? Why is it important to be consistent in that area? Because somebody needs a PhD, and getting a PhD starts to be a random process? Failure is good. There are too many PhDs in the world, and that has a negative impact on science. Failed a PhD? It is not the end of the world; it is the broken dream of a single mediocre researcher. You can take a job in industry and have a good life.
@someonespotatohmm9513 · 3 years ago
But then why keep up the charade by publishing the 2/3 of papers that are mediocre? Just publish the 1/3 most people agree is good and use another system to distinguish the rest. The current system can't separate "mediocre" papers by quality, and those can also contain useful knowledge.
@NavinF · 3 years ago
You gotta watch the first bit of his learning rate grafting video where he reviewed a reviewer. Sometimes the reviewer just doesn’t understand the paper as well as a random person in the field would and that’s what makes the paper’s score random.
@pw7225 · 3 years ago
Let me review your papers.
@theupsider · 3 years ago
There are accepted papers from Nvidia which just summarize other papers. How is that a contribution worth noting? There is no addition to the field; in fact, anyone who studies the field could write that summary. What we need is a committee which filters valuable information from useless information. Classify papers along the lines of: new contribution, optimization, interesting results, minor contribution, and the rest. Or think about any sort of categorization which helps people interested in the knowledge.
@tshev · 3 years ago
@@someonespotatohmm9513 People don't know the future. Also, I don't know how to measure short-term and mid-term success, but it is obvious for long-term evaluation. The number of citations is the wrong approach: I have a private list of the top mathematicians of all time, and this metric does not apply to them. Researchers should write letters to each other discussing work in progress, and peer review should be a final step. But we have to be very careful about whom we choose to trust.