It turns out that if you accept papers based on a fixed percentage of submissions, increasing the acceptance rate shrinks the pool of unaccepted papers, and that larger percentage of a smaller queue ends up yielding about the same number of accepted papers overall.
I also have this funnel simulation https://i.postimg.cc/gz88S2hY/funnel2.gif
+ Same number of new produced papers per time unit.
+ Different acceptance rates.
+ But... *same number of accepted papers* at equilibrium! With lower rates you just review more.
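The equilibrium argument above can be sketched in a few lines. This is an illustrative model, not the linked simulation: each time step, a fixed number of new papers enters the pool, a fixed fraction of the pool is accepted, and the rest stays for re-review. The function name and parameters are mine.

```python
def simulate(new_per_step=100.0, accept_rate=0.2, steps=500):
    """Toy funnel model: returns papers accepted per step at the end."""
    pool = 0.0
    accepted = 0.0
    for _ in range(steps):
        pool += new_per_step            # new submissions join the pool
        accepted = accept_rate * pool   # fixed fraction gets accepted
        pool -= accepted                # the rest stays and is re-reviewed
    return accepted
```

At the fixed point, `accept_rate * pool == new_per_step`, so the accepted count per step converges to `new_per_step` regardless of the rate; a lower rate only means a larger standing pool (more reviewing), exactly as the comment says.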
empiko•1h ago
There are just too many negative eventualities reinforcing each other in different ways.
Al-Khwarizmi•45m ago
Revise and resubmit is evil. It gives the reviewers a lot of power over papers that ends up being used for coercion, sometimes subtle, sometimes quite overt. In most papers I have submitted to journals (and I'm talking prestigious journals, not MDPI or the like), I have been pressured to cite specific papers that didn't make sense to cite, very likely by the reviewers themselves. And one ends up doing it, because not doing so can result in rejection and losing many months (the journal process is also slower), maybe with the paper even becoming obsolete along the way. Of course, the "revise and resubmit" process can also be used to pressure authors into changing papers in subtler ways (to not question a given theory, etc.)
The slowness of the process also means that if you're unlucky with the reviewers, you lose much more time. There is a fact that we should all accept: the reviewing process always carries a huge random factor due to subjectivity. And being able to "reroll" reviewers is actually a good thing. It means that a paper that a good proportion of the community values highly will eventually get in, as opposed to being doomed because the initial very small sample (n=3) is from a rejecting minority.
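The "reroll" point above has a simple probabilistic reading. As a hedged sketch (the 40% rejection share, the majority-vote rule, and independence across resubmissions are my illustrative assumptions, not claims from the thread):

```python
from math import comb

def p_panel_rejects(reject_frac=0.4, n=3):
    """Probability a random n-reviewer panel has a rejecting majority,
    when reject_frac of the community would reject the paper."""
    return sum(comb(n, m) * reject_frac**m * (1 - reject_frac)**(n - m)
               for m in range(n // 2 + 1, n + 1))

def p_accepted_within(k, reject_frac=0.4, n=3):
    """Probability at least one of k independent resubmission 'rerolls'
    avoids a rejecting majority."""
    return 1 - p_panel_rejects(reject_frac, n) ** k
```

Under these assumptions a paper that 60% of the community values still draws a rejecting panel roughly a third of the time with n=3, but the chance of being rejected on every one of a handful of rerolls falls off geometrically, which is the sense in which resubmission lets the community's majority view win out over a small unlucky sample.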
Finally, in my experience reviewing quality is the other way around... there is a small minority of journals with good review quality, but in the majority (including prestigious ones) it's a crapshoot, not to mention when the editor desk-rejects for highly subjective reasons. In the conferences I typically submit to (*ACL) the review quality is more consistent than in journals, and the process is more serious, with rejections always being motivated.
suddenlybananas•22m ago
However, I think this notion of a paper becoming "obsolete" if it isn't published fast enough speaks to the deeper problems in ML publishing; it's fundamentally about publicizing and explaining a cool technique rather than necessarily reaching some kind of scientific understanding.
>In the conferences I typically submit to (*ACL) the review quality is more consistent than in journals
I've got to say, my experience is very different. I come from linguistics and submit both to *ACL and to linguistics/cognition journals, and I think journals are generally better. One of my reviews for ACL was essentially "Looks great, learnt a lot!" (I'm paraphrasing, but it was about 3 sentences long; I'm happy for a positive review but it was hardly high quality).
Even in *ACL I find TACL better than what I've gotten for the ACL conferences. I just find with a slow review process a reviewer can actually evaluate claims more closely rather than review in a pretty impressionistic way.
That being said, there are plenty of journals with awful reviewing and editorial boards (cough, cough Nature).