Funnily enough, the first “professional” coding I ever did was writing up a Stroop test in Visual Basic for a neuro professor, and I recall the effect being undeniably clear. At a personal anecdotal level, I would time myself with matching colors versus non-matching, and even with practice I could not bring my non-matching times down to my matching times.
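For illustration, here is a minimal console sketch of that matching-versus-non-matching timing comparison, in Python rather than the original Visual Basic, and assuming an ANSI-capable terminal for the ink colors (the colors, trial counts, and prompts are just placeholders):

    # Rough Stroop-style timing sketch: compare response times when the word
    # matches its ink color (congruent) versus when it does not (incongruent).
    import random
    import time

    ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m", "yellow": "\033[33m"}
    RESET = "\033[0m"
    COLORS = list(ANSI)

    def trial(congruent):
        word = random.choice(COLORS)
        ink = word if congruent else random.choice([c for c in COLORS if c != word])
        start = time.monotonic()
        answer = input(f"Name the ink color of: {ANSI[ink]}{word.upper()}{RESET}  ")
        elapsed = time.monotonic() - start
        return elapsed if answer.strip().lower() == ink else None  # drop wrong answers

    def mean(times):
        times = [t for t in times if t is not None]
        return sum(times) / len(times) if times else float("nan")

    matching = [trial(True) for _ in range(5)]
    non_matching = [trial(False) for _ in range(5)]
    print(f"mean matching time:     {mean(matching):.2f}s")
    print(f"mean non-matching time: {mean(non_matching):.2f}s")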
Recycling. Some papers seemed to be near duplicates of prior work by the same academic, with minor modification.
Faddishness. Papers featured the latest buzz technologies regardless of whether they were appropriate.
Questionable authorship. Some senior academics would get their names included on publications regardless of whether they had been actively engaged with the project. I saw a few academics take on risky and potentially interesting subjects, but they all risked their careers in doing so.
But most of all, there was a dearth of true innovation. The university noticed this and established an Innovation Centre. It quickly filled up with second-hand projects, all frustratingly similar to projects done in the US a few years earlier.
Of course there were exceptions, and learning from them was a genuine growth experience for which I am grateful.
Funding agencies can't evaluate the research itself, so they look at numbers: metrics like impact factor, citations, h-index, publication count, etc. They can't simply say "we pay this academic whether he publishes or not, because we trust he is still deep in important work even when he isn't at a stage to publish", because people will suspect fraud, nepotism, and bias, and the funding is often taxpayer money. Not that the metrics prevent any of that, of course, but it seems that way. So metrics it is, and gaming the metrics via Goodhart's law it is.
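For concreteness, one of those metrics, the h-index, is just "the largest h such that h of your papers each have at least h citations"; a minimal sketch (the citation counts in the example are made up):

    # h-index: largest h such that h papers have at least h citations each.
    def h_index(citations):
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 2, 1]))  # -> 3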
I don't think it's super bad, but it adds administrative work and busywork overhead on top of the actual research. Progress per person slows somewhat, since the same work has to be salami-sliced and marketed in chunks, but there are also far more people in the field. Most of them produce very low quality stuff, but that's not a big loss: a few decades ago these people would not have published anything at all, they would just have held some teaching professorship and published every few years, perhaps only in their national language. It increases the noise, but there are ways to find the signal, and academics figure out how to cut through it. It's not great, it's not super easy, and it pushes out a lot of people who dislike the grind, but plenty see it as a relatively good deal to move to a richer country and do this.
Case in point: everybody is doing AI research nowadays, and NIPS has something like 15k submitted papers. But the rate of innovation in AI is not much higher than it was 10 years ago; I would even argue it is lower. What are all these papers for? They help people build their careers as proofs of work.
A typical approach to science is finding your niche and becoming a person known for that thing. You pick something you are interested in, something you are good at, something underexplored, and something close enough to what other people are doing that they can appreciate your work. Then you work on that topic for a number of years and see where you end up. But you can't do that in AI, because the field is overcrowded.
It's crazy: most Master's students applying for a PhD position already come with multiple top-conference papers, which a few years ago would have gotten you about 2/3 of the way to the PhD; now it just gets you a foot in the door when applying to start one. Bachelor's students are already expected to publish to get a good spot in a lab for their Master's thesis or internship. And NeurIPS has a track for high school students to write papers, which, I assume, will boost their applications to start university. This type of hustle has been common in many East Asian countries and is getting globalized.
Exactly. It used to be that way in AI a decade ago. Different subfields used bespoke methods you could specialize in, and you could take a fairly undisturbed 3-5 years to work on them without constantly worrying about being scooped and having to rush out something half-baked to plant flags. Nowadays methods are converging, and it's comparatively less useful to be an expert in some narrow application area, since the standard ML methods work quite well for such a broad range of uses (see the bitter lesson). This also means that a broader range of publications is relevant to everyone: you're supposed to be aware of the NLP frontier even if you are a vision researcher, you should know about RL developments, and so on. Thanks to more streamlined GitHub and Hugging Face releases, research results are also more available for others to build on, so publishing an incremental iteration on top of a popular method is much easier today than 15 years ago, when you first had to implement the paper yourself and needed expertise to avoid traps that weren't mentioned in any paper and were assumed to be common knowledge.
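As a rough illustration of how streamlined that has become: assuming the Hugging Face transformers package is installed (and you have network access for the first download), a published pretrained model can be pulled and run in a few lines instead of reimplementing the paper from scratch; which default model the pipeline picks is up to the library version:

    # Pull and run a published pretrained sentiment model in a few lines.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default model on first use
    print(classifier("Reviewer 2 wants three more baselines."))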
It may not be a big problem for overall progress, but it makes people much more anxious. I see it in PhD students: many are quite scared of opening arXiv and academic social media, fearing that someone was faster and scooped them.
Lots of labs are working on very similar things, and the labs are less focused on narrow areas; everyone tries to claim broad ones. Meanwhile, people have less and less energy to peer review this flood of papers, and there's little incentive to do a good job there instead of working on the next paper.
This definitely can't go on forever and there will be a massive reality check in academia (of AI/ML).
I agree that many fields essentially have papers as "proof of work", but not all fields are like that. When I worked as a mechanical engineer, publication was "the icing on the cake", not "the cake itself". It was a nice capstone you did after you had completed a project, interacted with your customers, built a prototype, filed a patent application, etc. The "proof of work" was the product, basically, and you could build your career by making good products.
Now that I am working as a scientist, I see that many scientists have a different view of what their "product" is. I have always focused on the product being the science itself: the theories I develop, the experiments and simulations I conduct, etc. But for many scientists, the product is the papers, because that is what people use to evaluate your career. It does not have to be this way, but we would have to shift towards a better definition of what it means to be a productive scientist.
Perhaps one expects overgeneralization in consulting blogs, though.
birn559•17h ago
It's also better than any alternative I know of. I haven't heard people pushing the idea of restructuring the process, the only exception being that journals shouldn't cost (that much) money and that institutions should instead pay to publish a paper. That wouldn't, however, change the foundation of the process.
friendzis•14h ago
And you are back at square one: peer reviews become the currency used in academic politics. A relatively small group of tenured academics has every incentive to form a fiefdom of its own. Anonymization does not help, as everyone knows everyone else's work and papers anyway.
ancillary•10h ago