Has this always been an issue in academia, or is this an increasing or new phenomenon? It seems as if there is a widespread need to take shortcuts and boost your h-index. Is there a better way to determine the impact of research, and to encourage researchers not to feel so pressured to churn out papers and boost their citations? Why is it like this today?
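For readers unfamiliar with the metric being gamed here: an author's h-index is the largest h such that they have h papers with at least h citations each. A minimal sketch of the computation (the citation counts below are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last
    # rank i where the i-th paper still has at least i citations.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five hypothetical papers with these citation counts:
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have >= 4 citations)
```

Note how coarse the metric is: self-citations, citation rings, and salami-sliced papers all push it up, while a single landmark paper barely moves it.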
Academic mathematics, from what I've seen, seems incredibly competitive and stressful (to be fair, so does competition math from a young age), perhaps because for many mathematicians (outside topics with applications, such as number theory, probability, and combinatorics) the only career path is academia. Does this play into what this article talks about?
I think some of this has to do with... resentment? You're this incredibly smart person, you worked really hard, and no one values you. No one wants to pay you big bucks, no one outside a tiny group knows your name even if you make important contributions to the field. Meanwhile, all the dumb people are getting ahead. It's easy to get depressed, and equally easy to decide that if life is unfair, it's OK to cheat to win.
Add to this the academic culture where, frankly, there are fewer incentives to address misbehavior and where many jobs are for life... and the nature of the field, which makes cheating easy (as outlined in the article)... and you have an explosive mix.
The introduction of this article [1] gives some insight into the metric used in the Middle Ages. Essentially, to keep his position at a university, a researcher had to win public debates by solving problems nobody else could solve. This led researchers to keep their work secret. Some researchers even got angry about having their work published, even with proper credit.
What that means is that researchers become much more risk-averse and stay in their research area even if they believe it is not the most interesting or impactful. You just can't afford to go several years without publishing, e.g. to investigate a novel research direction, because without the publications it becomes much harder to secure funding in the future.
The demand for novel knowledge is always high. It is the supply that is short.
That’s why we hang around on HN hoping for something novel of true interest. You get a good find every once in a long while.
Math is particularly susceptible to this because there are few legitimate publications and citation counts are low. If you are a medical researcher publishing fake papers, you can more easily appear "high impact" on leaderboards (which are scaled by subject) by adding math topics to your subjects/keywords.
It really is a terrible thing, though I can understand how some researchers feel trapped in a system that gives them little if any alternative if they wish to be employed the next year. Not just one thing needs to be changed to fix it.
My take: I’ve published in well-regarded mathematical journals and the culture is definitely hard to explain to people outside of math. For example, it took more than two years to get my key graduate paper published in Foundations of Computational Mathematics, a highly regarded journal. The paper currently has over 100 citations, which (last I checked) is a couple times higher than the average citation count for the journal. In short, it’s a great, impactful work for a graduate student. But in a field like cell biology, this would be considered a pretty weak showing.
Given the long timelines and low citation counts, it’s not surprising that it’s so easy to manipulate the numbers. It is kinda ironic that mathematicians have this issue with numbers though.
The now-standard bibliometrics were not designed by statisticians :-)
I know for a fact that the number of fake journals exploded once the Govt. of India decided to use this metric for promotions.
It's a bit sad, really: in the classical world both these countries spent inordinate amounts of time on questions of epistemology (India especially). Now they are reduced to mimicking some silly metric that, even in the best case, only vaguely tracks knowledge production in the West.
The problem of AI generated papers is much more serious, although not happening on the same scale (yet!).
It is what we could call the “zone of occasional poor practice”. Included are actions like
I think this is more common in computer science papers. I see this all the time: 5-10 authors collaborate on a short paper, then collaborate on each other's papers in a way that minimizes effort while maximizing publication count and citation count.