Should you only be allowed to publish five papers before tenure?
MORE PUBLICATIONS ?= LOWER QUALITY
There’s an article out in Science titled The Pressure to Publish Pushes Down Quality. The author’s point is that as the volume of papers goes up, the good ones get lost in the bad ones.
There are other downsides as well. People waste a lot of time funding, writing, reviewing, editing, and reading shoddy papers. The number of papers needed to get tenure keeps going up, which puts a lot of pressure on early-career academics. Lastly, when people have an incentive to publish a lot, they’re not encouraged to thoroughly review what’s come before. If they did, they might learn their idea is not all that new. This can prevent cumulative progress from being made in science. As an aside, it’s hard to blame authors for not thoroughly reviewing the literature when so much junk has been published in recent decades.
The annoying thing about the Science article (and this blog post thus far) is that while many agree that increasing pressure to publish is bad, and that there are too many bad papers out there, there aren’t a lot of proposed solutions. What should we do about it?
A friend of ours had a suggestion: Only let people publish five papers before tenure.
To be clear, the idea is not to let people submit only five papers for consideration for tenure. The idea is to let people publish only five papers before tenure. Publishing more than five would hurt your case.
Some people hate this idea. But some love it. We can’t help but notice some good sides. It could cause people to put more care into the papers they submit. It could reduce the number of papers being submitted for review. It could take the pressure off assistant professors to produce an ever-increasing number of papers before tenure. It could even reduce false alarms, failures to cite past literature, and possibly scientific fraud.
What do you think?
A simplistic solution. In philosophy, people are often promoted with two or three papers, or one book. In medicine, single-study papers with an author list as long as the article are traditional. These traditions (especially the first) have justifications within each field.
I think that one thing that would really help is to stop evaluating authors by the number of citations of OTHER articles in the journals that publish their papers. This criterion encourages authors to go down the pecking order of journals, leading to multiple rejections, wasted reviewer time, false alarms in which poor papers get into journals that were once prestigious, and delays in the publication of some very interesting work. Promotion committees should read the papers, and read what other people have said about them in print, if anything. Google Scholar helps.
May 18, 2016 @ 1:26 pm
Would you still have a clock? Because then it might get tricky. As a rookie, I’d want to publish the first 3-4 papers as quickly as possible and wait to see if I could hit a home run with the fifth paper. If not, I’d just try to get the fifth paper in anyway so I could at least have a shot, and move with tenure somewhere else if necessary.
It seems excessive to attribute the large number of low-quality papers to pre-tenure researchers. There are many more post-tenure researchers out there, several of whom may not have raised their game after the initial outpouring of junk that brought home the bacon.
May 18, 2016 @ 5:58 pm
What’s our evidence? What’s a good tenure decision? What are the predictors? Number of papers, number of citations, number of coauthors, impact factor… These are all cues, but how valid are they? I’d go for one multi-experiment paper (Dan, you know less is more!), single-authored or co-authored with students/RAs. This would make for much more interesting discussions among hiring committee members, who would have to read the paper, and would give young researchers the luxury to focus on ideas rather than metrics…
May 19, 2016 @ 9:48 am
It’s a nice idea, but this seems awfully hard to implement. Part of the problem is that departments and schools have idiosyncratic tastes over the tenure decision. Any change in standards and norms must be based on a concerted effort from a unified group, presumably beginning with high-status institutions. Another problem is that tenure takes time, and any effort to “change the rules” seems unfair to those who are in the middle of the process and may have behaved differently if different incentives had been in place. It *seems* that a simpler coordination solution would be to get each field’s top journals to decrease the number of articles published per year, bringing acceptance rates down and thereby raising the bar. This would have an immediate and equal effect on everyone in a field. If you want to raise the publication bar, why not just raise the bar?
May 19, 2016 @ 11:55 am
Dear Editor,
Using a hatchet (an arbitrary limit imposed on the number of papers published) is a dismayingly irrational solution to this problem. How ’bout backing up and changing the standards for selecting reviewers of submitted research? Shoddy research remains shoddy even if it is “approved” by poorly qualified reviewers. This would seem to be a much more intelligent standard than plucking some arbitrary quantity out of an overcast sky.
Sincerely,
Maury Siskel
May 19, 2016 @ 2:41 pm
I agree with Jonathan Baron’s comments. Counting papers or evaluating them by the citation impact factors of the journals in which they appear seems crazy.
My friendly amendment to Dan Goldstein’s proposal would be to say that for a tenure decision we will only evaluate 5 of your papers (or better yet, three) — and then carefully read and discuss those papers.
In general, third-tier schools count papers, second-tier schools count “A”s and argue about what counts as an “A”, and first-tier schools read the papers carefully and consider what important ideas and findings have been introduced.
May 19, 2016 @ 5:53 pm
Just to be clear, it’s not Dan Goldstein’s idea, but his friend’s idea.
May 19, 2016 @ 6:35 pm