
The Pressure to Publish Pushes Down Quality


Scientists must publish less, says Daniel Sarewitz, or good research will be swamped by the ever-increasing volume of poor work.

I am pleased to announce that as of the middle of April, my Elsevier publications had received 30,752 page views and 2,025 citations. I got these numbers in a promotional e-mail from Elsevier, and although I’m not sure what they mean, I presume that it would be even better to have even bigger numbers.

Indeed, the widespread availability of bibliometric data from sources such as Elsevier, Google Scholar and Thomson Reuters ISI makes it easy for scientists (with their employers looking over their shoulders) to obsess about their productivity and impact, and to compare their numbers with those of other scientists.

And if more is good, then the trends for science are favourable. The number of publications continues to grow exponentially; it was already approaching two million per year by 2012. More importantly, and contrary to common mythology, most papers do get cited. Indeed, more papers, from more journals, over longer periods of time, are being cited more often. One likely reason for rising citations is the incredible search capabilities that the web now affords. This would seem to be good news.

But what if more is bad? In 1963, the physicist and historian of science Derek de Solla Price looked at growth trends in the research enterprise and saw the threat of “scientific doomsday”. The number of scientists and publications had been growing exponentially for 250 years, and Price realized that the trend was unsustainable. Within a couple of generations, he said, it would lead to a world in which “we should have two scientists for every man, woman, child, and dog in the population”. Price was also an elitist who believed that quality could not be maintained amid such growth. He showed that scientific eminence was concentrated in a very small percentage of researchers, and that the number of leading scientists would therefore grow much more slowly than the number of merely good ones, and that would yield “an even greater preponderance of manpower able to write scientific papers, but not able to write distinguished ones”.

The quality problem has reared its head in ways that Price could not have anticipated. Mainstream scientific leaders increasingly accept that large bodies of published research are unreliable. But what seems to have escaped general notice is a destructive feedback between the production of poor-quality science, the responsibility to cite previous work and the compulsion to publish.

The quality problem has been widely recognized in cancer science, in which many cell lines used for research turn out to be contaminated. For example, a breast-cancer cell line used in more than 1,000 published studies actually turned out to have been a melanoma cell line. The average biomedical research paper gets cited between 10 and 20 times in 5 years, and as many as one-third of all cell lines used in research are thought to be contaminated, so the arithmetic is easy enough to do: by one estimate, 10,000 published papers a year cite work based on contaminated cancer cell lines. Metastasis has spread to the cancer literature.
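To make that back-of-envelope arithmetic explicit, here is a minimal sketch in Python. The inputs are illustrative assumptions rather than figures from the article (the annual number of cell-line studies in particular is invented), chosen only to show how a one-third contamination rate and a few citations per paper per year compound into citations on the order of ten thousand a year.

```python
# Back-of-envelope version of the arithmetic above.
# All inputs are illustrative assumptions, not figures taken from the article.

papers_using_cell_lines_per_year = 10_000  # assumed annual number of cell-line studies (hypothetical)
contaminated_fraction = 1 / 3              # roughly one-third of cell lines thought contaminated
citations_per_paper_per_year = 15 / 5      # 10-20 citations over 5 years, so ~3 per year

contaminated_papers = papers_using_cell_lines_per_year * contaminated_fraction
annual_citations_to_contaminated_work = contaminated_papers * citations_per_paper_per_year

print(f"Papers per year built on contaminated lines: {contaminated_papers:,.0f}")
print(f"Citations per year to that work (roughly one citing paper per citation): "
      f"{annual_citations_to_contaminated_work:,.0f}")
```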

“The enterprise of science is evolving towards something different and as yet only dimly seen.”

Similar negative feedbacks occur in other areas of research. Pervasive quality problems have been exposed for rodent studies of neurological diseases, biomarkers for cancer and other diseases, and experimental psychology, amid the publication of thousands of papers.

So yes, the web makes it much more efficient to identify relevant published studies, but it also makes it that much easier to troll for supporting papers, whether or not they are any good. No wonder citation rates are going up.

That problem is likely to be worse in policy-relevant fields such as nutrition, education, epidemiology and economics, in which the science is often uncertain and the societal stakes can be high. The never-ending debates about the health effects of dietary salt, or how to structure foreign aid, or measure ecosystem services, are typical of areas in which copious peer-reviewed support can be found for whatever position one wants to take — a condition that then justifies calls for still more research.

More than 50 years ago, Price predicted that the scientific enterprise would soon have to go through a transition from exponential growth to “something radically different”, unknown and potentially threatening. Today, the interrelated problems of scientific quantity and quality are a frightening manifestation of what he foresaw. It seems extraordinarily unlikely that these problems will be resolved through the home remedies of better statistics and lab practice, as important as they may be. Rather, they would seem — and this is what Price believed — to announce that the enterprise of science is evolving towards something different and as yet only dimly seen.

Current trajectories threaten science with drowning in the noise of its own rising productivity, a future that Price described as “senility”. Avoiding this destiny will, in part, require much more selective publication. Rising quality can thus emerge from declining scientific efficiency and productivity. We can start by publishing less, and less often, whatever the promotional e-mails promise us.

 

References:

Sarewitz, D. The pressure to publish pushes down quality. Nature 533, 147 (12 May 2016). Retrieved from http://www.nature.com/news/the-pressure-to-publish-pushes-down-quality-1.19887

