Michael LaCour seemed to have a successful career ahead of him. He received his PhD in political science from UCLA this spring. Prestigious scientific journals published his papers. His CV lists a dozen awards and fellowships. Conference presentations, teaching experience, coverage in The Economist: it is all there.
The study he co-authored with Columbia University researcher Donald Green was dubbed one of the most important political science studies of 2014. It found that gay canvassers could change people's attitudes toward gay marriage through door-to-door campaigns, contradicting the findings of previous studies. All the major newspapers in the USA covered it. Meanwhile, LaCour announced on his Facebook profile that he had received a professorship offer from Princeton.
Then came the turnaround. Graduate student David Broockman wanted to conduct his own research based on LaCour's study. In the process, Broockman encountered peculiarities he couldn't explain. "Some small part of my head thought, 'I wonder if it was fake,'" Broockman told New York Magazine. The evidence pointed to a single conclusion: LaCour was lying. He has since been accused of misconduct in his previous studies as well.
It is hard not to conclude that research misconduct has gotten worse, a recent New York Times editorial stated. Indeed, over the last five years, numerous high-profile cases of scientific fraud have made headlines around the world. The problem stretches across disciplines and borders, from biology to sociology, from the Netherlands to Japan. Fraudulent studies provide a false basis for important decisions, such as how to treat illnesses. They erode trust in scientists and slow the wheels of science. But why do scientists cheat?
The U.S. National Science Foundation recognizes three types of research misconduct: fabrication, falsification and plagiarism. Fabrication refers, roughly, to making up data, falsification to misrepresenting it, and plagiarism to presenting the thoughts of others as one's own. For a practice to count as misconduct, it has to be committed intentionally.
A 2009 study showed that 2% of scientists admitted to having engaged in serious forms of misconduct at least once. The author, Daniele Fanelli of the University of Edinburgh, compared the results of 21 surveys of scientists from various countries and disciplines conducted between 1987 and 2008. Almost a third of the more than 11,000 surveyed scientists said that they had engaged in other questionable research practices. When asked the same questions about their colleagues, the numbers rose: 14% of researchers reported serious misconduct by colleagues, and 72% reported other questionable practices.
Five years ago, a widely publicized case of fraud rattled Danish academia. The story resembles LaCour's: Milena Penkowa, then a professor at the University of Copenhagen, won numerous awards and grants and received praise from state authorities for her work in neuroscience. Then evidence of fraud in her studies emerged after students had tried to replicate them. Danish journalist Poul Pilgaard was the first to report on the issue, for Weekendavisen. His reports helped raise awareness, and he has seen changes since. "The University of Copenhagen has now created a special course for PhD students regarding good scientific practice. For fun, the students call it the Penkowa course, because it was established after the Penkowa case," Pilgaard says. "It was decided by the University that this course should be obligatory for research education, so all PhD students have to attend it now," he explains. "They simply learn what is good scientific practice, how to conduct science in a proper way, and learn about scientific misconduct."
But the problem does not lie only in a lack of awareness of ethical guidelines. "In Penkowa's case, it was clearly a lie," Pilgaard points out. She knew what she was doing. And so did Michael LaCour; there was intent behind their actions. LaCour admitted to lying in his study, and Science, the journal that published it, eventually retracted it.
Retractions of studies from scientific journals due to misconduct appear to be on the rise. A 2012 study published in the Proceedings of the U.S. National Academy of Sciences showed that, since 1975, the percentage of biomedical papers retracted over suspected misconduct has increased tenfold. One of the authors, Grant Steen, used the same database to investigate the motivation for misconduct between 2000 and 2010. The results of that study, published in the Journal of Medical Ethics, indicated that the authors of papers retracted for misconduct had engaged in deliberate attempts to deceive.
Some types of bad scientific practice entail "soft" data manipulation, says Lise Wogensen Bach, vice-dean of the Faculty of Health Sciences at Aarhus University in Denmark. "There is a range where you go from misconduct to what is called 'grey area', where you just use sloppy methods, do not repeat your experiments and so on. And that's probably the most difficult thing to identify," Bach notes. In her opinion, despite the media attention they receive, extreme cases like Penkowa's are rare. "Very few people will do what she did, but many will, you know, cut corners – do wrong statistical methods, not replicate their experiments, not reference all the people, not look through the whole literature to see what has been going on. And that can be as harmful for science and its trustworthiness as what Milena did," says Bach.
Publish or perish
The corner-cutting that Bach describes is often associated with the imperative to publish as many articles as possible in renowned scientific journals. The "publish or perish" situation reflects the competitiveness among researchers, and it has fueled academics' concerns about research quality for years. "The funding of the University is in some part based on the number of publications that the University publishes in the right journals," Bach says. "It is an issue, because when we are looking at the applications for different positions, we are always counting the number of publications, we never look into the quality." In some countries, such as China, equating academic excellence with the number of published articles has led to "an industry of plagiarism, invented research and fake journals," an Economist article argued two years ago.
A Dutch survey showed that more than half of medical professors thought that the pressure to publish "has become too excessive," and nearly 40% said that it "affects the credibility of medical research." The study, published in PLoS One in 2013, also found that a quarter of the surveyed academics suffered from burnout, which correlated directly with the publication pressure they reported.
Meanwhile, the amount of grant money available to researchers has been declining relative to the number of researchers worldwide. The same applies to vacant professorships. "As a post-doc you want to be an associate professor, and as an associate professor you want to be a professor and so on," Lise Wogensen Bach observes. "And there are very few of these positions at the university level. So in that way there is pressure."
On the other side of the "publish or perish" imperative stand the publishers themselves. "These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research," the Nobel Prize-winning biologist Randy Schekman wrote in his Guardian column.
Professor Daniele Fanelli also studied trends in academic publishing between 1990 and 2007. After analyzing a random sample of 4,600 papers from all scientific disciplines, he found that the share of studies reporting support for their initial hypothesis grew by 22% over that period; the odds of finding a paper reporting a confirmed hypothesis increased by about six per cent each year. "There are a lot of negative results that can be turned into positive results by not using the proper statistical methods, and then it is easier to get it published than if you didn't get the positive results," says Bach.
Michael LaCour's positive-outcome study made headlines in all the major newspapers in the country, including the Wall Street Journal, the Washington Post, the New York Times, the Los Angeles Times and USA Today. Had the results been negative, Science would have been less likely to publish the study, let alone major newspapers to cover it. Dr. Ivan Oransky, co-founder of the blog Retraction Watch, commented on LaCour's case for the New York Times. "You don't get a faculty position at Princeton by publishing something in the Journal Nobody-Ever-Heard-Of," Oransky said. He pointed to the same issues that Lise Wogensen Bach observed at Aarhus University. "They don't care how well you taught," Oransky remarked. "They don't care about your peer reviews. They don't care about your collegiality. They care about how many papers you publish in major journals."
Competitive pressure on the rise
Even though competitive pressure seems to have increased worldwide, there are differences between countries. "If you work in the U.S., for example, in many cases, even if you have a position, you have to cover your own salary with funding," Bach says. "In Denmark you can be an associate professor paid by the University."
National differences appear on several levels connected to over-competitiveness and scientific misconduct. In 2012, Michael Grieneisen and Minghua Zhang published in PLoS ONE a comprehensive study of retracted academic papers. They found that China, India and South Korea accounted for more retractions than the USA, the EU and Japan between 1980 and 2010. In Asian countries, and especially China, retractions rose sharply from 2005 onward. From that year on, the proportion of retracted articles from the EU was consistently lower than that of the United States, which in turn was consistently lower than that of China.
Daniele Fanelli found similar trends in his study of the results of papers published between 1990 and 2007. His analysis shows that papers from Asia, including China, were more likely to report positive outcomes than those from the U.S., which, in turn, were more likely to be positive than those from Europe. Fanelli wrote that one possible explanation for the difference between the U.S. and Europe might be higher pressure to publish in the U.S. That explanation echoes what Lise Wogensen Bach said about the differences between Denmark and the United States.
But scientists around the globe experience pressure even before entering academia. "In today's China, educational competition is becoming increasingly fierce, and a competitive mentality is on the rise," Peking University professor Jiang Kai wrote in 2012. He argued that other Asian countries face similar problems: "Japan's 'examination hell,' South Korea's 'education fever,' and the private tutoring prevailing in Hong Kong and Taiwan all reflect the intensity of educational competition in these societies." In Jiang's opinion, however, Chinese students experience more pressure in secondary school than their counterparts in neighboring countries or the United States. "This kind of competition is excessive and moving toward the extreme. Competition, once a normal phenomenon in the field of education, has become severely distorted," Jiang wrote.
A study by Shengming Tang, published in The Journal of Psychology in 1999, offers some support for Jiang's claims. Tang investigated whether college students in the U.S. and China rely more on competition or on cooperation as a success strategy. The results indicated that Chinese students relied on competition to a significantly greater extent than their U.S. counterparts.
The U.S., however, has experienced increased competition as well. Six years ago, American economists John Bound, Brad Hershbein and Bridget Terry Long summarized trends from different surveys and concluded that competition among high school students had increased dramatically over the previous decades. They explained that colleges had become more selective while demand for college education had grown.
Europe does not seem to experience the same levels of competition. An international group of education and health researchers studied perceived school pressure among high school students in Europe and North America from 1994 to 2010. Their study, recently published in the European Journal of Public Health, found that pressure was higher in the United States than in Europe, with "Germanic" and "Scandinavian" countries showing the lowest levels.
What to do?
In sum, these studies suggest that competition among high school and college students is higher than before in both the U.S. and China, and higher in China than in the U.S. Other Asian countries, such as South Korea and Japan, also appear to rank high. European countries, by contrast, show lower competition than the U.S., and thus also lower than China, Japan and South Korea. A parallel ranking emerges in the proportions of retracted articles and of positive results: in both cases, values are higher for China and Asia than for the U.S., and higher for the U.S. than for EU countries. Do excessive competition and questionable academic practice go hand in hand? Given the weight of publication counts in evaluating academic performance, it is plausible to claim so. On the other hand, in his 2010 study of the PubMed database, Grant Steen found that papers from China were no more likely to be fraudulent than others; by his data, papers by U.S. authors were the most likely to be fraudulent. But the central concerns regarding academia, in China and worldwide, remain.
"Under the misguidance of excess competition, many schools do not make it their objective to develop the whole person, but rather to achieve a good exam ranking," Jiang Kai wrote. He suggested a shift in approach: rethinking the roles of cooperation and competition. "Social development today should move beyond the era of competition into an era of cooperation, especially in terms of education," Jiang concluded.
Lise Wogensen Bach also believes that changes are needed. First, we should focus on the quality, rather than the quantity, of published papers, she argues. Beyond that, we need more discussion and reflection among scientists themselves. "Researchers should ask each other – have you replicated your experiment? Did you get the same results after replication? Did you have the right numbers in the experiment? Did you exclude anything? We have to have these discussions with our peers and research groups, and make these reflections about our methods and findings," Wogensen Bach said.
Michael LaCour never became an assistant professor at Princeton, and he never will. His career is over. But many careers are still unfolding, and much research remains to be done. Unless we change the way scientists are evaluated, rewarding quality over quantity and cooperation over competition, more people may be tempted to "cut corners" just to get their work published. And that would come at a price.
Drasko Vlahovic holds a BSc degree in Journalism and a Spec. Sci. degree in Political Science. He has worked in online media, radio and a think tank.