Two researchers from the University of California, San Diego recently found that more highly cited social science studies were also more likely to fail replication. To determine whether a study's popularity was related to its validity, they gathered data on 80 papers from three projects that had attempted, with varying success, to replicate important social science findings.
While a failure to replicate doesn't necessarily mean a study's outcome is incorrect (research methods evolve, and participant populations can shift culturally over time), it should make one think carefully about whether a study with a sensational title, a high citation count, or a lot of re-tweets is reliable. A recent article in Science included comments from other scientists expressing concern. Computational biologist Thomas Pfeiffer said in the article that extra safeguards are needed to bolster the credibility of published work, such as a higher threshold for what counts as good evidence and more emphasis on strong research questions and methods rather than flashy findings.
In a world of information and disinformation, it is good that researchers are examining the tension between the pressure to publish and be cited and the effect that pressure may have on the strength of the evidence selected for inclusion. And those of us who consume these social science studies may need to be even more wary of a catchy headline.
This post very politely avoids the issue of researchers' confirmation bias (intentional or unconscious), which often lies behind questionable research conclusions and sensational titles. Many researchers go into a project with preconceived notions of the results, and when the data fail to support those ideas, the temptation to "cook the books" can be too alluring to resist.