The Center for Open Science and promoting openness in research
At Taylor & Francis, we believe in making research more open. This is because adopting open research practices supports the transparency, reproducibility, and replicability of results. This in turn improves the robustness of research.
We therefore have a number of open initiatives in place to support open research. These include our open access publishing options and data sharing policies. And now we’re delighted to announce that we’re a Transparency and Openness Promotion (TOP) signatory. The TOP guidelines are a set of 8 standards that improve the transparency and openness of research.
We spoke to David Mellor, Director of Policy Initiatives at the Center for Open Science (COS) to hear his insights on open research, and how the TOP guidelines are helping move the open research landscape forwards.
How have researchers’ attitudes to openness in research changed over recent years?
Well, we know that researchers now recognize the extent of the credibility problems in most cutting-edge research. 90% of scientists agree that there is some degree of “replication crisis”. This is surprising because you don’t often see 90% of scientists agree on much!
I spend most of my time working with many different stakeholders on ways to increase replicability and transparency. These include researchers, authors, editors, funders, and publishers. Fewer and fewer seem to dispute that these problems exist. When most researchers have tried, and failed, to replicate the results of previous work, often their own, it becomes obvious that there is room for improvement.
Culture change is challenging, and our strategy requires collective action. No single journal, publisher, or researcher typically feels empowered to take the first step, because of the perceived risk of doing so. Recognizing that we can improve practice has been the great success of the past few years.
What are some of the ways in which COS promotes transparency and openness in research?
At the Center for Open Science, we have a three part strategy.
- We conduct large-scale replication studies in psychology, the social sciences, and pre-clinical cancer biology. These allow us to identify barriers to replicability and gain knowledge about unexpected factors that affect original findings.
- The lessons learned from these projects inform our policy efforts. These include the TOP Guidelines, Registered Reports, and Open Science Badges to recognize research that is more transparent than is typical. Editor’s note: find out more about Open Science Badges on Taylor & Francis journals here.
- We build and maintain the free and open source OSF (Open Science Framework) to support the researcher workflow. This platform gives researchers a collaborative space to manage research projects. It includes a registry, a data repository, and a preprint server. Our policy recommendations and our experience working with researchers to preregister or share their data directly inform the features and tools available to use. So it’s an important part of our overall strategy to influence scientific culture towards more transparency.
Why is this so important?
It’s hard to overstate how important this is. The process of science really depends on transparent sharing and evaluation of underlying evidence. Skepticism is a core value of the scientific community, and we cannot skeptically evaluate evidence if evidence is not available. The wider world depends on the process of science for making decisions. And the scientific community must do everything possible to be as credible as possible. There is no other way around these issues. While there are of course barriers to total transparency, our mission is to identify and overcome those barriers, not to let them weigh us down and stall progress.
What are the TOP Guidelines?
The Transparency and Openness Promotion (TOP) Guidelines are a set of 8 standards which improve scientific reporting. They do this by increasing clarity about the underlying research data and materials. We created them to serve as a toolbox for policymakers such as journal editors, publication committees, publishers, and funders who wanted to implement better policies but faced barriers such as a lack of community support, conflicting standards, or simply a lack of time to draft new language. Each of the TOP standards exists to solve a particular problem identified by those who have attempted to replicate previous research findings.
The standards on Data, Materials, and Analytical Code Transparency speak to the items generated over the course of the study and how you should preserve or share them. The standard on Design and Analysis Transparency points to discipline-specific reporting guidelines that remind authors to report key methodological details. The Preregistration standards emphasize the importance of keeping two modes of research distinct. One of these is Exploratory research, which seeks to make new discoveries and find unexpected trends. And the other is Confirmatory research, the purpose of which is to test a specific hypothesis. It is surprisingly easy to conflate the two modes of research and report the results of exploratory tests using tools designed for confirmation. Finally, the Replication standard makes it clear that direct replications are an important contribution to science, even though they’re traditionally harder to publish.
Each of the TOP standards can be implemented at one of three levels of increasing rigor: disclosure, requirement, or verification. This removes barriers to implementing some standards while serving as a roadmap for improvement in years to come.
What do you think are the biggest opportunities of open research?
We see the Registered Reports publishing format as having the largest potential impact on the way science can be improved. This initiative takes the strength of peer review, which is expert assessment of specialized content, and refocuses it earlier in the process of a study, before the results are known. Editors and reviewers assess the importance of the proposed research questions and the ability of the proposed methods to address those questions. They can then give high-quality studies “in-principle” acceptance. This is a promise to publish the results regardless of the outcome of the study. Knowing that null results are possible incentivizes reviewers and editors to think of ways to ensure researchers conduct their study to a very high standard, without the misleading check of whether or not the main results are significant.
Registered Reports represent a major opportunity to address publication bias and refocus our attention on what matters: the methods and the importance of the research questions. Authors love the format because accepted studies will be published as long as the study is conducted as promised. Funders love the format because they know the work that they support will get published no matter the outcome.
What do you think are the biggest challenges for researchers making their research open and transparent?
Researchers face pressures to publish, source funding, and get jobs. Right now, we often make these decisions using faulty indicators of scientific credibility. Deciding whether or not a study was important and well conducted is really hard; there are no shortcuts beyond critical evaluation and transparent assessment. But we use shortcuts all the time. The journal Impact Factor is the most widely known and misused metric for evaluating research rigor, yet that journal-level assessment does not relate to the quality of any individual study published in a high-IF journal. The only way to overcome it is to make it easier to evaluate the underlying evidence. This is why we work to make it easier to showcase transparent access to data, materials, or preregistrations. Transparency does not guarantee that the work itself is important or of high quality. But it does make it possible for someone to make that judgment.
How do Open Science Badges enhance the openness of research?
The badging program allows researchers to indicate when underlying content is available for others to use or verify. We see several benefits from this program: badging is associated with higher rates of data sharing even without open data mandates, which can be difficult to implement. Another interesting association is the higher quality of data sharing. Typically, an author can state that data are available without actually putting all of the related data into a recognized repository, perhaps mistakenly thinking that figures or summaries contain “all of the data.” When the badge criteria are in front of the author and an editor checks that the URL points to a repository, there’s more room to improve rates of actually, and not just reportedly, making data available. That’s an important distinction!
Finally, we see the badging program as a way to demonstrate how norms are shifting toward more transparent research practices. It’s easy to ignore the tide toward more open data and preregistrations, until you see reminders that peers are doing this every day, and are receiving recognition for these best practices.
Find out more about Open Science Badges on Taylor & Francis journals here.
David Mellor is Director of Policy Initiatives at the Center for Open Science. David received his Ph.D. in Ecology and Evolution from Rutgers University. His research interests have covered the behavioral ecology of cichlid fish, citizen science, and reproducibility. Find David online or on Twitter @EvoMellor.