Dr John Harrison, Associate Editor of Regional Studies, offers his advice to authors on maximising the impact of their work, and the most useful metrics for measuring it.
Personally, I disagree with the widely circulated mantra of “publish or perish”. Today, you can still publish and perish, because there is now so much more published work than there was 1, 2, 5, 10 or 20 years ago, and it’s more accessible than ever before.
In this publishing climate, the question for authors – and one which editors increasingly focus on – is: who is going to be interested, and why? To make an impact, authors need to make sure their work has an audience (i.e. is on a topic that many are interested in), and makes a contribution.
So my advice is this: it is not simply enough to say “I set out to research this, this is how I did it, this is what I found”. Work that makes an impact does much more than this. It reaches out and engages its intended audience: it says “Here is a significant issue of broad relevance, this is my contribution that represents new knowledge and deepens understanding, this is why it should be of interest to you”. In other words, all work that has an impact addresses the ‘so what?’ question.
At this point I will often ask people to think about what I refer to as the “80/20” maxim in academic publishing. This derives from my sense – backed up by some data and research I have done – that roughly 80% of citations (impact) in journals or book series come from just 20% of the total outputs. This means that only 20% of citations come from the remaining 80% of outputs. What does this tell us? That remaining 80% is where you can publish and perish. As an editor, this is what we are always looking for: can we spot, attract, and develop papers so we have more of the 20% type and fewer of the 80% type?
The advice to authors on this is: don’t leave it for the editor to decipher. As the author, always remember that you are the expert on this topic. If you are not clear and confident about the contribution and likely impact, the editor and the reviewers will not be either.
So how do you know if your work is making an impact? The obvious metric is citation data, but using it requires some attention and interpretation.
Citation data is read in three main ways. Firstly, people look at the raw number of citations that a publication or researcher has. But you are often not comparing like for like. The raw number does not account for the time since publication of an output, or for the number of years a researcher has been publishing and therefore accruing citations.
Secondly, people look at an author’s H-index. This is seen as a more rounded assessment, reflecting the breadth and depth of an author’s impact. Take two authors who have 1000 citations each from 30 published outputs. Both authors appear identical. But now imagine that Author A has an H-index of 20 (meaning 20 of their outputs have been cited at least 20 times), while Author B has an H-index of 12 (12 outputs have achieved at least 12 citations). Because the H-index measures both productivity and impact, Author A appears more consistent in the impact of their published work. Particularly in the early stages of an author’s career, the H-index can be a good indicator of a potential “one-hit wonder”.
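The H-index is straightforward to compute from a list of per-output citation counts. The following is a minimal sketch in Python; the citation counts are made up purely to reproduce the two-author scenario above (30 outputs and 1000 citations each, yet very different H-indices):

```python
def h_index(citations):
    """Largest h such that h outputs have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranking; h is the last rank whose citation
    # count still meets or exceeds the rank itself.
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Illustrative (made-up) citation counts: both hypothetical authors
# have 30 outputs and 1000 citations in total.
author_a = [45] * 20 + [10] * 10          # consistently cited work
author_b = [784] + [12] * 18 + [0] * 11   # one heavily cited output
print(sum(author_a), h_index(author_a))   # 1000 20
print(sum(author_b), h_index(author_b))   # 1000 12
```

Note how Author B’s single heavily cited output dominates their total – exactly the “one hit wonder” pattern the raw citation count hides and the H-index exposes.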
Thirdly, people look at author trajectory. This again can be important in assessing two authors with similar citation numbers or H-index scores, or two pieces of work with a similar number of citations. Clearly, if the same results have been achieved in half the time, that is a much more significant result.
The challenge is that citation data is the end point. The suite of currently available metrics also allows us to track how many times people have looked at, read, and used (cited) scholarly output. This can be extremely useful for tracing why published work does or does not have the desired impact.
We need to consider the rate of attrition: the numbers decrease noticeably from those who look, to those who read, to those who use. The metrics available give us an understanding of when the audience, and therefore the potential impact, is lost. The result is two burning questions and considerations for authors:
- Why do those who look not read? There could be an obstacle putting people off – most likely the title or abstract.
- Why do those who read not use? The work may not engage the reader – most likely because it fails to go beyond saying “I set out to research this, this is how I did it, this is what I found”.
From this we can see that the way to improve ‘impact’ is to increase the number at the ‘view’ stage of the process, and/or to minimise the rate of attrition. The former puts a premium on visibility; the latter a premium on quality.
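To make the attrition idea concrete, here is a minimal sketch with purely illustrative (made-up) numbers. It computes the carry-over rate at each stage of the view → read → use funnel, which is where the available metrics locate the point at which the audience is lost:

```python
# Illustrative (made-up) funnel counts for one published output.
funnel = {"viewed": 2000, "read": 400, "cited": 20}

# Compare each stage with the next to see where readers drop away.
stages = list(funnel.items())
for (stage, count), (next_stage, next_count) in zip(stages, stages[1:]):
    rate = next_count / count
    print(f"{stage} -> {next_stage}: {rate:.0%} carried over, "
          f"{1 - rate:.0%} lost")
# viewed -> read: 20% carried over, 80% lost
# read -> cited: 5% carried over, 95% lost
```

With these hypothetical numbers, the heaviest loss is at the view-to-read step – pointing, as above, at the title or abstract as the likely obstacle.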
Nevertheless, it is also important to think beyond citations because this metric only measures impact among peers within the academic community. It does not, for example, capture the impact of published work on students, or among the wider scientific community comprising both academics and non-academics.
The factors that affect the level of impact of a piece of work are complex. While metrics can help us understand some of the driving factors, there is no golden rule that will work for every piece of work, or every author’s career. It will be interesting going forward to see how the development of metrics will help shed light on the two fundamental questions highlighted above: why do those who look not read, and why do those who read not use?
Dr John Harrison is Reader in Human Geography at Loughborough University. He has published extensively in the multi- and inter-disciplinary field of urban and regional studies and is an Associate Director of the Globalization and World Cities (GaWC) research network.