By Mark Learmonth.
Who are we talking to when we write our articles? Does our research make any difference to the world ‘out there’, or are we talking exclusively to fellow academics? The UK government has taken the line that, too often, academics have simply been talking to one another in their research papers. So they are actively encouraging us to try to make our work matter outside academia, and the impact of our work is now measured officially. In this measurement exercise, impact is defined as: “an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia.” Indeed, institutions are now being rewarded (both in cash and in increased reputation) for being able to demonstrate this kind of impact on the world. Here’s my own personal take on some of the key debates.
The Research Excellence Framework
Impact was measured for the first time as part of the 2014 Research Excellence Framework (REF), the UK-wide system for assessing the quality of research in UK higher education institutions. The REF is an assessment which: “provides accountability for public investment in research and produces evidence of the benefits of this investment … [it also provides] benchmarking information and establishes reputational yardsticks, for use within the higher education sector and for public information”. This means, among other things, that the quality of the research conducted in each institution – and within their different schools and departments – can be ranked against one another using a common metric. My business school in Durham, for instance, came 20th out of the 100-odd business schools in the UK. In other words, REF matters, and it matters a lot! Impact was a significant factor, counting for 20% of our overall score. One implication of REF mattering so much is that everything must be officially defined in great detail – including what counts as impact.
The impact of red tape
I won’t bore you with the minutiae of the regulations. It’s enough to say that impact was measured through schools producing case studies written according to pre-defined criteria. A key challenge was to demonstrate convincingly that the “effect on, change or benefit” we were claiming for our research was in fact linked directly to that research. This was no easy task, given how multi-faceted any such change is likely to be. Even when, in common-sense terms, research had clearly had an impact, we could not always make our story fit the formal requirements set out for impact case studies.
The impact of impact
It is interesting to reflect on the cultural changes that the UK’s experiment with impact (and there are certainly no plans to abandon it) may have brought about. The worst fears of the naysayers have not come to pass. Even though impact counts for 20% of overall REF scores, the case study format (for all its faults) has at least meant that, in practice, only a small handful of research articles need to have had impact in order for schools to score highly. So, at least as far as the REF is concerned, blue-skies research can continue much as before. Furthermore, the recent Stern Review, an evaluation of REF 2014, has recommended significantly broadening the criteria used to measure impact in order to address some of the acknowledged difficulties with the current approach. And although some academics remain cynical about the whole issue, most of us are buying into the agenda, at least to some extent. After all, does anyone really want to conduct research that never influences anything (other than, perhaps, getting a handful of other academics to agree with us)? I, along with most of my colleagues, now have a section on my curriculum vitae headed “impact” in which we suggest how our research might matter to the wider world.
Would I recommend “impact” for Denmark?
Personally, I’ve changed my views about impact since 2009. Like a lot of other academics, I’m naturally suspicious of governments imposing anything on us. Still, overall, I am now pretty positive about the impact of impact. The doomsday scenarios about the end of blue-skies work and neo-liberal appropriation have not come about. On a more positive note, the impact agenda has helpfully raised the question of why we do the work we do, and made us think about who might be interested in it. I now find myself turning some of my academic articles into blogs for a general audience, in part as a potential “pathway” to impact. Here’s an example. So, as long as it’s done sensitively and in consultation with the academic community, I don’t think you would have much to fear from the impact of impact were something similar ever to be introduced in Denmark.
Mark Learmonth is Professor of Organisation Studies/Deputy Dean (Research) at Durham University Business School. He spent the first 17 years of his career in management posts within the British National Health Service. Prior to taking up his post in Durham, he worked at the universities of Nottingham and York. You can follow him on Twitter.