In the third post in his series looking at barriers to effective research communication and uptake, Andrew Clappison turns his attention to the drive for evidence-based policy and the governance of evidence use: two key themes that could shape the future of research and its use, and that have huge implications for public policy more generally.
Governing research evidence: What you need to know about the big evidence debate
Evidence-based policy is in vogue this season. It’s the new black, or in research uptake circles, the new research comms. OK, this shift is not that new, but over the last five years the emphasis within UK government circles has changed from “effective research communication” to “evidence-informed policy-making”. While there has been this shift, in reality these two things are very interrelated (just don’t say this too loudly within any government department).
I’m not a policy wonk, so why should I care?
While it’s not always easy to see it, your research could well have the potential to improve policy and practice. In the opening blog to this series, I introduced a number of issues undermining the efficacy of research communication and uptake. Of these, ‘Evidence’, its applicability, quality, and accessibility, is undoubtedly one of the biggest stumbling blocks to effective research uptake.
If you are an early career researcher, you may not be a mover or shaker in policy circles, yet! The nature of your research may mean that you never are, but even so it could still be worth your while reading on, because it’s vital that, if you get the opportunity to engage with the world of policy makers, you understand what is being said. You may or may not need to know the difference between a Randomised Controlled Trial (RCT) and a Systematic Review, but understanding the frailties of the way evidence is systemised into policy could inspire your research communication work, or even the direction of your research.
Quality counts. What about Peer Review? Yeah, right!
Three painful words for seasoned academics: forget peer review. Sadly, peer review comes under much criticism when we start to address the quality of research. Unconscious biases such as ‘herding’, where reviewers are influenced by their peers, and ‘publication bias’, where positive results have a better chance of being published than negative results, are two often-cited examples of why peer review just won’t do[i].
In a well-known paper, John Ioannidis from Stanford University argued that most published findings are probably false. Drawing on 1,000+ citations in some of the top medical journals, he found that, of those whose results had been tested, 41% were in fact wrong or of limited impact[ii]. Undoubtedly, this problem is persistent and one that needs to be overcome. Sadly, peer review is not systematic or robust enough to offer the kind of quality assurance that policy makers can draw on with confidence.
What does good quality evidence look like?
If you have read anything on research quality recently, you might think that the only way to achieve quality is through an RCT or a similar method based on experimental or quasi-experimental design. (Sounds complicated, doesn’t it? Well, that’s because it is, and this is part of the problem.) I would be wary of falling into this trap, but for sure RCTs are seen as the ‘gold standard’ in terms of research quality. Just ask Ben Goldacre.
Ultimately, the emerging picture of what is deemed quality research holds RCTs in the highest regard. The two illustrations below are common representations of where different research methods sit in terms of quality. Where does your research sit?
Figure 1. Two illustrations of simplified hierarchies of evidence, based on the design of the study[iii]
We have the quality research, but how do we systemise it into policy?
RCTs may be an expression of quality, but having the evidence available doesn’t mean it is ready for use or has been tested for systematic use. In reality, we need to compile all the relevant evidence and test it. And how do the policy wonks suggest we do this? Systematic review, of course!
Systematic review represents a move away from the literature review, once so popular in policy circles. Literature reviews often fall foul of bias, with authors tending to cherry pick the evidence that best fits their own beliefs. We may do our best to try and be neutral, but social reality makes it hard to avoid bias!
Systematic reviews seek to remove bias, using explicit methods to identify and test what can be reliably said, in a way that could be replicated by others and further tested. This sounds wonderful, doesn’t it? Undoubtedly, bias can be ‘mitigated’ by choosing a robust design method (see the list of resources on design methods at the end of the blog). But it’s important to recognise that no research method is free from bias. When considering systematic reviews as suitable evidence for policy making, we also have to ask a few questions, such as:
- Is it really viable in terms of resources?
- Can policy really be formed through such a time-consuming approach?
- And what happens when policy needs to be made in a hurry (as it often does)?
The Alliance for Useful Evidence assures us that many systematic reviews are already ‘in the bag’. Really? Should we believe that? No, of course we shouldn’t. It’s not as if the system is there just waiting to be utilised. There are libraries, such as those curated by the Cochrane and Campbell Collaborations, that do hold a large number of reviews, but we are a million miles away from having a ‘one stop shop’ of reviews, or from being able to roll out a systematic approach to integrating evidence into policy.
In fact, the focus on systematic reviews begins to fall apart a little in the face of reality. Policy decisions often need to be made quickly, and none of these approaches offers a ‘gold standard’ that works at the appropriate speed. Rapid Evidence Assessment (REA) (I know, it sounds like it’s been named by Q in a 1980s James Bond movie!) is one approach that is viewed favourably. Anything done in a hurry is unlikely to be terribly robust, but nevertheless it may well be better than alternatives such as cherry-picked literature reviews, and it is at least a recognised option.
The Randomista crowd rule, but evidence is still lost in the long grass
The Randomista crowd (aka RCT supporters) are undoubtedly in the ascendancy, but there is still a massive problem in terms of systemising the use of evidence in policy. This is not just about access, but also about the capacity to use and understand evidence. The rise of RCTs and systematic reviews has seen many government departments push for evidence-informed policy and shy away from their former approach of encouraging the somewhat universal communication and uptake of research. They don’t want to know about research now, unless it is deemed ‘gold standard’.
Rightly so, but government departments now need to bring back the research intermediaries (like myself! I acknowledge the bias, let’s move on…), make them trendy again, and use them to fill in knowledge gaps, to communicate good research, and to help support the effective use of evidence to build more effective policies.
To move forwards, all parties also have to acknowledge that RCTs, systematic reviews and the like are not a silver bullet. No evidence is formed free from social precepts; no perfect system exists, but we can improve the way we use evidence. We must also continue to question the evidence and make sure that institutions calling on it do not use it as a means of manipulating social discourse and building inaccurate narratives. RCTs, with their complex design methods, are alien to many researchers, policy actors and the general public, but we can’t let that become an excuse for poor communication.
Have your say
What do you think of the current focus on evidence? Are we getting it right when trying to systemise its use? Have your say by commenting on this post or sending us a tweet.
Further reading:
If you are interested in finding out more about research quality, the following sources offer useful guidance.
The Bond Evidence Principles and checklist – specially designed for NGOs
DFID – How to Note: Assessing the Strength of Evidence
HM Treasury – The Magenta Book: Guidance for Evaluation
Research Councils, Universities UK et al. – UK concordat to support research integrity
References:
[i] Easterbrook, P.J., Gopalan, R., Berlin, J.A. and Matthews, D.R. (1991) ‘Publication bias in clinical research’, The Lancet, 337(8746), pp. 867–872.
[ii] Cited in The Economist (2013) ‘Unreliable research; trouble in the lab’, 409(8858), p. 24.
[iii] Using Evidence for Success: A Practice Guide; written by Jonathan Breckon, edited by Isobel Roberts, Alliance for Useful Evidence, Nesta – This guide is particularly useful and explores many of the issues in this blog in further detail.