Earlier this week, on 17th December 2014, the UK’s higher education community awaited the publication of results from the “Research Excellence Framework” (REF). Universities’ core research funding is expected to depend on the REF, which assessed their research over the period 2008–2013. The REF uses a star-based rating system: the highest category, 4*, denotes world-leading research, while 3* denotes internationally excellent research. (Read more about what the REF is on TheConversation.com.)
At midnight the results were released, and on 18th December the academic and wider UK media were full of stories about the REF. Interest and engagement spread across social media as well as the traditional press, at a level that should dispel any notion that academics do not use social media!
On Twitter, activity really took off: on 18th December alone, there were hundreds of tweets with the hashtag #REF2014, from organisations and individuals, expressing a variety of perspectives.
The blogosphere also provided commentary. Dorothy Bishop of the University of Oxford used HEFCE’s open data to predict how funding might be allocated across institutions engaged in psychology research, based on a former RAE formula (the RAE was the predecessor to the REF). Jo Wood at City University created a great, clickable visualisation of the REF 2014 results, and Rebecca Titchiner compared REF scores with RAE scores in another visualisation.
Some commentators focused on identifying those who came out of the assessment well, while others suggested that the exercise itself is flawed. Many universities’ official channels claimed that their institution or department had done particularly well, prompting comments to the effect that we can now evaluate how good each institution is at “spin”. Inevitably, “REF fatigue” showed through in some tweets before the day was through.
After these results, no doubt there will be plenty of ongoing reflection on how researchers can improve the impact and international significance of their research, to reach world-leading standards. International collaboration enables researchers to learn from each other, just as research that crosses disciplines can apply the best aspects of each discipline to the theory under investigation. Crossing such boundaries is exactly what Piirus supports researchers to do.
I’m always curious to hear about other ways in which research performance is measured on a national scale. Do share your thoughts on “how do you measure research?” with us below (Leave a Reply) or tweet @piirus_com.