Leeds Beckett University - City Campus,
Woodhouse Lane,
LS1 3HE
What will REF2029 look like?
The Research Excellence Framework (REF) is the UK’s national research assessment exercise, designed to ensure accountability for the use of public research funding. The results of the REF are also used by the four Higher Education funding bodies to inform the allocation of approximately £2 billion per annum of recurrent Quality-Related (QR) research funding. In this blog post, Professor Silke Machold, Pro Vice Chancellor for Research and Innovation, shares what we currently know about REF2029, what this means for LBU, and how we are planning and preparing for our submission.
Here at Leeds Beckett University, we receive £4.6 million annually through the REF – an increase of £2 million, or 80%, compared to the funding we received following REF2014. This increase was the result of the higher quality of research submitted to REF2021, along with more colleagues being included in the submission.
The REF is based on the principle of assessment by peer review. There are 34 Units of Assessment, organised into four Main Panels – A Medicine, Health and Life Sciences; B Physical Sciences, Engineering and Mathematics; C Social Sciences; and D Arts and Humanities. Each Unit of Assessment comprises subject experts appointed by the Funding Bodies to contribute to the development of the assessment criteria and to carry out the assessment.
We submitted 15 Units of Assessment to REF2021 – spanning all four main panels. And four of our colleagues participated in the work of the REF2021 Unit of Assessment panels – three as assessors (in UoA4 and UoA24) and one as part of the REF secretariat.
Following the conclusion of each assessment round, the funding bodies commission a review of the exercise, which then informs future assessments. The Future Research Assessment Programme (FRAP) reviewed REF2021 and reported its findings in 2022 and 2023.
So, what will the next REF – likely REF2029 – look like? The answer is, we do not yet know. Whilst some initial decisions have been announced, other aspects are still subject to consultation, are being piloted, or are yet to be translated into policy documents.
But we have an idea about the direction of travel for the next assessment. There are three things that stand out for me.
First, the Funding Bodies confirmed that the assessment will continue to be made through peer review.
This was not a foregone conclusion given the advances in and possibilities of Artificial Intelligence, combined with the current cost of the exercise for the Funding Bodies and universities alike.
But a unique experiment conducted by researchers at the University of Wolverhampton, which used different AI models to predict the actual REF2021 panel assessment scores, showed that the accuracy of the AI models was highly variable depending on the Unit of Assessment and the AI model used.
Whilst peer review is not free of biases, it continues to be the best method we currently have for evaluating the quality of research. Last year, we conducted our first REF stock take, in which we trialled peer review via Symplectic Elements, and we look to embed this further over the coming years. We have also signed the San Francisco Declaration on Research Assessment (DORA) to help us further embed principles of open and responsible assessment.
The second element of REF2029 that stands out for me is the change in emphasis away from the assessment of research conducted by individuals in subject areas and towards a broader assessment of institutions.
The REF2021 pilot assessment of the institutional research environment is likely to become a part of the assessment for People, Culture and Environment, which will also be weighted more heavily than the previous Research Environment assessment.
No longer will institutions submit lists of colleagues aligned with each Unit of Assessment. Instead, data from the annual Higher Education Statistics Agency (HESA) staff return will be used to determine the volume of outputs and impact case studies to be submitted, with no minimum or maximum number of outputs ‘assigned’ to individual colleagues.
On the face of it, these changes should ease pressure on individuals and reduce the burden of the exercise on institutions.
But important questions remain. Especially in large universities like ours, there are differences in the maturity of research environments and cultures. Without wanting to get too technical – how would differences in unit-level and institutional environment be reconciled to produce unit-level scores?
And whilst few would disagree with the sentiment that institutions should submit outputs and not staff, it is researchers who produce the outputs, not institutions. This creates technical issues, such as which outputs are eligible for submission, but more importantly it raises ethical questions. Should universities select what are considered to be the highest scoring outputs only, regardless of who authored these? Or should the outputs submitted be in some form representative of the breadth and depth of research and researchers? These are matters that we as a community need to work through in the coming months and years.
The third and final change that stands out for me is the increasing weight assigned to People, Culture and Environment (PCE).
Whereas in REF2021, outputs counted for 60% of the score and environment for 15%, the proposal is to decrease the weighting of the output score to 50% and increase the (revised) PCE section to 25%.
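To make the arithmetic concrete, here is a minimal sketch of how the proposed re-weighting would shift an overall quality profile. The sub-profile scores are entirely hypothetical, and the sketch assumes the impact weighting stays at 25%, as under REF2021.

```python
# Illustrative comparison of REF overall score weightings.
# All sub-profile scores below are hypothetical, not real data.

REF2021_WEIGHTS = {"outputs": 0.60, "impact": 0.25, "environment": 0.15}
REF2029_WEIGHTS = {"outputs": 0.50, "impact": 0.25, "pce": 0.25}  # proposed

def overall_gpa(scores, weights):
    """Weighted grade point average across sub-profiles (0-4 quality scale)."""
    return sum(scores[name] * weight for name, weight in weights.items())

# A hypothetical unit with strong outputs but a weaker environment score.
scores_2021 = {"outputs": 3.4, "impact": 3.0, "environment": 2.6}
scores_2029 = {"outputs": 3.4, "impact": 3.0, "pce": 2.6}

print(round(overall_gpa(scores_2021, REF2021_WEIGHTS), 2))  # 3.18
print(round(overall_gpa(scores_2029, REF2029_WEIGHTS), 2))  # 3.1
```

As the example shows, a unit whose environment score lags its outputs would see its overall profile fall under the new weighting – which is exactly the incentive the change is designed to create.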
There is a lot to like about this. We know from research conducted by funders such as the Wellcome Trust (2020) that competitive pressures driven by inappropriate metrics, poor working environments, and lack of recognition and support are detrimental to both researchers and research. Increasing the weighting for PCE sends a powerful signal to the sector that the UK’s main research funders value creative, supportive, and inclusive environments.
But there are problems. Most obviously, as researchers from fields such as organisation studies can testify, culture is a tricky concept to validly and reliably assess. The Royal Society’s definition of research culture as encompassing “… the behaviours, values, expectations, attitudes and norms of our research communities” really brings to the fore its complexity.
Striking the right balance between inputs to research culture (time, money, expertise), processes, and their outcomes – as well as between the robustness of the assessment and the burden it places on institutions to collate evidence – will be a key question for the current PCE pilots to address.
In early summer, we ran our own Research Culture survey to hear from colleagues across LBU where we need to focus our energies to create a better research environment, and we will share the results shortly.
So, if there are so many unanswered questions about the next REF, how can we sensibly plan and prepare for our submission?
Well, the best thing we can do is focus on doing high quality research. We have many projects underway that are highly original, use innovative methods and make a positive difference to people’s lives. And there are many research group, school, and university-based opportunities where colleagues share their knowledge and experiences, learn, and collaborate.
If you want to get involved, contact the team or email me directly.
Professor Silke Machold
Silke Machold, PhD, is the Pro-Vice Chancellor Research and Innovation at Leeds Beckett University.