When assessments fail the test
On 15 January 2021, a story on the BBC website reported on an A-Level student who had started a campaign to ‘scrap any external exam assessments’. According to the report, “he was concerned the ‘quality of learning’ for pupils had been so varied that it would be unfair to expect students to sit externally set assessments”. The student added: “there needed to be a system that considered the ‘varying circumstances’ in which pupils were studying”.
This is an admirable endeavour by this young man and, in response to the global pandemic, his position seems eminently reasonable. It does seem unfair to give the same externally set examinations to students whose quality of learning has varied and who have had to study under shifting and differing circumstances. That is why he is calling for “grades to be decided by teachers”, as was done for all GCSE and A-Level assessments last year throughout the UK.
Wait a minute. Unfair to assess students who have had different quality learning experiences and who come from varying circumstances? But isn’t this what GCSE and A-Level exams do and have done every year? The evidence is obvious. In fact, the Department for Education’s published research consistently highlights the effect that learning experiences and circumstances have on assessment outcomes. And these findings have not changed since the Effective Pre-School, Primary and Secondary Education (EPPSE) project started in 1997.
“Overall, the latest results confirm and extend earlier EPPSE findings. The life chances of some children are shaped by important individual, family, home and school experiences from an early age. There is no level playing field at the start of school or in later phases. These effects of disadvantage emerge at a young age and measures of individual student, family and neighbourhood characteristics continue to shape students' later academic outcomes through subsequent phases of their school careers. It is widely recognised that England has a large equity gap in achievement in international comparisons and that life chances and social mobility are highly stratified”. (DfE, EPPSE, 2014, p. 20)
Is the problem that our assessment system is unfair? If we dare to admit that it is, or might be, then what is the alternative? How else can we select students for university, apprenticeships and employment? There is growing international recognition among both universities and employers that our conventional ways of assessing students are flawed at best, and are often unhelpful indicators of competency and potential. Why do so many students drop out of university or change their courses? Why do employers frequently have to spend so much money and time on training new recruits?
The experience of Big Picture Education Australia is that students who have tested their interest in a field while at school are more likely to choose their future well and to excel at university, in further training or in a career. With the University of Melbourne, Big Picture Education Australia has piloted and proven a breakthrough learner credential that is now accepted by 18 universities. Developed with Big Picture Learning schools across the world in mind, the International Big Picture Learner Credential (IBPLC) is a personalised assessment that evaluates and recognises the capacities, experiences and qualities of graduates more comprehensively than traditional exam-based certification systems. The impetus is to put the ‘person’ back into educational assessment, so that young people exit schooling with a rich, customised portrait of their abilities that offers meaningful, accessible information to end-users in the wider community, while allowing students significant agency in the way they are represented.
The IBPLC is a flexible assessment tool providing a fair, equitable and culturally unbiased evaluation of student knowledge and competencies. Because no two students have the same interests, a personalised approach to final-year assessment is used, one that fairly and adequately portrays a student’s distinctive learning, achievement, competencies and potential.
Uncoupled from traditional subjects, the assessment measures students against six national Assessment Frames that describe developmental progressions in the areas of: Knowing how to learn, Empirical reasoning, Quantitative reasoning, Social reasoning, Communication and Personal qualities. Student results are represented in a digital transcript known as a Learner Profile, supported by student-curated interactive links. The credential is warranted by the Assessment Research Centre at the University of Melbourne and validated by assessment and psychometric experts.
In view of the current assessment dilemma in the UK, which many students, parents, educators and employers consider a crisis, and in the face of clear evidence of the unfairness of GCSEs and A-Levels, it would seem wise and timely to consider the merits and applications of the IBPLC. Our assessments are failing the Covid test. They have been exposed as inadequate and inflexible across the educational systems of the UK in response to the pandemic, and even more strikingly in their inability to evaluate the competencies, knowledge, talents and potential of students whose quality of learning has varied and who have had to study and learn under different and inequitable circumstances. The IBPLC is flexible and adaptable, provides a personalised and holistic approach to assessment, and is already in use, recognised and accepted by universities and employers in Australia. This is the future of schooling and assessment: the long-awaited vaccine, offering hope and a true alternative for our A-Level student campaigning for external assessments to be scrapped.
Dr. Scott Boldt is an Associate of Leeds Beckett University. Based in Belfast, he holds an M.Ed. from Trinity College Dublin and a Ph.D. from University of Jyvaskyla, Finland. His research interests include Curriculum Development and Assessment, Alternative Education, and Learning Theory.