Over the past decade, the National Student Survey (NSS) has provided important insights into students’ perceptions of, and satisfaction with, their educational experience at university.
The NSS has many detractors in the sector and one can understand why. Concerns about it are even more widespread now that data from it is feeding into the Teaching Excellence Framework (TEF), despite the fact that there’s no proven link between student satisfaction and teaching excellence.
However, the NSS has succeeded – perhaps more effectively than some of us might like – in revealing problems with assessment in Higher Education (HE). Whilst students are often relatively happy with the standard of teaching, the quality of learning resources, and the wider learning environment, assessment is an area where NSS scores have consistently been notably lower. The cynics amongst us might argue that this is entirely predictable and unavoidable, since assessment is so closely linked to marks and marks drive degree classifications (outcomes).
But assessment doesn’t have to be the proverbial ‘weak link’ in the student experience. As experience at ARU clearly demonstrates, targeted development and interventions focused around enhancing the design of assessment, feedback, and marking practices can fundamentally change student satisfaction with this aspect of their experience.
The truth is that, as a sector, we deserved some of the criticisms and low scores awarded to us by our students in the NSS. Too often assessment had been allowed to become overly conservative, boring, meaningless, and divorced from the kinds of knowledge or skills that students might require to function as successful professionals in the real world of work. Too often we were relying on a narrow range of assessment types. Too often we made poor use of summative assessment, giving students feedback at a point in time when it was of no use or relevance to them.
In certain subjects, academics were occasionally guilty of massive over-assessment, with students forced to complete so many summative assessments that there was little if any time available to do anything else. All too frequently, academics were labouring under the false assumption that students wouldn’t engage with an assessment task of any kind unless marks were attached to it. This resulted in the over-utilisation of summative assessment and an almost complete absence of formative assessment – the kind of assessment that allows students to engage with learning-focused activities without fearing the award of a mark at the end. In other words, assessment tended far too often to function solely or primarily as a measure of learning, rather than as a means of building learning. To make matters worse, marking criteria or marking systems and the means of moderating them often lacked transparency for students. Feedback was often phrased in such a way that it was difficult or impossible for students to know what they needed to do to gain better marks in the future.
Regardless of the many limitations of the NSS, this annual survey has revolutionised sector practice, and galvanised concerted efforts within the academy to improve assessment design, marking, transparency, and feedback mechanisms. One doubts that, in the absence of the NSS, these improvements would have been driven with the same speed, urgency and senior-level support. The sector has rapidly and impressively improved, and our students are the beneficiaries.
One area where change has been most notable, and also very welcome, is in the area of formative assessment. Formative assessment and formative learning activities can be a powerful driver for learning, whilst allowing students to express themselves and their learning in a low or zero-risk context. Building formative assessment activities that prioritise active engagement and collaborative learning, rather than the award of marks, not only helps to build deep learning, but has other benefits. Recent research in the sector has shown that the fear of failure has become a major factor in the declining mental health of students. They are so concerned with the perceived negative impact of poor marks (even when the marks received are not really poor at all) that assessment becomes a demotivating and destructive process. The love of learning for its own sake is replaced by anxiety, fear and unhealthy study strategies aimed merely at working towards the test and getting a good mark, rather than maximising integrative learning. By removing the fear of failure, and focusing on assessment as a driver for learning rather than a measure of learning, formative assessment can help to promote healthy attitudes to study, and positive study strategies, whilst also helping to promote mental wellbeing. Of course, summative assessments of one kind or another are still needed as a way of driving classification, but effective use of formative assessment can help to minimise the anxiety that is so often associated with it.
Formative assessment has other advantages. It provides opportunities for academics to give students feed-forward – i.e. feedback that helps them to adjust their study strategies or to focus on extending their reading etc. prior to submitting their final summative or graded piece of work. It also provides academics with feedback on student progress and on whether concepts, ideas, theories or arguments have been understood, enabling them to put in place 'scaffolding' or targeted interventions when necessary. In other words, it is just as useful to us as it is to our students. Both parties benefit.
So, consider the balance between the number, kinds and purposes of formative and summative assessment on your own modules. How much activity around assessment is focused on building learning rather than measuring it? Do your summative assessments need to be summative? Can some of them be used more effectively as formative activities? Freed from the need to award marks, does the concept of the formative, low-risk assessment open up creative opportunities to do something different, exciting, even revolutionary? Is each of your summative assessments accompanied by a preceding formative exercise which allows your students to experiment with their understandings of the key content and access feed-forward that helps them to refine their learning and the strategies they utilise to demonstrate it? Think about the feed-forward you provide in your formative assessments. Is the feedback clear, transparent and focused on things that students can actually action – i.e. take deliberate steps to improve their practice and their summative outputs?
Sometimes you’ll come into contact with colleagues who hold the view that students won’t engage with formative assessment or assessment activities of any kind unless they count via the award of a summative mark. This is simply not true. My own experience, and that of thousands of academics across the sector, is that students are entirely happy to engage with formative assessment where its purpose and rationale is clearly understood and where its value is explained in relation to the subsequent summative assignments they will be set. To put it bluntly, academics are too quick to blame students for not engaging with formative assessment when, in reality, it’s more likely that this reflects a problem in the way that it has been designed, explained, implemented or marked. Relevance is critical.
Fortunately, there are lots of examples of creative and dynamic formative assessment that lecturers can adapt or adopt, irrespective of the subject they teach. See, for example, the Formative Assessment Toolkit that I developed with the assistance of colleagues at the University of East Anglia. Another great resource is the CADQ Guide on Formative Assessment developed by Nottingham Trent University, as well as the excellent resources on Formative Assessment provided by the Higher Education Academy.
This blog was originally published on SEDA. Read the original.