Undergraduate students’ perceptions of the role and utility of written assessment feedback

Richard Bailey
Northumbria University, UK

Abstract

Student dissatisfaction with written feedback on assessed work in higher education is a topical issue, given the publicised findings of the National Student Survey. This paper presents excerpts from interviews with students about their experience of written feedback, together with analytical and interpretive commentary. The context of the research was a post-92 university with a wide range of higher education provision and a commitment to widening participation and student retention. The paper begins with an overview of feedback studies in higher education and a summary of current agendas. The data and analysis are presented in three sections, and a final discussion section outlines some salient findings from the research.

Keywords: written feedback; student learning and academic literacy; institutional practices

Written feedback: new practices and new concerns

Hounsell (2003) comments that the provision of feedback on students’ written work is in steep decline and enumerates the following causes: the semesterisation of the teaching year; the modularisation of courses to facilitate flexibility in curricula, with the condensation of taught components into semesters; the consequent end-loading of assessment; formal procedures around quality assurance (marking procedures and external adjudication); and the impact of large class sizes and increased marking loads. There are two major consequences: teachers have less time to write comments on students’ work, and there are fewer opportunities for tutorial interactions. In these conditions it is difficult for feedback to work optimally. For one thing, tutors are often sceptical that students will read and value their comments because, in a high-turnover and assessment-focused system, students are increasingly perceived as instrumentally motivated – focused on marks rather than the educational value of written comments (Higgins et al., 2002). Furthermore, assessment feedback in written form may be the only feedback students receive, and consequently they get little clear input from tutors on developing their overall academic literacy (Catt and Gregory, 2006): a responsibility that often devolves on non-embedded support staff (Orr and Blythman, 2003).

In the current climate of change and reform, the institutional response has been to standardise and systematise the teaching and learning context by introducing quality assurance measures in ever greater number. Structured feedback forms, or ‘pro-formas’, are increasingly used in the delivery of written feedback, especially on large undergraduate (semesterised and modularised) courses. The following factors have motivated this shift: a concern with greater transparency and equity in assessing students; achieving greater consistency across (and within) departments; and QAA requirements emphasising the formal articulation of criteria and learning outcomes (Hounsell, 2003). Critics point to a pervasive techno-rationalism in the processes and procedures of the academy, regulating and sanitising teaching and learning interactions (Orr, 2005) and enforcing uniformity through the replacement of ‘trust’ by documentation as ‘contract’ (Morley, 2003). Lillis (2006) points out that this state of affairs sustains and reinforces the monologic nature of pedagogical and communicative practices in teacher-student interactions: conformity and control are emphasised over contestation and negotiation.

The growing formative assessment literature in higher education espouses a constructivist paradigm of teaching and learning which is promoted as student-centred. For example, Hounsell (2003) distinguishes between ‘extrinsic’ and ‘intrinsic’ feedback: the former is transmitted to the student after a task is completed and would include written feedback; the latter is part of the learning experience of students while engaged in academic work and is therefore embedded and continuous. Nicol and MacFarlane-Dick (2006) outline seven principles of good feedback, in accordance with this notion, to empower students as ‘self-regulated learners’. Hounsell (2007) advocates ‘enhancing the congruence’ (Biggs, 1999, 2003) of feedback with curriculum goals and teaching, learning and assessment strategies. As Haggis (2003) points out, this is the sort of reasoning that is valued and rewarded in higher education, largely because it is backed by a long-standing research tradition and literature and claims that both teaching-learning contexts and the capabilities of students can be moulded to achieve and fulfil higher education goals.

The academic literacies approach (Lea and Street, 1998), on the other hand, focuses on student academic literacy development as situated and contextualised social practice. This notion challenges the assumption that language is a transparent medium and that meanings are uncomplicatedly transmitted from teacher to learner in feedback interactions (Lea and Street, 2000). Given that concerns over both the quality and the substance of written feedback persistently arise in consultations with the student body, academic literacies theorising adds an important dimension to researching and understanding the student experience. However, in the present climate of higher education, critical perspectives are marginalised in favour of the constructivism underpinning the research on formative learning and assessment.

The current situation raises questions about the role and capabilities of learning and study developers, especially those working directly with students in the capacity of study skills advisers. Learning developers are often cast in the role of demystifying the practices of higher education and mediating between course requirements and learning support needs. This extends (at least in the perception of many students) to the meanings associated with assessment criteria and the language of feedback, both of which have become increasingly intertwined. On the other hand, learning developers are not privy to the tacit and less tangible practices that are part of pedagogical interactions in the disciplines.

The research data

The research comprised semi-structured interviews with student respondents from a cross-section of disciplinary areas. Interviews were conducted with individuals and with small groups of up to three students. Some students were approached directly through contact with their subject teachers, while others were approached through the study skills centres. Participants were both traditional and ‘non-traditional’ (the latter often being mature-age returnees to education in applied and vocational subject areas). All student participants were undergraduates, some studying one- or two-year diploma courses in nursing and social work (represented by teaching staff in those curricular areas as equivalent to at least advanced degree level). Many in the sample were following joint degrees or modularised courses in applied and vocational areas. The departmental and disciplinary background of students is given in parentheses following each of the data excerpts. Additional information is also included in some instances using square ([ ]) brackets. Questions asked by the interviewer are italicised in the excerpts. Initially the following questions were asked:

What do you like/dislike about written feedback?
What do you find useful/less useful about written feedback?

The emphasis was on open and exploratory talk to allow respondents to consider their experience and perceptions more reflectively and in depth. In the course of the interviews reference was made to the form and delivery of feedback. In some cases students were able to produce documentation in the form of structured feedback sheets and assessment guides.

The data are presented under three headings:

(1) Students’ attitudes and responses to feedback in general

Students often said they attach importance to receiving and reading feedback and are willing to take notice, as the following excerpt illustrates.

For me it is important to work on the feedback. The best feedback is that which highlights the flaws in an essay. When it is critical it is most positive. (second year, joint-honours, humanities)

Advice is also valued where it is linked to tangible improvements such as better grades; in other words students value feed-forward.

What I find most useful is advice on specific ways I can improve. I prefer comments like ‘if you change this you can get X%’ and so on. (second year, applied sciences)

In general, students were more vocal on the shortcomings of feedback on their assessed work. Unsurprisingly, the brevity and generality of comments all too often leave students in the dark.

‘The essay doesn’t flow’ is a comment I get frequently. I still don’t know what they mean. Red pen on your work or just exclamation marks; what’s the message? If there is no more explanation students just leave it and move on. (second year, nursing degree)

The next excerpt indicates a perennial concern for students over the utility of written feedback. It is not only feedback or feed-forward they value but actual contact time with tutors to follow it up.

I like meeting with the tutor to talk about my written work. It is all very well getting feedback comments such as ‘you should have developed this more’. The question is: well how? (first year, humanities)

For others, there is a further concern associated with their interactions with teaching staff.

Students are reluctant to talk to tutors [before writing] because they feel they might look foolish, but you have to make the effort to talk to teachers if you are in doubt. (mature-age, second year, social sciences)

The following is a mature-age student’s account of why he sought the assistance of the learning and study support specialist.

I came [to the study skills centre] to get some feedback on my assignment. I got 50%. The marker had put question marks on much of the work. I had to guess what half of them meant. This is always a grey area. I put it down to me not coming from an academic background. (mature-age, second year [direct entrant], social work degree)

The clear implication is that students such as those in the last two excerpts assume that a deficiency within them is the basis of the problem. They have different inclinations: the first would like to be proactive and is aware that her interests are best served through direct contact with the academic member(s) of staff who set the assessed work, but is inhibited about doing so; the second seeks diagnostic help after the event and assumes that the learning and support specialist can provide it.

(2) Demystifying the language of feedback

It is not just feedback per se that poses difficulties for non-traditional students but the register and discourse of academic life more generally.

All these big words! You would like it in layman’s terms but I suppose that wouldn’t be academic. No one wants to admit they are not sure what things mean; no one wants to stand out. (mature-age, first year, nursing diploma)

The following is an example of the uncertainty about the meanings and register of feedback that occurred on numerous occasions with students in applied and vocational areas. It indicates a dissonance between what teachers write and intend and what students know and understand.

C) We get smarmy comments like ‘you’re not at level 5 now’. We have to work out what she wants by passing on her comments.
H) We’ve got the nursing skills but it is just the academic writing
What do other classmates think?
C) Everyone would generally like more information on what they mean. They use terms like level 5 but they don’t explain what they mean.
H) I want to know what levels 1, 2, 3 and 4 are! [both laugh] (third year, undergraduate nurses)

A nursing student points out another complication that students face with the language of feedback: teachers do not mean the same thing when they use the terms associated with assessment of, and commentary on, students’ work.

When tutors use words like ‘critical evaluation’ nobody challenges them. Teachers when they try to explain words like ‘analyse’ don’t do it in the same way. ‘Analyse’ and ‘discuss’ – they don’t really mean different things do they? Feedback is very useful but if you ask for extra tutorial time you simply get directed to the study skills centre. (mature-age, first year, diploma)

The second comment indicates why students sometimes seek learning and study support, and the messages they are given, albeit implicitly, about what this provision is there to do. The following excerpt is from an interview with traditional A-level entrants in the second year of a single honours Sociology degree.

Is the word ‘structure’ clear to you?
S) Yeah, clear
A) An essay structure is clear, yeah
Has anybody ever really explained words like this to you?
S) In the first year we had a ‘skills’ module. There was some help with essay writing and understanding titles in that [semester one, first year]. But really at A-level you get a feeling for these words; ‘discuss’ and ‘explain’ for example

On two occasions interviewees produced front sheets with tutor comments on them. A second-year student studying a joint degree could make no sense of the following comments on his essay:

…it lacks proper academic subtlety
…there is an endemic vagueness throughout
…writing skills will make or break a piece of work of this length

An international student (from China), a direct entrant to the third year of study, was left guessing when confronted with the following praise: “it does so in a systematic and logical way”. When asked to explain the phrase “Well structured and argued”, the student pondered: “is this telling me I have done well in the essay or is it suggesting to argue and structure more?” Asked why she hadn’t consulted her tutor directly, she replied: “I would like to ask my teacher but he is too busy”. In such instances students have few options: carry on regardless, try to work with peers (and risk accusations of copying and collusion), or book a session with a study skills adviser in the learning development unit.

(3) Dealing with forms and feedback instruments

The following excerpt is from a small group session with students in the third year of study who were asked to comment on a structured feedback form used in their subject area.

E) The stuff on the left hand side [the pro-forma categories] has been crossed out as if the comments [global] have been written to cover all those areas
Do you understand these statements clearly, for example ‘appropriate depth of analysis’ [feedback comment on the form]?
H) Yes
Did you always know what a comment like this meant?
H) Not in the first year. I didn’t have a clue what they were about. I didn’t know what the things on the left hand side meant until I got a mark for it. But with a mark in the high 70s what else can they say?
What if you had got a low mark, say in the 40s?
H) I’d be knocking on the door for an explanation
E) There’s no analysis here, just very general comments
H) Normally mind, we do get the essay back with comments in the margins as well.
E) That’s better. I spoke to a student in our year who got a high mark and usually does well in his work. Everything on the left-hand side of the form was crossed out. The comments on the right next to each category just read ‘as usual’! I think they [tutors] have a sense of humour too! [both laugh] (third year undergraduates, social sciences, mature-age)

The teacher crossed out the categories on the left, which atomise the feedback (categories included words such as ‘structure’, ‘argument’ and ‘use of evidence’), and wrote global comments. The assumption seems to be that in the third year of study students have attained a performance level that renders the form superfluous. The students are nonetheless critical and point out that they used to receive comments on their scripts; quality assurance measures are restricting the extent to which tutors are able to do this. The students are unequivocal about how they would have reacted had they received low marks for their work in their first or second year and had only structured feedback to rely on. The excerpt also reveals the tacit, social-practice nature of tutor-student interactions in which, in this case at least, neither teacher nor student sees the form as central in the feedback process.

The following excerpt from a traditional student indicates some generic problems that all students are likely to experience with forms.

Tutors often leave out the ticky box stuff. In our study group students don’t bother because if something is ‘partially achieved’ there is no elaboration on it. I’ve never had a tutor write a comment that relates to a ticky box. They possibly use them as a guide. They don’t make it explicit and they expect you to sort it out. Students want explanation. You can get an essay where all the ‘partly achieved’ boxes are ticked. But if the mark is good I won’t look any further. (second year undergraduate, joint honours, humanities)

Tutors do not always write comments that relate to the tick-boxes, and they can be indistinct about what they intend. Another student felt that the pro-forma categories were vague and concluded that the tutor obviously felt the same: s/he had consistently ticked on the line between two of the boxes all the way through! Students want explanation and are left frustrated by the cursory and oblique nature of what they get. A humanities student has his own ideas about how the form could be changed and improved.

What would you change about this feedback form?
I’d get rid of ‘partly achieved’. It’s the most vaguest thing I’ve ever seen. There should be a line or some space for the tutor to qualify what he has written and why he has chosen one or the other. It seems that the form is based on administration. I think students read the mark but otherwise they don’t take that much notice. (third year, joint honours, humanities)

In the following instance a detailed and unmodified form is presented to students in an applied and vocational area without any clarification.

They use a pro-forma. There are boxes with 0-10 and criteria next to each one. In the lower box it is ‘describe’, in the middle it is ‘describe and compare’ and in the higher boxes it is things like ‘analyse’ and ‘evaluate’. People don’t really get it. If you asked students what these words mean I can guarantee 90% of them wouldn’t really understand them. (mature-age, second year undergraduate, health and nursing)

The interviewee was disorientated, and she felt she was speaking for other students. It is as if the teacher sees the pro-forma as a simple checklist: a set of self-evident components in the assessment process. In a constructively aligned teaching context, where meanings are not given but created in the learning activities and learning is incremental (Biggs, 1999, 2003), it may not be surprising that a student is confused by big words such as ‘critically analyse’ or ‘evaluate’ even in the second year of degree study. What appears interesting in the above excerpt is that neither teacher nor student seems to understand the contents of the form or how to use it.

Discussion

The evidence of this research suggests that students at all levels frequently find the language of feedback comments inconsistent and vague and are confused about the meanings of assessment criteria. Students want explanation and are left frustrated by the cursory and oblique nature of the feedback they get. Even students who have performed well in their written assessments find that ‘general comments’ lack substance: they would prefer more detail for formative development but they, too, are caught in what Hounsell (1987) refers to as the ‘cycle of deprivation’. This research also suggests that the confusion extends to other areas of the discourse of higher education. Students are meant to acquire an understanding of this discourse through the documentation (module guides, etc.) they receive as they progress through their studies. They are exhorted to write in order to demonstrate their learning, meet criteria and satisfy outcomes at the appropriate level (‘you’re not at level 5 now!’), but they have difficulty connecting with this formal and remote discourse, and this continues to be the experience of some students well into the advanced stages of study. It is not their language.

The evidence also suggests that practices around structured feedback instruments have deleterious effects on the teaching and learning interface. Firstly, standardisation restricts writing on scripts and reduces teacher comments on forms to a minimum. Secondly, pro-formas which embody assessment criteria do not recognise that words such as ‘structure’ or ‘argument’, which routinely appear, are likely to be contextually specific to the discipline and even the module (Lea and Street, 1998). This is a potential problem for students on joint degrees and those on modularised courses in applied and practice-based areas (as most in this sample were). The forms assume a transmission model of teaching and sustain monologic practices (Lillis, 2006), arguably widening rather than narrowing the comprehension gap for many students, especially non-traditional ones. Furthermore, the use of forms is leading to routinisation in the provision of written feedback, compounding the negative effects of the preceding points. Finally, the data reveal that teaching staff can be inconsistent in their use of these instruments: they may ignore the protocol of the form altogether, crossing out or writing over the categories. Students are sceptical and assume forms are based on administration rather than on supporting them with their learning and academic literacy development.

Students want more than comment and criticism, and more than being left to deduce or infer what is intended; they want to know ‘how’. When this is missing there are limited options: carry on regardless or ‘book a tutorial’. The latter is not always available; feedback should be linked into the tutorial system, but this is happening less and less because teachers have fewer opportunities to meet with students outside timetabled sessions. Students may instead work together to decipher the written feedback they receive (they want things in ‘layman’s terms’), but a problem with this strategy is that they are likely to fall back on an informal register in doing so. Either way, the situation arguably works against the internalisation of the language of outcomes and assessment criteria, negatively affects the presumed benefits of constructivism/alignment, and accentuates the dissonance between students and the institution and its practices.

What are the implications of these findings for the role and capabilities of learning developers? Learning developers are differently positioned and have varying remits according to particular institutional arrangements and requirements. However, an emerging question is in what ways the boundaries between academic and learning support roles are shifting and changing, and what factors do, or will, determine this. It is an area of enquiry learning developers are exploring in their own contexts and collectively, in order to engage in discussion and debate regarding the nature and demands of their role and to consider the wider implications for institutional practices, development and change. The Higher Education Academy Draft Strategic Plan for 2008-2013 states that a core strategic aim is to disseminate evidence-informed approaches to enhance the student learning experience. A question relevant to critical debate about higher education practices in general, and about the current and future role of learning development, is not just ‘what’ but ‘whose’ evidence is valued.

References

Biggs, J. (2003) Teaching for Quality Learning at University. 2nd edn. Buckingham: Open University Press.

Biggs, J. (1999) ‘What the student does: teaching for enhanced learning’. Higher Education Research and Development, 18(1), pp. 55-75.

Catt, R. and Gregory, G. (2006) ‘The point of writing: is student writing in higher education developed or merely assessed?’, pp. 16-29 in: L. Ganobcsik-Williams (ed.) Teaching Academic Writing in UK Higher Education: Theories, Practices and Models. Universities into the 21st Century series. Palgrave Macmillan.

Haggis, T. (2003) ‘Constructing images of ourselves? A critical investigation into “approaches to learning” research in higher education’. British Educational Research Journal, 29(1), pp. 89-104.

Higgins, R., Hartley, P. and Skelton, A. (2002) ‘The conscientious consumer: reconsidering the role of assessment feedback in student learning’. Studies in Higher Education, 27(1), pp. 53-64.

Hounsell, D. (2007) ‘Towards more sustainable feedback to students’, pp. 101-113 in: D. Boud and N. Falchikov (eds.) Rethinking Assessment in Higher Education: Learning for the Longer Term. London: Routledge.

Hounsell, D. (2003) ‘Student feedback, learning and development’, pp. 67-78 in: M. Slowey and D. Watson (eds.) Higher Education and the Lifecourse. Open University Press.

Hounsell, D. (1987) ‘Essay writing and the quality of feedback’, pp. 109-119 in: J. T. E. Richardson, M. W. Eysenck and D. W. Piper (eds.) Student Learning: Research in Education and Cognitive Psychology. Milton Keynes: Society for Research into Higher Education and Open University Press.

Lea, M. and Street, B. (2000) ‘Staff feedback: an academic literacies approach’, pp. 62-81 in: M. Lea and B. Stierer (eds.) Student Writing in Higher Education: New Contexts. Open University Press.

Lea, M. and Street, B. (1998) ‘Student writing in higher education: an academic literacies approach’. Studies in Higher Education, 23(2), pp. 157-172.

Lillis, T. (2006) ‘Moving towards an “academic literacies” pedagogy: dialogues of participation’, pp. 30-45 in: L. Ganobcsik-Williams (ed.) Teaching Academic Writing in UK Higher Education: Theories, Practices and Models. Universities into the 21st Century series. Palgrave Macmillan.

Morley, L. (2003) Quality and Power in Higher Education. Society for Research into Higher Education and Open University Press.

Nicol, D. and MacFarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’. Studies in Higher Education, 31(2), pp. 199-218.

Orr, S. (2005) ‘Transparent opacity: assessment in the inclusive academy’, pp. 175-187 in: C. Rust (ed.) Improving Student Learning: Diversity and Inclusivity. Oxford: Oxford Centre for Staff and Learning Development.

Orr, S. and Blythman, M. (2003) ‘An analysis of the discourse of study support at the London Institute’, pp. 175-184 in: G. Rijlaarsdam (series ed.) and L. Björk, G. Bräuer, L. Rienecker and P. Stray Jörgensen (volume eds.) Studies in Writing, Volume 12: Teaching Academic Writing in European Higher Education. Netherlands: Kluwer Academic Publishers.

Author details

Richard Bailey is a former senior lecturer in the School of Arts and Social Sciences at Northumbria University. He is currently completing his full-time doctoral study in the areas of teaching, learning and student academic literacy development in the contemporary context of higher education.