
There is a wealth of literature that describes the purposes of providing feedback as part of the learning process in higher and professional education. I’m going to distil this voluminous research and scholarship into four key purposes.

Firstly, feedback for student learning is about increasing capacity for future action. Showing the student how a piece of work, an in-class contribution, or whatever form of evidence they have provided could be made better next time increases that capacity. It's human nature to think, "ok, that task is completed, I passed, let's just move on", but understanding how to do better, even in an imaginary 'next time', builds capacity.

This relates to the second purpose: developing self-awareness, or metacognition, in the student. Even if there isn't going to be another opportunity to provide evidence of learning in exactly the same way, there will be similar activities, tests, trials or exams, and what students learn from the current feedback can be transferred into these new contexts.

Which leads us on to the third purpose of feedback: developing academic skills. Poorly designed assessment might just be testing content knowledge, and it is very hard to provide meaningful feedback on such assessment. If, on the other hand, your assessment is well constructed, against distinct learning outcomes and using a meaningful marking rubric, then the feedback students receive should also develop academic abilities and skills beyond what they can merely recall.

The fourth and final purpose of providing feedback for student learning is to enhance the self-confidence and well-being of the student. Whether your feedback confirms progress and success on the part of the student or offers supportive corrective guidance to a struggling student, the purpose remains the same: to bolster a positive attitude to learning, to the subject, and to the practices associated with the discipline.

If you are struggling to meet these four core purposes in providing feedback to your students, you may want to think about reading a practical guidebook on providing feedback, enrolling on a professional development programme, or simply getting together with your colleagues to go through a course redesign or re-evaluation. You could invite a consultant to review your practices. You may find that your assessments and your in-class learning and teaching activities could be better designed to make providing meaningful feedback easier for you, and more useful for your students.

Simon Paul Atkinson
www.sijen.com
Consultancy for International Higher Education

There are social conventions, unwritten rules, around feedback in a formal education setting. Most students associate feedback with the voice of authority, in the form of red marks on a written script! It is important to redefine feedback for university and professional learners.

In this short overview video (3'30") Simon outlines four 'contractual' arrangements all faculty should establish at the outset of their course or module with respect to feedback for learning.

These are:
1) ensuring that students know WHERE feedback is coming from
2) WHEN to expect feedback
3) WHAT you mean by feedback
4) WHAT to DO with the feedback when it's received.

  1. Feedback is undoubtedly expected from the tutor or instructor, but there are numerous feedback channels available to students if only they are conscious of them. These include feedback from their peers but, most importantly, from self-assessment and from learning activities designed into the class.
  2. Knowing when feedback is coming as part of the learning process relieves the pressure on the tutor and in effect makes feedback a constant 'loop'. Knowing what to look out for, and possibly having students document the feedback they receive, supports their metacognitive development.
  3. Being clear with students as to what you regard as feedback is an effective way of ensuring that students take ownership of their own learning. My own personal definition is extremely broad: from the follow-up comments one receives on anything shared in an online environment, to the nods and vocal agreement shared in class in response to things you say. These are all feedback. Knowing that also encourages participation!
  4. What you suggest students do with feedback will depend a little on the nature of the course and the formal assessment processes. Students, naturally enough, don't do things for the sake of it, so it has to be of discernible benefit to them. If there is some form of portfolio-based coursework assessment, you could ask for an annotated 'diary' of feedback received through the course. If it's a course with strong professional interpersonal outcomes (like nursing or teaching, for example), you might ask students to identify their favourite and least favourite piece of feedback experienced during the course, with a commentary on how it affected their subsequent actions.

What's important is to recognise that there are social conventions around feedback in a formal education setting, normally associated with red marks on a written script, and to redefine feedback for university and professional learners accordingly.

Simon Paul Atkinson (PFHEA)
https://www.sijen.com
SIJEN: Consultancy for International Higher Education

In response to a question from a client, I put together this short video outlining four types of assessment used in higher education: formative, summative, ipsative and synoptic. It's produced as an interactive H5P video. Please feel free to link to this short video (under 5 minutes) as a resource if you think your students would find it of use.

Book links:

https://amzn.to/2INGIgq

Pokorny, H., & Warren, D. (Eds.). (2016). Enhancing Teaching Practice in Higher Education. SAGE Publications Ltd.

https://amzn.to/2INh4sq

Irons, A. (2007). Enhancing Learning through Formative Assessment and Feedback (New ed.). Routledge.

https://amzn.to/2IKdzD3

Hauhart, R. C. (2014). Designing and Teaching Undergraduate Capstone Courses (1st ed.). Jossey-Bass.

https://amzn.to/2sgnTaz

Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work (1st ed.). Routledge.

Some recent work with programme designers in other UK institutions suggests to me that quality assurance and enhancement measures continue to be appended to the existing policies and practices of UK HEIs, rather than prompting a revitalising redesign of the entire design and approval process.

This is a shame, because that approach has created a great deal of work for faculty in designing and administering programmes and modules, not least when it comes to assessment. Whatever you feel about intended learning outcomes (ILOs) and their constraints or structural purpose, there is nearly universal agreement that the purpose of assessment is not to assess students' 'knowledge of the content' on a module. Rather, the intention of assessment is to demonstrate higher learning skills, most commonly codified in the intended learning outcomes. I have written elsewhere about how poorly many ILOs are written, focused almost entirely on the cognitive domain (intellectual skills) to the omission of other skill domains, notably the affective (professional skills) and the psychomotor (transferable skills). Here I want to identify the need for close proximity between ILOs and assessment criteria.

It seems to me that well-designed intended learning outcomes lead to cogent assessment design. They also make it possible for a transparent marking rubric, used by both markers and students, to create a simpler process.

To illustrate this, I want to share two alternative approaches to aligning assessment with the outcomes of a specific module. To preserve the confidentiality of the module in question some elements have been omitted, but hopefully the point will still be clearly made.

Complex Attempt at Assessment Alignment

I have experienced this process in several universities.

  1. Intended Learning Outcomes are written (normally at the end of the 'design' process).
  2. ILOs are mapped to different categorisations of domains: Knowledge & Understanding, Intellectual Skills, Professional Skills and Attitudes, Transferable Skills.
  3. ILOs are mapped against assessments, sometimes even mapped to subject topics or weeks.
  4. Students get first sight of the assessment.
  5. Assessment criteria are written for students using different categories of judgement: Organisation, Implementation, Analysis, Application, Structure, Referencing, etc.
  6. Assessment marking schemes are then written for assessors, often with guidance as to what might be expected at specific threshold stages in the marking scheme.
  7. General grading criteria are then developed to map the scheme's outcomes back to the ILOs.

Streamlined Version of Aligned Assessment

I realise that this proposed structure is not suitable for all contexts, all educational levels and all disciplines. Nonetheless, where it can be applied, I would advocate it as the optimal approach.

  1. ILOs are written using a clear delineation of domains: Knowledge, Cognitive (Intellectual), Affective (Values), Psychomotor (Skills) and Interpersonal. These use appropriate verb structures tied directly to appropriate levels. This process is explained in this earlier post.
  2. A comprehensive marking rubric is then shared with both students and assessors. It identifies all of the ILOs being assessed. In principle, in UK Higher Education we should only be assessing the ILOs, NOT content. The rubric differentiates the types of response expected to achieve the various grading levels.
    • There is an option to automatically sum grades given against specific outcomes or to take a more holistic view.
    • It is possible to weight specific ILOs as being worth more marks than others.
    • This approach works for portfolio assessment but also for a model of assessment where there are perhaps two or three separate pieces of assessment assuming each piece is linked to two or three ILOs.
    • Feedback is given against each ILO on the same rubric (I use Excel workbooks).
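
To make the weighting and summing options above concrete, here is a minimal sketch in Python (not the author's actual Excel workbook; the ILO labels, weights and grades are hypothetical examples) of combining per-ILO grades into an overall module mark:

```python
# Illustrative sketch of a weighted marking rubric: each ILO receives its own
# grade (0-100), and ILOs can be weighted differently, as described above.

def overall_mark(grades, weights):
    """Return the weighted sum of per-ILO grades; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(grades[ilo] * weights[ilo] for ilo in weights)

# Hypothetical example: three ILOs, with the cognitive outcome weighted double.
weights = {"ILO1 (knowledge)": 0.25, "ILO2 (cognitive)": 0.50, "ILO3 (affective)": 0.25}
grades = {"ILO1 (knowledge)": 70, "ILO2 (cognitive)": 60, "ILO3 (affective)": 80}

print(overall_mark(grades, weights))  # 67.5
```

Equal weights recover a simple average, and a marker taking the more holistic view described above can treat the computed total as advisory rather than binding; either way, the marks attach to the ILOs, not to content recall.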

I would suggest that it makes sense to use this streamlined process even if it means rewriting your existing ILOs. I'd be happy to engage in debate with anyone about how best to use the streamlined process in their context.
