8-Stage Learning Design Framework

Student Profiles · Contexts · Media Choices · Intended Learning Outcomes · Assessment · Learning and Teaching Activities · Feedback · Evaluation

Developing a Meaningful Assessment Strategy

31 July 2018

Citation: Atkinson S.P. (2018) Developing a Meaningful Assessment Strategy. Retrieved from https://sijen.com/research-interests/8-stage-learning-design-framework/5-assessment

Knowing what our intended learning outcomes (ILOs) are enables us to design meaningful assessment that provides opportunities for students to evidence their learning against those ILOs. It is important first to identify which outcomes across different domains of learning can be combined through assessment. This allows us to manage the assessment load, for both faculty and students, whilst ensuring all ILOs are assessed. Using taxonomy circles we can then draft marking rubrics for the appropriate level that contain all the guidance that individual assessors and students need to guide their practice.

Before we explore the different forms of assessment that are available to learners, we should begin with some fundamental questions.

Philosophical Construct

This is a (tongue-in-cheek) representation of the way we, in the West, have come to understand how assessment works. We are informed, however unwittingly, by our cultural conditioning which sees ‘perfection’ as close to unobtainable. Here’s a Judeo-Christian perspective on ‘divine assessment’:

View of Divine Assessment!

This profound cultural influence is also one of the reasons why our pass mark is (generally) 50%, lowered by convention to 40% for undergraduates, and why we have uneven grade boundaries. Historically it meant that marks were awarded on a bell curve: if everybody scored highly or poorly within a cohort, the curve was adjusted to make sure the grades remained 'appropriately' distributed.

Illustration of Classic Bell Curve Grade Distribution

In the above model, 'normal' people are likely to converge in the middle. It's the reason that, until very recently, 'norm-referenced' assessment was the norm (!). Everyone was assessed against their peers: they could never be better than a pre-determined 'best', and the majority were expected to be 'average'.
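To make the mechanics concrete, here is a minimal sketch (in Python, with marks and target values invented purely for illustration) of what 'marking to the curve' actually does: raw marks are rescaled against the cohort's own mean and spread, so the distribution, not the criteria, determines each grade.

```python
# A minimal sketch of norm-referenced 'marking to the curve'.
# The marks and target values are invented for illustration only.
from statistics import mean, stdev

raw_marks = [38, 45, 52, 55, 58, 61, 64, 70, 77]  # one cohort's raw marks

def curve(marks, target_mean=55, target_sd=10):
    """Rescale marks so the cohort lands on a chosen mean and spread."""
    m, sd = mean(marks), stdev(marks)
    return [round(target_mean + (x - m) / sd * target_sd) for x in marks]

print(curve(raw_marks))
# Each student's grade now depends on where they sit relative to their
# peers, not on whether they met any externally defined criterion.
```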

A final point on the underlying philosophies of assessment is to encourage you to reflect on the conventions that exist in your discipline and institution. Below is an illustration of the way that grade boundaries have been widely adopted across the UK higher education sector. Institutions will vary, but this is a broadly accurate profile. The first row shows the percentage marks assigned, the second row a typical Masters-level grade descriptor and the third row a typical undergraduate degree classification.

Grade Boundaries

How is it possible that the bottom 40% of the scale represents failure, the top 30% the highest possible grade, and everything else is compressed into the middle 30%? This is not to propose an alternative approach, only to encourage you and your design team to reflect on your current practice before embarking on designing fresh assessments.

Reminder of Terminology

For those who are new to designing assessment here are a few definitions to support your course team discussions.

Forms of Measurement

The first consideration is to be aware of how your students' assessment will be measured. There are three forms of measurement (grading):

Measurement Forms: norm-referenced, criterion-referenced and ipsative

I do not believe that norm-referenced grading is appropriate in higher education, and so do not intend to spend time on it. I acknowledge that students very often want to know how well they are doing with reference to others, but it is more helpful for them to understand how they measure up to the criteria. There is some impact here from the introduction of learning analytics and student dashboards, which I have published about elsewhere.

Your module or programme reporting, once marking is complete, will generate a graph which may indeed look like a bell curve, but that is no reason to 'mark to the curve'. Instead, I believe that for the majority of higher education we should be using criterion-referenced assessment. Where professional competency frameworks overlay the design, it may also be appropriate to provide ipsative assessment forms too.
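By contrast, criterion-referenced grading fixes the boundaries in advance and lets the distribution fall where it may. A minimal sketch, assuming the typical UK postgraduate boundaries discussed above (the descriptors and values are illustrative, not prescriptive):

```python
# A minimal sketch of criterion-referenced grading, using typical
# UK postgraduate boundaries (pass mark 50) as illustrative values.
BOUNDARIES = [  # (minimum mark, descriptor)
    (70, "Distinction"),
    (60, "Merit"),
    (50, "Pass"),
    (0,  "Fail"),
]

def grade(mark: int) -> str:
    """Return the grade descriptor for a mark against fixed criteria."""
    for minimum, descriptor in BOUNDARIES:
        if mark >= minimum:
            return descriptor
    return "Fail"

print(grade(68))  # -> Merit, regardless of how the rest of the cohort did
```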

Reliability and Validity

There are two key concepts to bear in mind as you design assessment: validity and reliability.

Validity is a measurement of the degree to which the assessment actually assesses the desired outcomes (ILOs) for the module. Is the assessment measuring what it aims to measure?

Reliability is a measurement of the consistency or replicability of the assessment. If I assess multiple cohorts over time will I get broadly the same results? Below are four illustrations of potential patterns of grades.

  1. The assessment produces consistent results (irrespective of the range of marks achieved) and is constructively aligned to the ILOs, so it is assessing what it is supposed to assess.
  2. The assessment is designed to assess the ILOs but produces varied results that make replication of the assessment across cohorts problematic.
  3. The assessment produces consistent and replicable results but is not assessing the ILOs (a pitfall of using Multiple Choice Questions in the wrong context).
  4. The assessment is not assessing the ILOs for the module and is also producing hugely varied results.

Reliability and Validity Diagram: interpretations of grade distributions (red dots)

It is important to try to design with these principles in mind.
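Validity is a design judgement and cannot be read off the marks, but reliability can at least be monitored. A minimal sketch (cohort marks invented for illustration) that flags when an assessment's results drift between cohorts:

```python
# A minimal sketch for monitoring reliability across cohorts.
# Cohort marks are invented for illustration; validity (alignment to
# ILOs) is a design judgement and cannot be computed from marks alone.
from statistics import mean, stdev

cohorts = {
    "2016/17": [48, 55, 58, 62, 66, 71],
    "2017/18": [50, 54, 59, 61, 67, 70],
    "2018/19": [35, 42, 66, 74, 80, 85],  # same brief, very different spread
}

baseline = cohorts["2016/17"]
for name, marks in cohorts.items():
    drift_mean = mean(marks) - mean(baseline)
    drift_sd = stdev(marks) - stdev(baseline)
    flag = "REVIEW" if abs(drift_mean) > 5 or abs(drift_sd) > 5 else "ok"
    print(f"{name}: mean drift {drift_mean:+.1f}, sd drift {drift_sd:+.1f} [{flag}]")
```

A flagged cohort does not prove the assessment is unreliable, but it is a prompt to ask which of the four patterns above you are looking at.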

Objectivity and Subjectivity

Another issue that course designers need to remember is the question of objectivity and subjectivity in assessment. This is fraught with difficulties. At first glance it appears preferable always to be objective in assessing another's skills or knowledge, but many of the skills and capabilities developed in higher education cannot be objectively measured. Explore the lists below:

Objective | Subjective
Where a single correct response is possible. | Where multiple interpretations of facts, circumstances or concepts exist.
Can be 'machine-marked' without human interaction to produce a grade. | Requires 'subjective' interpretation by the assessor to arrive at a grade, hopefully guided by meaningful criteria (see Rubrics below).
Uses 'selected response' or 'structured response' items. | Uses 'free response' or 'constructed response' items.
Includes Multiple Choice Questions, matching and alternative-choice items. | Includes short-answer and essay questions, performances, presentations and so on.
Works well to assess knowledge and comprehension: facts, and unequivocal contexts in applications of facts. | Works well to assess interpretations of factual knowledge, skills acquisition and capabilities in relation to varied contexts.
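The 'machine-marked' point is literal: an objective item can be graded with no human judgement at all. A trivial sketch (items invented for illustration):

```python
# A trivial sketch of machine-marking a 'selected response' item:
# no human judgement is involved in producing the grade.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}   # invented items
submission = {"Q1": "B", "Q2": "C", "Q3": "A"}

score = sum(submission.get(q) == a for q, a in answer_key.items())
print(f"{score}/{len(answer_key)}")  # -> 2/3, identical for every marker
```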

Types of Assessment

Now that we have reviewed some of the fundamental mechanics of assessment, we are in a better position to ask, 'why do we assess?'. Arguably we assess for some widely agreed purposes: to ensure learning has taken place, to award earned qualifications, and to 'benchmark' individuals against criteria assigned by the discipline or a profession. To end this quick overview, let us identify the various types of assessment most usually encountered in higher education.

Type of Assessment | Function | Purpose
Diagnostic (sometimes 'Needs') | Establishes a baseline in existing knowledge or skills to structure future learning approaches. | Allows learning needs and support mechanisms to be identified.
Ipsative (sometimes 'Benchmarking') | Allows a student to measure their own performance over time. |
Formative | Provides learners with timely feedback and feed-forward on their learning progress. | Impacts on current learning engagement.
Summative (sometimes 'Final') | Provides learners with an opportunity to produce evidence of achievement against the defined module outcomes. | Provides progression and certification mechanisms.
Synoptic (sometimes 'Capstone') | Provides learners with an opportunity to evidence achievement against the defined programme outcomes, integrating learning acquired across different modules. | Provides progression across levels and certification mechanisms. Holistic.

What is being assessed?

Now that we have reviewed some of the basic concepts in assessment in higher education, we can address some of the practical issues before exploring the 'optimal' design approaches.

Clearly, there are a great number of university programmes where the assessment, or the outcomes, sometimes even the content, are dictated by external bodies. Rather than abdicate our responsibility as learning designers, this is a call to understand how better to articulate the relationship between what the intended learning outcomes of a course are, how it is being assessed, and what is being experienced as learning by the students.

Let’s begin by looking at what knowledge, skills or attributes we are trying to assess. Explore this table.

  • Do you agree with its suggestions?
  • Pause and think of exceptions where you have reliably and validly assessed defined skills using alternative methods.
ILO Domain of Learning (4/8-SLDF) | What is being assessed | Suggested assessment methods
Metacognitive (Personal Epistemology) | Declarative knowledge: answers that can be stated verbally, including specific facts, principles, trends, criteria, and ways of organizing events. [Metacognition] | Portfolios; extended essays; dissertations
Cognitive (intellectual skills) | Factual and procedural knowledge: knowledge of what is and how to do things. | Exams; short answers; MCQs; essays; case studies
Affective (professional skills / values) | Ethical skills, professional behaviours and attitudes, beliefs and feelings. [Epistemology] | Case studies; observation; journals/blogs
Psychomotor (manual skills) | Technical skills (software or specialist tools). | Practical workshops; illustrative evidence
Interpersonal (communication skills) | Communication ability, verbal and written. [Cultural Values] | Presentations (podcasts, vodcasts); video evidence

In many programmes, there is now pressure to assess a range of skills and behaviours beyond subject knowledge. The challenge is to design assessments that allow students to demonstrate a range of skills (across various domains) through a single assessment. Remember: all module outcomes must be assessed and passed, and each outcome needs to be assessed only once.

Drafting an assessment framework is an iterative process. Ideally one designs the assessment at the same time as one writes the module ILOs, 'tweaking' them to give them depth and flexibility. Once one has established that the best way of assessing several ILOs might be, for example, through a case study, the case study can change for each cohort without needing to rewrite the ILOs or change the learning and teaching content and activities. Next, we're going to remind ourselves how we do that.

Reminder: Constructive Alignment

Let’s briefly revisit the concept of Constructive Alignment (see 4/8-SLDF) which is the idea that what is taught (the learning activities) is directly related to the student’s ability to evidence (assessment) their achievement of the intended learning outcomes (ILOs).

Students are taught through their educational experience that ‘final grades matter’ so it is natural that they become fixated on the final assessment. A transparent design that closely reflects the ILOs within the teaching activity and assessment will necessarily engage students in a broader and deeper understanding of their learning journey.

Given that all ILOs in a module must be passed, it is important that you are realistic about how you intend to allow a student to evidence their achievement of them; in other words, how you plan to assess them. Ideally, a course should be designed with the ILOs first, then an assessment strategy, with the learning activities and associated content coming last.

Clearly, with an externally dictated curriculum, it becomes important to be able to 'interpret' the external guidelines into the language of higher education to gain the advantages of a transparent, constructively aligned course.

Choosing Assessment

In an ideal world, we would design assessment in tandem with the ILOs for a module, with direct reference to the ILOs for a Programme and all within a carefully mapped out assessment regime.

In such a set of circumstances the guidance is self-evident:

  • Assess each ILO once, and only once (a coverage check like the sketch after this list can help)
  • Assess combinations of ILOs across a range of different domains (skill sets)
  • Choose an appropriate assessment form that reflects the skills being assessed
  • Schedule assessments throughout a module to allow you to support student progression. Remember that 'summative' is not synonymous with 'final': so-called 'summatives' do not have to come at the very end of a module.
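A design aid rather than a rule book: given a simple mapping of assessments to the ILOs they claim to cover (coded by domain), a few lines of code can confirm that every ILO is assessed exactly once. A minimal sketch, with an invented seven-ILO module loosely echoing the law example that follows:

```python
# A minimal sketch of an ILO coverage check. The ILO codes and the
# mapping are invented for illustration (C = cognitive, A = affective,
# M = metacognitive, I = interpersonal, following the domain-coding idea).
from collections import Counter

ilos = {"ILO1": "C", "ILO2": "C", "ILO3": "A", "ILO4": "A",
        "ILO5": "M", "ILO6": "I", "ILO7": "C"}

assessments = {
    "Case study 1": ["ILO1", "ILO3", "ILO6"],
    "Case study 2": ["ILO2", "ILO4", "ILO5", "ILO7"],
}

counts = Counter(ilo for covered in assessments.values() for ilo in covered)
for ilo in ilos:
    if counts[ilo] != 1:
        print(f"{ilo}: assessed {counts[ilo]} times - should be exactly once")

for name, covered in assessments.items():
    domains = sorted({ilos[i] for i in covered})
    print(f"{name} spans domains: {domains}")
```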

So how do we go about designing (or re-designing) assessment that is constructively aligned? Below is an example of a 30-credit (300 student hours) postgraduate (Level 7) law module, in which:

  • All module intended learning outcomes (there are seven) are assessed once.
  • There are two distinct assessments (balancing the load throughout the module).
  • Each assessment can be reworded without changing the essential structure (to help avoid plagiarism).
  • In the module specification, this assessment would appear as two case studies, both focussed on the ability of students to explain the law in real-life contexts.

Example of a constructively aligned assessment
The message here is that by combining outcomes from different domains it is possible to avoid over-assessing students. By 'coding' your ILOs by domain it is possible to generate some interesting and engaging assessments.

It is important that whatever assessment is designed can be managed by the assessors! Acknowledging whatever degree of subjectivity is necessary within the discipline (see above), it is then incumbent on the course design team to provide guidance as to how to interpret, grade and give feedback on assessments.

Marking and Designing Assessment Rubrics

Marking can sometimes be a tedious process. It needn’t be. If students are properly guided to generate well-structured evidence, it can be a fascinating and engaging process. In the following example, a marking rubric has been prepared for the second of the two example assessments above.

Note that the rubric used by the assessor is also shared with the students at the very beginning of their course. This is important: this transparency removes any mystery from the assessment process. It means that when you design your course you should already have designed the guidance markers will use. If you are marking to the ILOs you can provide feedback on work against each specific ILO using the description of the expected threshold content in the table.

Note also that this should be the ONLY marking guidance needed. There should be no need for different guidance for students and assessors, and certainly no need for further guidance to assessors. While the assessment briefs (the 'questions') do not have to appear in your module and programme validation documents, I suggest your marking rubrics should.

Example of a marking rubric

Examine the rubric above and consider how extensive the modifications to the rubric would need to be (if any) if the assessment brief changed while remaining aligned to these same ILOs!
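One practical consequence of marking to the ILOs is that a rubric is really just structured data: one row per ILO, one threshold descriptor per grade band. A minimal sketch (the ILOs, bands and descriptors here are invented placeholders, not the rubric pictured above):

```python
# A minimal sketch of a rubric as structured data: one row per ILO,
# one threshold descriptor per grade band. All text is placeholder.
rubric = {
    "ILO2": {
        "Distinction": "Authoritative application of the law to novel facts.",
        "Merit": "Accurate application of the law with minor omissions.",
        "Pass": "Identifies the relevant law; application is superficial.",
        "Fail": "Relevant law not identified or misapplied.",
    },
    "ILO4": {
        "Distinction": "Critically evaluates competing professional values.",
        "Merit": "Recognises and weighs competing professional values.",
        "Pass": "Acknowledges professional values without analysis.",
        "Fail": "Professional values not addressed.",
    },
}

def feedback(decisions: dict) -> str:
    """Assemble per-ILO feedback from the assessor's band decisions."""
    return "\n".join(
        f"{ilo} ({band}): {rubric[ilo][band]}"
        for ilo, band in decisions.items()
    )

print(feedback({"ILO2": "Merit", "ILO4": "Distinction"}))
```

Because the brief can change while the ILOs stay fixed, a rubric held in this form needs little or no modification between cohorts, which is exactly the point of the question above.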

Final Questions for your Course Team to reflect on…

  • How valid is your assessment? Are your ILOs meaningfully structured, and are they being assessed? Are students able to evidence attainment against the ILOs?
  • How reliable is your assessment? Are you able to modify the details of the assessment whilst retaining alignment to the ILOs and a consistent rubric?
  • How much assessment is required? How many ILOs can or should be assessed by an individual piece of assessment?
  • How many domains are reflected in your ILOs that could be combined in assessments?
  • How creative can you be in aligning your assessment and ILOs to external guidelines, standards or competency frameworks?
  • Undertake a mapping exercise of all module ILOs and assessments within a programme, exploring the weighting and timing of submissions.

In the next stage of the 8-SLDF we explore learning and teaching activities.
