
[Graphic: the four themes this article explores]

Workshop aligned to UKPSF A5, K1-K3, V4

Tertiary providers are increasingly expected to deliver 'work-ready' graduates. This is a challenge when we acknowledge that many graduates will begin careers that do not exist today. Identifying the competency frameworks within our disciplines, and those of our professional colleagues, is a good place to start. We can then identify a range of graduate attributes that will underpin our programme outcomes and inform the development of real-world assessment.

It is important to question all of our assumptions about the context into which our learning design is intended to fit. Despite the fact that you may feel you know your learning context intimately, the chances are there will be some contextual evolution. How much has your discipline context evolved in recent years?

See the research pages for this stage for more details and guidance

Our learning programmes are designed to reflect our institutional specialisms and priorities, to play to our strengths. Sometimes we risk forgetting that they will be taken by real people with different dispositions, orientations and perspectives. For this reason, the first stage of my 8 Stage Learning Design Framework is concerned with profiling students. In much the same way that any product designer asks 'who is my customer going to be?', we need to do the same thing in education. I'm sharing here a process in which we describe four (five or six works too) imaginary individuals, as different from each other as we can imagine, as our potential students. I advocate a diversity matrix to build these characters, articulating different dispositional, educational, circumstantial and cultural parameters. We then run these use cases through four further perspectives, reassessing their discipline orientation, learning orientation, personal context and social context. There are overlaps and contradictions, a creative tension that reveals the strengths and relative weaknesses of any potential programme design. See the Research pages for more on Student Profiles.
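For readers who find it useful to see the diversity matrix written down as a structure, here is a minimal, hypothetical sketch in Python. The field names, perspectives and the example persona are my own illustration of the process described above, not part of the published framework.

```python
# A minimal, hypothetical sketch of the diversity matrix as a data structure.
# Field names and the example persona are illustrative only.
from dataclasses import dataclass

@dataclass
class StudentPersona:
    name: str
    dispositional: str   # e.g. "anxious returner, highly self-motivated"
    educational: str     # e.g. "vocational diploma, little essay-writing experience"
    circumstantial: str  # e.g. "full-time carer, studies evenings only"
    cultural: str        # e.g. "first-generation student, English as a second language"

# The four further perspectives each persona is run through.
PERSPECTIVES = (
    "discipline orientation",
    "learning orientation",
    "personal context",
    "social context",
)

def review(persona: StudentPersona) -> dict[str, str]:
    """Generate one reflective prompt per perspective for the design team."""
    return {p: f"How does the programme design serve {persona.name}'s {p}?"
            for p in PERSPECTIVES}

if __name__ == "__main__":
    amina = StudentPersona("Amina", "confident, risk-taking", "strong maths background",
                           "working part-time", "recent migrant")
    for prompt in review(amina).values():
        print(prompt)
```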

Watch out for workshops at Learning Design Workshops

 

This is an introduction to a new resource being shared on this website, the 8-Stage Learning Design Framework, or 8-SLDF for short. The framework provides a supportive, step-by-step process to enable faculty and course designers to develop robust and well-aligned programmes or modules. Publication of the 8-SLDF is in preparation, so only brief explanations are provided here, but resources will be shared over time with associated commentaries. These blog posts will find a permanent home on the research pages of this site too.

Graphical representation of the 8-Stage Learning Design Framework
8-SLDF (©2016)
O: Overview

I believe that the best way of ensuring that students and faculty can both engage in a meaningful, positive and fruitful learning collaboration is by designing courses well.

By well, I mean courses that are constructively aligned, relevant to the real-world experience of students, engaging and transparent. Courses must also be culturally and socially aware. Students need to know why they are being asked to perform learning tasks, and we should always have an answer. Knowing 'why' an activity matters because it is the first step in any individual's self-reflective process, their metacognition and the development of their personal epistemologies (Atkinson, 2014). We also need to know 'why' because doing anything for the sake of it is clearly wasteful of our time and energy. We as faculty are valuable players in the relationship between our students, the discipline, our institution and the wider world. Being good at what we do makes a difference. Designing courses that enable us to be better at what we do simply makes sense.

The 8-Stage Learning Design Framework has had a long gestation. Its foundations were built, through my educational development practice, on John Biggs' work on constructive alignment (2007) and the SOLO taxonomy (1982). I then incorporated Anderson and Krathwohl's reworking of Bloom's cognitive domain taxonomy (2001) alongside other domain developments, including the original Bloom project's articulation of the affective domain (1964), Dave's psychomotor domain (1967), and my own interpretations of the metacognitive and interpersonal domains.

My approach to effective materials design was inspired by the open and distance learning world (pre-digital), particularly by Derek Rowntree (1994) and Fred Lockwood (1994), by my collaborations with Kevin Burden around the DiAL-e Framework (2009), and by my own scholarship around the SOLE Model (2011). More recently I have drawn inspiration from the work of James Dalziel and Gráinne Conole (2016), and Diana Laurillard (2012), in their learning design conceptualisations, particularly as it relates to learning activities.

The result is, I believe, a comprehensive, flexible and adaptable learning design framework, not just for activities but for entire courses, modules and programmes. It is an appropriate framework regardless of the discipline, level, context or mode of learning. It is a framework for any adult, formal learning context.

See the research pages to follow this resource development

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. New York: Longman.

Atkinson, S. P. (2011). Developing faculty to integrate innovative learning in their practice with the SOLE model. In S. Ferris (Ed.), Teaching, Learning and the Net Generation: Concepts and Tools for Reaching Digital Learners. Hershey, PA: IGI Global.

Atkinson, S. P. (2014). Rethinking personal tutoring systems: the need to build on a foundation of epistemological beliefs. London: BPP University College.

Atkinson, S. P. (2015). Graduate Competencies, Employability and Educational Taxonomies: Critique of Intended Learning Outcomes. Practice and Evidence of the Scholarship of Teaching and Learning in Higher Education, 10(2), 154–177.

Biggs, J., & Collis, K. F. (1982). Evaluating the Quality of Learning: Structure of the Observed Learning Outcome Taxonomy. New York: Academic Press Inc.

Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University: What the Student does (3rd ed.). Buckingham, GB: Open University Press.

Burden, K., & Atkinson, S. P. (2009). Personalising teaching and learning with digital resources: DiAL-e Framework case studies. In J. O’Donoghue (Ed.), Technology Supported Environment for Personalised Learning: Methods and Case Studies (pp. 91–108). Hershey, PA: IGI Global.

Conole, G. (2016). Theoretical underpinnings of learning design. In J. Dalziel (Ed.), Learning design: conceptualizing a framework for teaching and learning online (pp. 42–62). New York: Routledge.

Dave, R. H. (1967). Psychomotor domain. Presented at the International Conference of Educational Testing, Berlin.

Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1964). Taxonomy of Educational Objectives, Handbook II: Affective Domain. New York: David McKay Company, Inc.

Laurillard, D. (2012). Teaching as a Design Science. New York: Routledge.

Lockwood, F. (Ed.). (1994). Materials Production in Open and Distance Learning. London: SAGE Publications Inc.

Rowntree, D. (1994). Preparing Materials for Open, Distance and Flexible Learning: An Action Guide for Teachers and Trainers. London: Routledge.

This morning's short coffee note identifies some of the functions, and their advantages, in establishing 'enhancement circles'. For faculty working in a less than supportive context, this works well as a collegial mechanism for personal, and collective, enhancement.

By reaching out to colleagues outside your immediate sphere, from different disciplines or institutions, you can establish a personal supportive network. I think between three and six colleagues is about the right number. This means you can usually find someone to exchange practice with, but you won't run out of people too quickly either. Remember that peer observations do not necessarily have to be done in person. In theory, you could be observed by someone in a different country, provided they have access to video recordings of your practice. Just make sure that you provide your observer with specific guidance: explicit requests as to what you want them to comment on. Focusing on something specific in your practice will help your observer deliver better-quality evaluative comments.

Remember that whilst it is important that members of your enhancement circle are all practising at the same level of study (in other words, all undergraduate, postgraduate or pre-university), the discipline is significantly less important in terms of identifying effective practice. I hope to hear back from some of you as to how effective this approach might be in your context.

 

I thought I would share some cross-platform videos which reflect whatever is on my mind professionally each morning. Shot in portrait for IGTV and then annotated for YouTube, they represent unscripted notes on some aspect of learning design or educational enhancement.

This one explores the value of an individual approach to personalizing reflection for academic practitioners. I urge faculty to make reflective journal notes immediately following any teaching event. This is invaluable, as is watching and listening back to recordings of your work. Combined with an SGID (small group instructional diagnosis) or in-class evaluation process, you will find that any end-of-module evaluation of your teaching effectiveness should hold no surprises.

There is a wealth of literature that describes the purposes of providing feedback as part of the learning process in higher and professional education. I’m going to distil this voluminous research and scholarship into four key purposes.

Firstly, feedback for student learning is about increasing capacity for future actions. Indicating to students how a piece of work, an in-class contribution, or whatever other form of evidence they provide, could be made better next time increases that capacity. It's human nature to think, "OK, that task is completed, I passed, let's just move on", but understanding how to do better, even in an imaginary 'next time', builds capacity.

This relates to the second purpose: developing self-awareness, or metacognition, in the student. Give students the sense that even if there isn't going to be another opportunity to provide evidence of learning in exactly the same way, there will be similar activities, tests, trials or exams, and what they learn from the current feedback can be transferred into that new context.

This leads us to the third purpose of feedback: developing academic skills. Poorly designed assessment might just be testing content knowledge, and it is very hard to provide meaningful feedback on such assessment. If, on the other hand, your assessment is well constructed, against distinct learning outcomes and using a meaningful marking rubric, then the feedback students receive should also develop academic abilities and skills beyond what they can recall.

The fourth and final purpose of providing feedback for student learning is to enhance the self-confidence and well-being of the student. Whether your feedback is confirming progress and success on the part of the student or providing supportive corrective guidance to a struggling student, the purpose remains the same: to bolster a positive attitude to learning, to the subject, and to the practices associated with the discipline.

If you are struggling to meet these four core purposes in providing feedback to your students, you may want to think about reading a practical guidebook on providing feedback, enrolling on a professional development programme, or simply getting together with your colleagues to go through a course redesign or re-evaluation. You could invite a consultant to review your practices. You may find that your assessments and your in-class learning and teaching activities could be better designed to make providing meaningful feedback easier for you, and more useful for your students.

Simon Paul Atkinson
www.sijen.com
Consultancy for International Higher Education

There are social conventions, unwritten rules, around feedback in a formal education setting. Most students associate feedback with the voice of authority, in the form of red marks on a written script! It is important to redefine feedback for university and professional learners.

In this short overview video (3'30") Simon outlines four 'contractual' arrangements all faculty should establish at the outset of their course or module with respect to feedback for learning.

These are:
1) ensuring that students know WHERE feedback is coming from
2) WHEN to expect feedback
3) WHAT you mean by feedback
4) WHAT to DO with the feedback when it's received.

  1. Feedback is undoubtedly expected from the tutor or instructor, but there are numerous feedback channels available to students, if only they are conscious of them. These include feedback from their peers but, most importantly, from self-assessment and from learning activities designed in class.
  2. Knowing when and where feedback is coming from as part of the learning process relieves the pressure on the tutor and in effect makes feedback a constant 'loop'. Knowing what to look out for, and possibly having students document the feedback they receive, supports their metacognitive development.
  3. Being clear with students as to what you regard as feedback is an effective way of ensuring that they take ownership of their own learning. My own definition is extremely broad: from the follow-up comments received on anything shared in an online environment, to the nods and vocal agreement in class in response to things you say. These are all feedback. Knowing that also encourages participation!
  4. What you suggest students do with feedback will depend a little on the nature of the course and the formal assessment processes. Students, naturally enough, don't do things for the sake of it, so it has to be of discernible benefit to them. If there is some form of portfolio-based coursework assessment, you could ask for an annotated 'diary' of feedback received through the course. If it's a course with strong professional interpersonal outcomes (like nursing or teaching, for example), you might ask students to identify the favourite and least favourite pieces of feedback they experienced during the course, with a commentary on how each affected their subsequent actions.

What's important is to recognise that there are social conventions around feedback in a formal education setting, normally associated with red marks on a written script! It is important to redefine feedback for university and professional learners.

Simon Paul Atkinson (PFHEA)
https://www.sijen.com
SIJEN: Consultancy for International Higher Education

In response to a question from a client, I put together this short video outlining four types of assessment used in higher education: formative, summative, ipsative and synoptic. It is produced as an interactive H5P video. Please feel free to link to this short video (under 5 minutes) as a resource if you think your students would find it of use.

Book links:

Pokorny, H., & Warren, D. (Eds.). (2016). Enhancing Teaching Practice in Higher Education. SAGE Publications Ltd. https://amzn.to/2INGIgq

Irons, A. (2007). Enhancing Learning through Formative Assessment and Feedback. Routledge. https://amzn.to/2INh4sq

Hauhart, R. C. (2014). Designing and Teaching Undergraduate Capstone Courses. San Francisco: Jossey-Bass. https://amzn.to/2IKdzD3

Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work. Abingdon, UK: Routledge. https://amzn.to/2sgnTaz


The majority of academic staff in the United Kingdom will have come across the UKPSF in one form or another. It has been a benchmark for my academic development practice for fourteen years. The United Kingdom Professional Standards Framework is a set of statements, arguably objectives, describing the 'complete' skill profile for an academic working in tertiary education. Divided into three areas (core knowledge, professional values and areas of activity), it contains some potential overlap, but it remains sufficiently broad to reflect the reality at the chalk-face (or PowerPoint screen). It has proved itself largely unopposed in the UK context (certainly there are few rivals) and, despite some tweaking of the original 2004 version in 2011, essentially unchanged.

The stability and endurance of the framework is a tribute to its authors, with contributions drawn from across the tertiary sector. The homogeneous nature of the inputs does give us a framework that sometimes feels like a United Nations Security Council resolution: written in diplomatic English, designed not to offend and to be 'universal', in other words euro-centric. Therein lies the difficulty.

The Aotearoa New Zealand academic community has faced a challenge in adopting and adapting the UKPSF to its unique post-colonial context. In Aotearoa, the Treaty of Waitangi is enshrined in much of public policy and practice. An acknowledgement of the values ascribed to indigenous Māori perspectives, the Treaty is a touchstone for any professional practice framework. For this reason, Ako Aotearoa (NZ's professional academic body equivalent to AdvanceHE, the inheritor of the HEA's remit) has been working towards a revised version of the UKPSF. Incorporating a range of Māori cultural and philosophical perspectives, kaupapa māori, including philosophical doctrines, indigenous knowledge, skills, attitudes and values, is an ongoing challenge. So far, I am aware of only one iteration of an NZ-revised PSF, operated by Auckland University of Technology (AUT) under the name Ako Aronui (http://cflat.aut.ac.nz/ako-aronui/). Having been denied the opportunity to modify the original UKPSF (to ensure the recognition process remained intact), the team at AUT have appended a Māori perspective to each element in the framework (Buissink et al., 2017). At face value, this could appear to be a mere translation, but it is much more than that. It could be seen as a cultural reinterpretation of each concept or notion. It falls short of a reappraisal of the fundamental indigenous approaches to learning, but it appears respectful and well-considered.

Australian colleagues have taken a somewhat different approach, drafting a 'University Teaching Criteria and Standards Framework' that directly links roles and promotion structures to values and attributes within the framework. They claimed only to have used the UKPSF as a reference source rather than as a template. In the absence of an embedded or enshrined single treaty arrangement with the heterogeneous Aboriginal peoples of Australia, there is significantly less widespread inter-cultural reverence for different perspectives on learning. (http://uniteachingcriteria.edu.au/)

The diverse, and somewhat eclectic, Canadian tertiary sector does not have a single professional framework for educators to aspire to. This is a country in which quality assurance is largely the responsibility of the provinces, with no central national oversight, so this is hardly surprising. Nonetheless, there are positive moves towards recognition of the inherent values embedded in indigenous customs and practices with regard to learning, in a document produced by Universities Canada in 2015, entitled "Principles on Indigenous education".

What the Aotearoa and Canadian examples share, and what is absent from both the Australian and UK contexts, is an explicit desire not only to be inclusive and make liberal use of words such as access and equality (shared by all), but also to advocate for the indigenization, as well as the internationalization, of the learning experience. I would argue this is a serious omission from the UKPSF, and it is absent from any derivation that does not, or is not permitted to, alter the original. There needs to be, I suggest, an acknowledgement of the unique cultural context in which any framework is drafted and explicit recognition of the philosophical and socio-cultural values that are embedded within it.

In the context of the UKPSF, this could be remedied by an additional statement in each category of elements; I'd make them top of the list, or number '0'.

Core Knowledge (K0) The cultural context in which knowledge is created and valued within their discipline.
Professional Values (P0) Recognise different epistemological frameworks and perspectives on learning and disciplinary knowledge.
Areas of Activity (A0) Embrace indigenous perspectives in all aspects of the educational practice.

That's what's missing. The challenge from an Anglo-European-American (post-enlightenment, Judeo-Christian, rationalist) perspective is to acknowledge that there is 'another' way of experiencing and learning-in and -about the world.

......................

Buissink, N., Diamond, P., Hallas, J., Swann, J., & Sciascia, A. D. (2017). Challenging a measured university from an indigenous perspective: placing ‘manaaki’ at the heart of our professional development programme. Higher Education Research & Development, 36(3), 569–582. https://doi.org/10.1080/07294360.2017.1288706

Some recent work with programme designers in other UK institutions suggests to me that quality assurance and enhancement measures continue to be appended to the policies and practices of UK HEIs, rather than prompting a revitalising redesign of the entire design and approval process.

This is a shame, because it produces a great deal of work for faculty in designing and administering programmes and modules, not least when it comes to assessment. Whatever you feel about intended learning outcomes (ILOs) and their constraints or structural purpose, there is nearly universal agreement that the purpose of assessment is not to assess students' 'knowledge of the content' on a module. Rather, the intention of assessment is to demonstrate higher learning skills, most commonly codified in the intended learning outcomes. I have written elsewhere about the paucity of effectively written ILOs, focused almost entirely on the cognitive domain (intellectual skills) to the omission of other skill domains, notably the affective (professional skills) and the psychomotor (transferable skills). Here I want to identify the need for close proximity between ILOs and assessment criteria.

It seems to me that well-designed intended learning outcomes lead to cogent assessment design. They also mean that a transparent marking rubric, used by both markers and students, creates a simpler process.

To illustrate this I wanted to share two alternative approaches to aligning assessment to the outcomes of a specific module. In order to preserve the confidentiality of the module in question some elements have been omitted but hopefully the point will still be clearly made.

Complex Attempt at Assessment Alignment

[Diagram: complex assessment alignment]

I have experienced this process in several universities.

  1. Intended Learning Outcomes are written (normally at the end of the 'design' process)
  2. ILOs are mapped to different categorizations of domains: Knowledge & Understanding, Intellectual Skills, Professional Skills and Attitudes, and Transferable Skills.
  3. ILOs are mapped against assessments, sometimes even mapped to subject topics or weeks.
  4. Students get first sight of the assessment.
  5. Assessment Criteria are written for students using different categories of judgement: Organisation, Implementation, Analysis, Application, Structure, Referencing, etc.
  6. Assessment Marking Schemes are then written for assessors. Often with guidance as to what might be expected at specific threshold stages in the marking scheme.
  7. General Grading Criteria are then developed to map the scheme's outcomes back to the ILOs.

 

Streamlined version of aligned assessment

[Diagram: streamlined marking rubric]

I realise that this proposed structure is not suitable for all contexts, all educational levels and all disciplines. Nonetheless I would advocate that this is the optimal approach.

  1. ILOs are written using a clear delineation of domains: Knowledge, Cognitive (Intellectual), Affective (Values), Psychomotor (Skills) and Interpersonal. These use appropriate verb structures tied directly to appropriate levels. This process is explained in this earlier post.
  2. A comprehensive marking rubric is then shared with both students and assessors. It identifies all of the ILOs that are being assessed. In principle, in UK higher education we should be assessing only the ILOs, NOT content. The rubric differentiates the types of response expected to achieve the various grading levels.
    • There is an option to automatically sum grades given against specific outcomes or to take a more holistic view.
    • It is possible to weight specific ILOs as being worth more marks than others (a minimal sketch of this weighted summation follows below).
    • This approach works for portfolio assessment, but also for a model of assessment where there are perhaps two or three separate pieces of assessment, assuming each piece is linked to two or three ILOs.
    • Feedback is given against each ILO on the same rubric (I use Excel workbooks).
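To make the weighting and summing option concrete, here is a minimal sketch in Python. The ILO codes, marks and weights are invented purely for illustration; in practice this arithmetic sits inside the Excel workbook shared with assessors.

```python
# A minimal, hypothetical sketch of combining weighted per-ILO rubric marks
# into a single module grade. All values below are invented for illustration.

def weighted_grade(marks: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-ILO marks (0-100) into one weighted module grade."""
    total_weight = sum(weights.values())
    return sum(marks[ilo] * weights[ilo] for ilo in marks) / total_weight

if __name__ == "__main__":
    # Marks awarded against each intended learning outcome on the rubric.
    marks = {"ILO1": 72, "ILO2": 58, "ILO3": 65}
    # ILO1 is weighted as worth twice as much as the other outcomes.
    weights = {"ILO1": 2.0, "ILO2": 1.0, "ILO3": 1.0}
    print(f"Module grade: {weighted_grade(marks, weights):.1f}")  # Module grade: 66.8
```

The same logic supports the more holistic option described above: simply record a single overall judgement alongside the per-ILO feedback rather than summing the weighted marks.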

I would suggest that it makes sense to use this streamlined process even if it means rewriting your existing ILOs. I'd be happy to engage in debate with anyone about how best to use the streamlined process in their context.
