
Let’s talk about the skills required of learning designers, or instructional designers. 

Context makes all the difference. Learning design in a face-to-face university context looks very different from the work of an online instructional designer in a government department or commercial enterprise.

Roles with generic job titles can differ significantly. There are learning designers who guide academics in their practice (in the way ‘educational developers’ do), and others who translate how-to notes into short, visually rich, interactive screen-based experiences (more like a UX, or ‘user experience’, designer), and all points in between.

Job descriptions can be fairly meaningless.

Knowing the needs of the organisation is the best place to start. Designing a series of courses as part of a university programme that is going to amount to 3,600 hours of student learning differs greatly from taking a manual and turning it into an e-learning unit that takes an hour to work through.

The nature of the organisation also determines the degree of autonomy and responsibility the designer is likely to be given. Turning a manual into e-learning may require no content knowledge at all: just convert what’s there and you’re good. A course that forms part of a formal qualification requires the designer either to have some foundation in the discipline or the ability to research, corroborate, validate and extract knowledge, and to establish how best to ‘teach’ it.

The only commonality across these roles and contexts is the ability to see things through the learner’s eyes, whoever that learner is.

That means empathy is the first key skill.

In the contexts in which I have worked over the last 25 years, the ability to overcome the ‘Curse of Knowledge’, the inability to remember what it means to be a beginner in any area of learning, has been key. That means that, for me, it has never been about building a team of discipline specialists. It has meant looking to build course teams that include those who possess knowledge and practical experience, and those who act as the ‘first learners’. These first learners, as designers, need to ask the simple questions, the ‘dumb’ questions, to make sure that the level at which we pitch the learning is appropriate.

This may seem obvious to you, but it is remarkable how many designers are intimidated by specialist knowledge. Faced with a Subject Matter Expert (SME) who is 'cursed with knowledge' and who cannot express learning intentions at the appropriate level, a good designer has to cajole, persuade and corral the learning out of the SME.

This means that the ability to listen and ask questions as though a 'first learner' is the second key skill.

Designing learning that works within a specific context, say a three hour face-to-face workshop, is unlikely to work in an online form without modification. This means designers need to combine their skills of empathy and listening, of understanding the institutional purpose and the perceptions of the learner, and adapt courseware accordingly.

In the last 18 months many organisations have been forced to learn this lesson the hard way. Faced with the challenge of sustaining learning under pandemic conditions, most have made a reasonable effort at getting it right. Those that held to their core values and listened to the needs of their students and teachers have done better than those that reached for process- and systems-driven approaches.

A good classroom teacher, with practice, can adapt their delivery from workshop to seminar, from lecture to discussion fora, when timetabling assigns them a different teaching space; likewise, learning designers need to adapt the ‘tools’ they use to suit the learning need. Digital tools come and go, and upgrades can change the way tools behave significantly. A designer who is an expert at using Rise 360 may move into a role where that tool is not available, or they may use H5P like a pro only to find that their organisation prohibits its use on their platform. A good designer looks past the tool (or space), identifies the essence of the learning experience and makes it engaging.

Being adaptable to the means of communication and associated toolset is the third key skill.

You will notice that there is nothing here about intellectual skills or the ability to use any particular tool. I am assuming that you have at least a bare minimum of digital literacy, that you have used more than one tool, and that you know what appropriate use looks like in a given context. I am also assuming that you are intellectually capable of some level of judgement and analysis.

Most importantly, I am going to assume that you are, because you have read to the end of this post, sufficiently self-reflective to consider what your skill set is, and what it should or could be. That’s a great start. 

Being a reflective practitioner is the fourth key skill. Arguably, the most important one!

If you are thinking about building a career as a learning designer, of whichever guise, these are the four key foundational skills: being empathic, a listener, adaptable, and reflective.

Photo by Halacious on Unsplash

There are social conventions, unwritten rules, around feedback in a formal education setting. Most students think of feedback as coming from the voice of authority, in the form of red marks on a written script! It is important to redefine feedback for university and professional learners.

In this short overview video (3'30") Simon outlines four 'contractual' arrangements all faculty should establish at the outset of their course or module with respect to feedback for learning.

These are:
1) ensuring that students know WHERE feedback is coming from
2) WHEN to expect feedback
3) WHAT you mean by feedback
4) WHAT to DO with the feedback when it's received.

  1. Feedback is undoubtedly expected from the tutor or instructor, but there are numerous feedback channels available to students if only they are conscious of them. These include feedback from their peers but, most importantly, from self-assessment and from learning activities designed in class.
  2. Knowing where feedback is coming from as part of the learning process relieves the pressure on the tutor and in effect makes feedback a constant 'loop'. Knowing what to look out for, and possibly having students document the feedback they receive, supports their metacognitive development.
  3. Being clear with students as to what you regard as feedback is an effective way of ensuring that students take ownership of their own learning. My own definition is extremely broad, from the follow-up comments received on anything shared in an online environment, to the nods and vocal agreement given in class to things you say. These are all feedback. Knowing that also encourages participation!
  4. What you suggest students do with feedback will depend a little on the nature of the course and the formal assessment processes. Naturally enough, students don't do things for the sake of it, so it has to be of discernible benefit to them. If there is some form of portfolio-based coursework assessment you could ask for an annotated 'diary' of feedback received through the course. If it's a course with strong professional interpersonal outcomes (like nursing or teaching, for example) you might ask students to identify their favourite and least favourite pieces of feedback from the course, with a commentary on how each affected their subsequent actions.

What's important is to recognise that the social conventions around feedback in formal education are normally associated with red marks on a written script, and to redefine feedback for university and professional learners.

Simon Paul Atkinson (PFHEA)
https://www.sijen.com
SIJEN: Consultancy for International Higher Education


I have no idea what the protocol is for naming versions of things. I imagine that, like me, anyone doing it has an idea of what the stages are going to look like, and when a truly fresh new version is going to happen. I have a sense that version 4.0 of the SOLE Toolkit will incorporate what I am currently learning about assessment and 'badges', self-certification and team marking. But for now I'm not there yet and am building on what I have learnt about student digital literacies, so I will settle for version 3.5.

This version of the SOLE Toolkit, 3.5.1, remains a completely free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. In version 3.0 I added more opportunities for the student to use the toolkit as an advance organiser, offering ways to record their engagement with their learning. It also added some ability to sequence learning so that students could better plan their learning, although I maintained that this was guidance only and should still allow students to determine their own pathways for learning.

Version 3.5 has two significant enhancements. Firstly, it introduces a new dimension, providing a rich visualization of the learning spaces and tools that students are to engage with in their learning. This provides an alternative, fine-grained view of students' modes of engagement. It permits the designer to plan not only for a balance of learning engagement but also a balance of environments and tools. This should allow designers to identify where 'tool-boredom' or 'tool-weariness' poses a possible danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.

Secondly, it allows for a greater degree of estimation of staff workload, part of the original purpose of the SOLE Model and Toolkit project back in 2009. These faculty-time calculations, covering design and facilitation, are based on the learning spaces and tools to be used. The function allows programme designers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and to facilitate learning around those materials.
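To make the arithmetic concrete, here is a minimal sketch of the kind of calculation involved, assuming simple per-tool multipliers that convert planned student hours into staff hours for design and facilitation. The tool names and figures are illustrative assumptions of my own, not the values built into the workbook.

```python
# A minimal sketch of the kind of staff-time arithmetic the toolkit automates.
# The tool names and hour multipliers are illustrative assumptions, not the
# figures used in the SOLE Toolkit workbook itself.

# Assumed multipliers: hours of staff time per hour of planned student
# activity, keyed by the learning space or tool in use.
STAFF_TIME_MULTIPLIERS = {
    "discussion_forum": {"design": 0.5, "facilitation": 1.5},
    "recorded_lecture": {"design": 4.0, "facilitation": 0.25},
    "face_to_face_workshop": {"design": 1.0, "facilitation": 1.0},
}


def estimate_staff_hours(activities):
    """Sum design and facilitation hours across (tool, student_hours) pairs."""
    totals = {"design": 0.0, "facilitation": 0.0}
    for tool, student_hours in activities:
        multipliers = STAFF_TIME_MULTIPLIERS[tool]
        totals["design"] += student_hours * multipliers["design"]
        totals["facilitation"] += student_hours * multipliers["facilitation"]
    return totals


# Example: one week mixing a recorded lecture, a forum task and a workshop.
week_plan = [
    ("recorded_lecture", 1.0),
    ("discussion_forum", 2.0),
    ("face_to_face_workshop", 3.0),
]
print(estimate_staff_hours(week_plan))
# {'design': 8.0, 'facilitation': 6.25}
```

The workbook does this per topic or week against the designer's chosen spaces and tools; the sketch simply shows why the estimate changes as the mix of tools changes.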

I invite you to explore the SOLE Toolkit on the dedicated website for the project, and I would welcome any comments or feedback you might have.


As promised, this version of the SOLE Toolkit, 3.5, remains a free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. Version 3.5 has two significant enhancements.

Rich visualization of the learning spaces and tools that students are to engage with in their learning: this provides an alternative, fine-grained view of students' modes of engagement. It permits the designer to plan not only for a balance of learning engagement but also a balance of environments and tools. This should allow designers to identify where 'tool-boredom' or 'tool-weariness' poses a possible danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.

Faculty-time calculations in design and facilitation: based on the learning spaces and tools to be used, there is now a function that allows programme designers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and to facilitate learning around those materials.

This builds on the functionality released in September 2014 in version 3 of the toolkit, namely:

  • Predicted Workload – the amount of time the designer anticipates students will spend on activities is charted.
  • Sequencing activities – the ability to suggest the order in which activities should be tackled. It remains an open approach, so the numbering system (letters, Roman numerals, multiple instances of the same item) is open. It is considered important in the SOLE Model that students take responsibility for the learning process, so the sequence should be suggestive or advisory only.
  • Completion Record – a column has been added to allow students to record whether an activity has been completed, alongside the amount of time actually spent on any given activity.
  • Objectives Met Record – an area is included to allow students to indicate that they believe they have met the objectives for each individual topic/week (a minimal sketch of such a per-activity record follows below).
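Taken together, these features imply a simple per-activity record. Here is a minimal sketch of what such a record might look like as a data structure; the field names are my own illustrative choices, not the workbook's actual column labels.

```python
# Illustrative per-activity record implied by the features above.
# Field names are assumptions for the sketch, not the workbook's columns.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LearningActivity:
    description: str
    sequence_hint: str                            # suggested order only; students may re-order
    predicted_student_hours: float                # designer's anticipated workload
    actual_student_hours: Optional[float] = None  # recorded by the student
    completed: bool = False                       # student's completion record
    objectives_met: bool = False                  # student's own judgement, per topic/week


# Example: a student logs their engagement with one activity.
activity = LearningActivity(
    description="Read chapter 2 and post a summary to the discussion forum",
    sequence_hint="ii",
    predicted_student_hours=2.0,
)
activity.actual_student_hours = 2.5
activity.completed = True
activity.objectives_met = True
```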

At its core the toolkit serves to implement a model of learning, the SOLE Model itself, and it is worth reminding yourself how the model is designed to work.

Further Details:

Here are two short videos that detail the significant enhancements made in version 3.5 of the Toolkit.

Visualisation of Learning spaces

Calculating Faculty-Time in Design and Facilitation

Back in the late northern-hemisphere summer of 2013 I drafted a background paper on the differences between Educational Data Mining, Academic Analytics and Learning Analytics. Entitled 'Adaptive Learning and Learning Analytics: a new design paradigm', it was intended to 'get everyone on the same page', as many people at my University, from very different roles, responsibilities and perspectives, had something to say about 'analytics'. Unfortunately for me I then had nearly a year's absence through ill-health, and I came back to an equally obfuscated landscape of debate and deliberation. So I opted to finish the paper.

I don't claim to be an expert on learning analytics, but I do know something about learning design, about teaching online and about adapting learning delivery and contexts to suit different individual needs. The paper outlines some of the social implications of big data collection. It looks to find useful definitions for the various fields of enquiry concerned with collecting learner data and making something useful of it to enrich the learning process. It then suggests some of the challenges that such data collection involves (decontextualisation and privacy) and the opportunities it represents (self-directed learning and the SOLE Model). Finally it explores the impact of learning analytics on learning design and suggests why we need to re-examine the granularity of our learning designs.

I conclude:

"The influences on the learner that lay beyond the control of the learning provider, employer or indeed the individual themselves, are extremely diverse. Behaviours in social media may not be reflected in work contexts, and patterns of learning in one discipline or field of experience may not be effective in another. The only possible solution to the fragmentation and intricacy of our identities is to have more, and more interconnected, data and that poses a significant problem.

Privacy issues are likely to provide a natural brake on the innovation of learning analytics. Individuals may not feel that there is sufficient value to them personally to reveal significant information about themselves to data collectors outside the immediate learning experience, and that information may simply be inadequate to make effective adaptive decisions. Indeed, the value of the personal data associated with the learning analytics platforms emerging may soon see a two-tier pricing arrangement whereby a student pays a lower fee if they engage fully in the data-gathering process, providing the learning provider with social and personal data, as well as their learning activity, and higher fees for those that wish to opt out of the ‘data immersion’.

However sophisticated the learning analytics platforms, algorithms and user interfaces become in the next few years, it is the fundamentals of the learning design process which will ensure that learning providers do not need to ‘re-tool’ every 12 months as technology advances and that the optimum benefit for the learner is achieved. Much of the current commercial effort, informed by ‘big data’ and ‘every-click-counts’ models of Internet application development, is largely devoid of any educational understanding. There are rich veins of academic tradition and practice in anthropology, sociology and psychology, in particular, that can usefully inform enquiries into discourse analysis, social network analysis, motivation, empathy and sentiment study, predictive modelling and visualisation and engagement and adaptive uses of semantic content (Siemens, 2012). It is the scholarship and research-informed learning design itself, grounded in meaningful pedagogical and andragogical theories of learning, that will ensure that technology solutions deliver significant and sustainable benefits.

To consciously misparaphrase the American satirist Tom Lehrer, learning analytics and adaptive learning platforms are ‘like sewers, you only get out of them what you put into them’."

Download the paper here, at AcademiaEdu or ResearchGate

Siemens, G. (2012). Learning analytics: envisioning a research discipline and a domain of practice. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 4–8). New York, NY, USA: ACM. doi:10.1145/2330601.2330605


Sharing a paper today on the visualisation of educational taxonomies. I have finally got around to putting into a paper some of the blog postings, discussions, tweets and ruminations of recent years on educational taxonomies. I am always struck, in talking to US educators (and faculty training teachers in particular), by the very direct use made of Bloom's original 1956 educational taxonomy for the cognitive domain. They seem oblivious, however, to other work that might sit (conceptually) alongside Bloom as a way to support their practice.

http://www.academia.edu/4289077/Taxonomy_Circles_Visualizing_the_possibilities_of_intended_learning_outcomes

In New Zealand, whilst at Massey, I got into some fascinating discussions with education staff about the blurring of the affective and cognitive domains, significant in cross-cultural education, and this led me to look for effective representations of the domains. I came across an unattributed circular representation that made instant sense to me, and set about mapping other domains in the same way. In the process I found not only a tool that supported and reinforced the conceptual framework represented by Constructive Alignment, but also a visualisation that supported engagement with educational technologies and assessment tools. I hope this brief account is of use to people and am, as always, very open to feedback and comment.

I'm very grateful to those colleagues across the globe who have expressed interest in using these visual representations and hope to be able to share some applicable data with everyone in due course.


[See Updated Pages for Educational Taxonomies]

Circular representations of educational taxonomies
Four 'Domains' of educational objectives represented in a circular form

I think being able to visualise things is important. Faculty and learning designers need to be able to see Intended Learning Outcomes (ILOs) take shape, and many find existing lists uninspiring. It’s not uncommon for faculty and instructional designers to grow tired and weary of ILOs; they can feel restrictive, repetitive, formulaic and sometimes obstructive. In previous posts I’ve tried to suggest that the bigger picture, the challenges of effective 21st-century university-level learning design, makes them not only useful but essential. If you don’t agree, don’t bother reading. I’m not going to try and persuade you. If you think there’s some truth in the argument and you want to engage with ILOs to make your teaching more focussed, your students increasingly autonomous and your graduates equipped with meaningful evidence, then I hope I have something worthwhile to share and will welcome your thoughts.

My argument is that a module (a substantial unit of a full year's undergraduate study), and the programme of which it is part, should have clearly articulated outcomes in four domains:

  • Knowledge and understanding – or the knowledge domain
  • Intellectual Skills – or the cognitive domain
  • Professional Skills – or the affective domain
  • Transferable Skills – or the psychomotor domain

I’m suggesting one SHOULD expect to see a different distribution of ILOs between the outcomes in these domains depending on the focus of the module and the level of study. One might expect to see a second year anthropology module on ‘theoretical perspectives’ emphasising cognitive outcomes and a module being studied alongside it on ‘research design and techniques’ emphasising affective and psychomotor outcomes. One might reasonably expect to see more foundational ‘knowledge and understanding’ outcomes in the first year of a programme of study, and more ‘cognitive’ outcomes at the end of the programme. The lack of this 'designed articulation' in many modules undermines their value to the student and ultimately to faculty.

The basic principle is that an outcome should be assessable. Lots of great stuff can happen in your teaching and students’ learning that DOESN’T need to be assessed; it can be articulated in the syllabus, it just isn't a measured outcome. A student should be able, at the end of this course of study (module or programme), to evidence that they have attained the intended learning outcomes. This evidence has been assessed in some way, and the student is then able to point to the ILOs amassed throughout their programme and say “I can demonstrate that I learnt to DO this”.

Representing Taxonomies

There has been a significant shift in the language we now use from the original work in the 1950s by Bloom and colleagues. The passively descriptive language of Bloom's Taxonomy has become the active language of Anderson and Krathwohl (Anderson & Krathwohl, 2001). The taxonomies have moved from Evaluation to Evaluate, from Analysis to Analyse. This is significant in that the emphasis has moved away from describing what the focus of the teaching is supposed to be, to the demonstrable outcomes of the learning.

The illustration above consists of four visual ‘wheels’ that I have used to discuss learning outcomes with faculty in the context of module and programme design at Massey University in New Zealand and at the LSE and BPP University College in the United Kingdom. These visual representations were inspired by work done elsewhere, on the cognitive domain in particular. The first documented example of this circular representation I have been able to find is attributed to Barbara Clark in 2002, but a great many people have since represented Bloom’s original, and the revised, cognitive domain in this way.

The circular representation has the higher-level terms at the centre, proto-verbs if you will, surrounded by a series of active verbs that articulate actions an individual might undertake to generate evidence of their ability to demonstrate the proto-verb. The circular visualisation also serves to create a more fluid representation of the stages, or divisions, in the proto-verbs. Rather than a strict ‘step-by-step’ representation where one advances ‘up’ the proto-verbs, one might consider this almost like the dial on an old telephone: in every case one starts at the ‘foundational’ and dials up through the stages to the ‘highest’ level. Each level relies on the previous. It may be implicit that to analyse something, one will already have acquired a sense of its application, and that application is grounded in subject knowledge and understanding. So the circle is a useful way of visualising the interconnected nature of the process. Most importantly in my practice, it’s a great catalyst for debate.

The circular representations of the domains and associated taxonomies also serve to make learning designers aware of the language they use. Can a verb be used at different levels? Certainly. Why? Because context is everything. One might ‘identify’ different rock samples in a first-year geology class as part of applying a given classification of rocks to samples, or one might identify a new species of insect as part of a postgraduate research programme. The verb on its own does not always denote level. I talk about the structure of ILOs in a subsequent post.

Circular representation of Educational Taxonomies
Structure of the circular representations of Educational Taxonomies

More recent representations have created new complex forms that include the outer circle illustrated here. I’ve found these rather useful, in part because they often prove contentious. If the inner circle represents (in my versions) the proto-verbs within our chosen taxonomies, and the next circle represents the active verbs used to describe the Intended Learning Outcomes (ILOs) AND the Learning and Teaching Activities (LTAs), the outermost circle represents the evidence and assessment forms used to demonstrate that verb. Increasingly I’ve used this to identify educational technologies and to get faculty thinking more broadly about how they can assess things online as well as in more traditional settings. The outermost circle will continue to evolve as our use of educational technologies evolves. In Constructive Alignment one might reasonably expect students’ learning activity to ‘rehearse’ the skills they are ultimately to evidence in assessment (Biggs & Collis, 1982; Boud & Falchikov, 2006), and the forms to enable that are becoming increasingly varied.
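To make the three rings concrete, one slice of such a circle can be written down as a simple data structure. This is a minimal sketch: the proto-verb comes from the revised cognitive taxonomy discussed above, while the particular active verbs and evidence forms are illustrative choices of my own rather than a definitive mapping.

```python
# One illustrative slice of the three 'rings' described above.
# 'Analyse' is from the revised cognitive taxonomy; the active verbs and
# evidence forms listed are examples only, not a definitive mapping.
cognitive_slice = {
    "proto_verb": "Analyse",      # inner circle
    "active_verbs": [             # middle circle: verbs for ILOs and learning activities
        "compare", "differentiate", "organise", "deconstruct",
    ],
    "evidence_forms": [           # outer circle: evidence and assessment forms
        "annotated bibliography",
        "case-study commentary",
        "concept map built with an online tool",
        "structured debate posting",
    ],
}
```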

Re-visioning Taxonomies

One of my favourite representations of the relationship between the knowledge dimension and the cognitive domain is from Rex Heer at Iowa State University’s Center for Excellence in Learning and Teaching (http://www.celt.iastate.edu/teaching/RevisedBlooms1.html). It’s an interactive model that articulates the relationship, as Anderson and Krathwohl saw it, rather well. My own interest, as we look to effective ILOs, is to separate out the knowledge dimension as a subject or knowledge domain and have faculty articulate this clearly for students, before reconnecting it to the other domains. A process I’ll talk about subsequently.

Here are my four ‘working circles’, using adaptations of taxonomies from Anderson and Krathwohl (Knowledge and Understanding, and Cognitive), Krathwohl et al. (Affective) and Dave (Psychomotor). I have adapted the Knowledge Dimension of Anderson and Krathwohl to do two things: to describe the dimension in terms of active verbs rather than as a definition of the nature of the knowledge itself, and to incorporate a stage I believe is under-represented in their articulation. I have added the ability to ‘contextualise’ subject knowledge between the ability to specify it (Factual) and the ability to conceptualise (Conceptual). I have also rearticulated the original ‘Metacognitive’ as the ability to 'Abstract'. This will doubtless need further work. My intent is not to dismiss the valuable work already in evidence around the relationship between a knowledge dimension and the cognitive domain; rather it is to enable faculty, specifically when writing learning outcomes, to identify the subject, discipline or knowledge to be enabled in more meaningful ways.

These images are provided as JPG images. If you would like me to email the original PowerPoint slides (very low-tech!) so that you can edit, amend and enhance, I am happy to do so. I only ask that you enhance my practice by sharing your results with me.

I hope these provoke thought, reflection and comment. Feel free to use them with colleagues in discussion and let me know if there are enhancements you think would make them more useful to others.

Cognitive Domain - Intellectual Skills

Affective Domain - Professional and Personal Skills

Psychomotor Domain - Practical, Technical and Transferable Skills

Knowledge Domain - Subject and Discipline Knowledge

The next post will illustrate the usefulness of these visualisations in drafting Intended Learning Outcomes with some examples.

...................................................................................................

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Biggs, J. B., & Collis, K. F. (1982). Evaluating the Quality of Learning: Structure of the Observed Learning Outcome Taxonomy. Academic Press Inc.

Boud, D., & Falchikov, N. (2006). Aligning assessment with long‐term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413.

.....................................................................................................

Edited October 19th 2012 in response to feedback.

Can one know too much about the learning we design? Why is it we appear to know so little? It's hard to share what you can't articulate. This is an attempt to make the learning expectations, aspirations and intentions we have of learners as transparent as possible. The desire to produce a useable, intuitive (or at least helpful) toolkit to implement the SOLE model of learning design has seen several small incremental updates in 2011.

Version 2.3 of the SOLE toolkit is released today 5th September and introduces a 'modes of engagement' schematic to a new 'dashboard' sheet within the toolkit workbook. The toolkit remains a standard Microsoft Excel workbook, without macros or protected cells that any user can customise and adapt.

Download the toolkit and explore.

SOLE and DiAL-e Sites August 2011

The 27th Annual Distance Learning and Teaching Conference at Madison-Wisconsin this August offered a diverse and varied programme, attended by some 900 distance educators from all sectors, from K-12 to professional education. My contribution was a half-day DiAL-e Workshop with Kevin Burden (University of Hull), attended by some 24 people. The workshop went relatively well but also gave us an insight into a variety of cultural differences in such settings. I was able to learn from this, and the 45-minute information session on the SOLE model the following day definitely had a better 'buzz'. In addition I contributed to a new format this year, a 5+10 Videoshare session, where participants had (supposedly!) produced a five-minute video and then made themselves available to discuss it for 10 minutes.

All the sessions went well but the SOLE model and toolkit seemed to grab some serious interest and I will hope to have the opportunity to go back to the States and work with colleagues on learning design projects in the future.

SOLE Model Poster
ALDinHE Poster - SOLE Model

This year's ALDinHE conference had as its theme “Engaging Students - Engaging Learning” and comprised a series of small, diverse but very practical sessions, ranging from identifying successful work-based learning models to the effective induction of non-traditional learners. In amongst all of that I ran a small workshop on Wednesday 20th April using a single webpage on the WordPress site for the DiAL-e Project. (You are welcome to access the workshop resources if you are interested in the DiAL-e.)

I had two posters at the conference: a solo effort on the SOLE model and a joint effort with Kevin Burden from the University of Hull featuring the DiAL-e framework work we have been doing since 2006. There was an excellent response to the SOLE poster and considerable interest in its potential use as a staff development stimulus. I was particularly keen to suggest it could form a useful tool for course team development in the broader context of course design, but every conversation helps me refine my own ideas, which is, after all, why we go to these conferences!

The ALDinHE Poster (as a PDF: SOLE Poster 2011-1 FINAL)
