There are many courses out there that do a great job of teaching manual, dexterity and physical capabilities. From bricklaying and hairdressing to gas-fitting, there are courses focussed on manual processes. However, there are huge numbers of graduates from tertiary programmes who cannot perform the duties required by employers on day one simply because they have not learnt how to do something. Their learning may have told them ‘why’, and even ‘what’ is expected, but it has not enabled them to perfect the skills associated with the ‘how’.
It remains remarkable to me that so many course and programme specification documents, replete with (sometimes well-formed) learning outcomes, have NO psychomotor outcomes. There are few courses that could not be improved by including an assessed outcome associated with using a tool or technology.
To prove the point I asked colleagues informally before Christmas whether they could think of a course where there was NO tool or technology use in play. Without further prompting, most agreed that Excel skills, SPSS, CAD tools, even library databases all required a degree of incremental competence, but that these had not been in any way ‘taught’, let alone assessed, within their courses. One provocateur suggested that their course required only the ability to write and reflect. It took little effort to unpick this given that writing in this context requires a word-processing package, formatting, style sheets, spell-checking and in-text citations, all of which are assumed graduate skills. This colleague stood their ground, suggesting that they were not employed to teach those skills; that was someone else’s responsibility.
This may be at the root of the challenge. Thirty years ago (when many of our current educational leaders graduated) your three to seven years spent at University was valuable time spent in proximity to the sources of privileged knowledge, the esteemed Professor or the library. You had a whole life after graduation to develop the rounded skills associated with whatever your chosen lifetime employment might be. That is simply no longer the case. The ‘academy’ no longer contains the privileged knowledge. We have democratised the information sources. Even those who embark on a lifelong vocation will find the landscape around them continuously changing.
Access to the LinkedIn Learning resources, and the cornucopia of free web resources, has allowed some institutions to negate whatever obligations for manual, dexterity and physical skills development they might feel towards their students. Some courses weave these external resources into the learner’s experience; others totally abdicate responsibility and deem it part of the independent learning required of learners.
One reason for this lack of attention paid to the acquisition of psychomotor skills is that it is thought harder to assess someone’s psychomotor skill set than it is to test their knowledge, and by extension their intellectual or cognitive skills. If I can’t meaningfully assess it, I’ll just avoid teaching it. It is also a function of the ‘curse of knowledge’: faculty have acquired their psychomotor skills in a particular technology or tool over an extended period of time, and they have failed either to document that learning or indeed to reflect on it.
There are some well designed courses out there. I hope you designed or teach on one. But there is still a significant deficit in the in-course provision of support for the acquisition of psychomotor skills associated with tools and technologies in a range of disciplines. We need to design courses across ALL disciplines that are rooted in the skills that graduates require to handle the uncertain information, technology, and socio-cultural environments they face. This means designing courses first around psychomotor skills, interpersonal and affective skills, then meta-cognitive and cognitive skills. Then, and only then, should we worry about the factual knowledge element. We need programme and course designers to be designing with different priorities if we want to make learning appropriate for the contemporary learner.
Since then, the boundaries between contexts, technologies and experiences have become even more blurred. Yet rather than discarding the blended terminology, there is simply a profusion of new terms, with ‘hybrid’ and ‘hyflex’ being the current vogue. Oh, and ‘flipped’, which is presented to the ill-informed as something new and radical. The problem is that these terms are driven by us, as institutions, to define the nature of our course offering, rather than being conceptualised as the learner experiences them.
I am comfortable using the term ‘blend’, alongside ‘mix’, ‘selection’, ‘options’ and many synonyms when talking about courseware designed for a specific delivery context. The context of the learner is key. Any contemporary learner journey is going to involve a ‘smorgasbord’ of learning material, voices to be exposed to, individuals to share reflections with, and physical, social and cultural contexts in which learning is occurring. I can't imagine a context in which a learner only learns through one communication mode, be it a lecture or workshop.
Learning can, and should, be as ‘flexible’ as possible. Informed by the principles of Universal Design for Learning, learning should be malleable enough by the learner to suit their evolving needs and context. Learners should be able to discard elements of the learning journey, take shortcuts rather than revisit existing learning if they choose. Equally, they should be able to explore around the edges of the path designed for them; to go ‘off-piste’ if you like.
Good learning design and good teaching encourage the learner to re-contextualise newly gained knowledge and experience in the light of previous learning. Given that each individual’s context is unique, it is essential that learners should blend their own learning experience. Learners should be enabled to make-meaning for themselves. Good teachers know this.
In practice, the terms blended, hybrid and hyflex are really being used by institutions to define the nature of their ‘product’, rather than the nature of the learning experience. Institutions choose to package what they sell under different labels; it’s a marketing pitch, “now with added webinars” or “now with extra VLE resources available”. Some senior managers have assumed that the opportunities for off-campus communication and engagement in the internet era represent a new alternative pedagogy. In reality, the ‘alternative’ pedagogies have always been there. There have always been skilled faculty who reached beyond the lecture or seminar room and engaged learners in their own context. Designing courses that are suitable for open navigation is counter-intuitive for most institutions. The focus has been on designing a learning pathway, not pathways. It’s easier for institutions that way.
What has changed since 2011 is the range of communication technologies available for learners to choose, or not choose, to interact with content, experiences and each other. Courseware in my view can, and should, be designed with open navigation, open pathways, so a learner can choose how they want to arrive at a preconceived set of outcomes. We can provide an optimal route to success for the less adventurous, but choice empowers. Essentially, learners can differentiate their journey from others based on their context and personal needs. Hey, why don’t we use the term ‘differentiated learning’… although that sounds familiar. Wonder if anyone has used that term before? Forgive my sarcasm, but I do wonder whether we need to find new language to describe the aspirations for our courseware as it is experienced by learners.
If we acknowledge that everything is to some extent blended, then what term would encourage courses to be designed to enable learning journeys suitable for personalisation by the learner? Differentiated learning is the best I’ve got.
Let’s talk about the skills required of learning designers, or instructional designers.
Context makes all the difference. Learning design in a face-to-face University context looks very different from online instructional design in a government department or commercial enterprise.
Roles using generic job titles can differ significantly. There are learning designers who guide academics in their practice (in the way ‘educational developers’ do), and others who translate how-to notes into a short, visually rich, interactive screen-based experience (more like a UX, ‘user experience’, designer). And all points in between.
Job descriptions can be fairly meaningless.
Knowing the needs of the organisation is the best place to start. Designing a series of courses as part of a University programme that is going to amount to 3,600 hours of student learning differs greatly from taking a manual and turning it into an e-learning unit that takes an hour to work through.
The nature of the organisation also determines the degree of autonomy and responsibility the designer is likely to be given. Turning a manual into e-learning may require no content knowledge at all. Just convert what’s there and you’re good. A course as part of a formal qualification either requires the designer to have some foundation in the discipline or the ability to research, corroborate, validate and extract knowledge, and establish how best to ‘teach’ that.
The only commonality across these roles and contexts is the ability to see things through the learner’s eyes, whoever that learner is.
That means empathy is the first key skill.
In the contexts in which I have worked over the last 25 years, the ability to overcome the ‘Curse of Knowledge’, the inability to remember what it means to be a beginner in any area of learning, has been key. That means that for me, it has never been about building a team of discipline specialists. It has meant looking to build course teams that include those who possess knowledge and practical experience, and those who act as the ‘first learners’. These first learners, as designers, need to ask the simple questions, the ‘dumb’ questions, to make sure that the level at which we pitch the learning is appropriate.
This may seem obvious to you, but it’s remarkable how many designers are intimidated by specialist knowledge. Faced with a Subject Matter Expert (SME) who is ‘cursed with knowledge’ and who cannot express learning intentions at the appropriate level, a good designer has to cajole, persuade and corral the learning from the SME.
This means that the ability to listen and ask questions as though a 'first learner' is the second key skill.
Designing learning that works within a specific context, say a three hour face-to-face workshop, is unlikely to work in an online form without modification. This means designers need to combine their skills of empathy and listening, of understanding the institutional purpose and the perceptions of the learner, and adapt courseware accordingly.
In the last 18 months many organisations have been forced to learn this lesson the hard way. Faced with the challenge of sustaining learning under pandemic conditions, most have made a reasonable effort at getting it right. Those that held to their core values and listened to the needs of their students and teachers have done better than those that reached for process- and systems-driven approaches.
Just as a good classroom teacher, with practice, can adapt their delivery from workshop to seminar, or from lecture to discussion forum, when timetabling assigns them a different teaching space, so learning designers need to adapt the ‘tools’ they use to suit the learning need. Digital tools come and go, and upgrades can change the way tools behave significantly. A designer who is an expert at using Rise 360 may move into a role where that tool is not available, or they may use H5P like a pro only to find that their organisation prohibits its use on their platform. A good designer looks past the tool (or space), identifies the essence of the learning experience, and makes it engaging.
Being adaptable to the means of communication and associated toolset is the third key skill.
You will notice that there is nothing here about intellectual skills or the ability to use any particular tool. I am making the assumption that you have at least a bare minimum of digital literacy, that you have used more than one tool, and that you know what appropriate use looks like in a given context. I am also assuming that you are intellectually capable of some level of judgement and analysis.
Most importantly, I am going to assume that you are, because you have read to the end of this post, sufficiently self-reflective to consider what your skill set is, and what it should or could be. That’s a great start.
There are social conventions, unwritten rules, around feedback in a formal education setting. Most students associate feedback with the voice of authority, in the form of red marks on a written script! It is important to redefine feedback for university and professional learners.
In this short overview video (3'30") Simon outlines four 'contractual' arrangements all faculty should establish at the outset of their course or module with respect to feedback for learning.
1) ensuring that students know WHERE feedback is coming from
2) WHEN to expect feedback
3) WHAT you mean by feedback
4) WHAT to DO with the feedback when it's received.
Feedback is undoubtedly expected from the tutor or instructor, but there are numerous feedback channels available to students if only they are conscious of them. These include feedback from their peers but, most importantly, from self-assessment and from learning activities designed into the class.
Knowing where feedback is coming from as part of the learning process relieves the pressure on the tutor and in effect makes feedback a constant ‘loop’. Knowing what to look out for, and possibly having students document the feedback they receive, supports their metacognitive development.
Being clear with students as to what you regard as feedback is an effective way of ensuring that students take ownership of their own learning. My own personal definition is extremely broad, from the follow-up comments one receives on anything shared in an online environment, to the nods and vocal agreement shared in class in response to things you say. These are all feedback. Knowing that also encourages participation!
Suggesting to students what they do with feedback will depend a little on the nature of the course and the formal assessment processes. Students, naturally enough, don’t do things for the sake of it, so it has to be of discernible benefit to them. If there is some form of portfolio-based coursework assessment you could ask for an annotated ‘diary’ of feedback received through the course. If it’s a course with strong professional interpersonal outcomes (like nursing or teaching, for example) you might ask students to identify their favourite and least favourite pieces of feedback they experienced during the course, with a commentary on how it affected their subsequent actions.
What's important is to recognise that there are social conventions around feedback in a formal education setting, normally associated with red marks on a written script! It is important to redefine feedback for university and professional learners.
I have no idea what the protocol is for naming versions of things. I imagine, like me, someone has an idea of what the stages are going to look like, and when a truly fresh new version is going to happen. For me, I have a sense that version 4.0 of the SOLE Toolkit will incorporate what I am currently learning about assessment and ‘badges’, self-certification and team marking. But for now I’m not there yet and am building on what I have learnt about student digital literacies, so I will settle for Version 3.5.
This version of the SOLE Toolkit, 3.5.1, remains a completely free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. In version 3.0 I added more opportunities for the student to use the toolkit as an advance organiser, offering ways to record their engagement with their learning. It also added some ability to sequence learning so that students could better plan their learning, although I maintained this was guidance only and should allow students to determine their own pathways for learning.
Version 3.5 has two significant enhancements. Firstly, it introduces a new dimension, providing a rich visualisation of the learning spaces and tools that students are to engage with in their learning. This provides an alternative, fine-grained view of the students’ modes of engagement in their learning. It permits the designer to plan not only for a balance of learning engagement but also a balance of environments and tools. This should allow designers to identify where ‘tool-boredom’ or ‘tool-weariness’ poses a danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.
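To illustrate the kind of check this visualisation supports, here is a minimal Python sketch. The activity list, tool names and the "more than half" threshold are all invented for illustration; the Toolkit itself does this visually within the Excel workbook rather than in code.

```python
from collections import Counter

# Hypothetical list of (week, tool) pairs drawn from a course design.
activities = [
    (1, "discussion forum"), (1, "quiz"),
    (2, "discussion forum"), (2, "discussion forum"),
    (3, "discussion forum"), (3, "video"),
]

# Tally how often each tool is used across the whole design.
tool_counts = Counter(tool for _, tool in activities)
total = sum(tool_counts.values())

# Flag any tool accounting for more than half of all activities --
# a crude proxy for the 'tool-weariness' the visualisation reveals.
overused = [t for t, n in tool_counts.items() if n / total > 0.5]
print(overused)  # ['discussion forum']
```

Even this crude tally makes the point: a design can look varied week by week while still leaning heavily on a single tool overall.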
Secondly, it allows for a greater degree of estimation of staff workload, part of the original purpose of the SOLE Model and Toolkit project back in 2009. The faculty-time calculations for design and facilitation are based on the learning spaces and tools to be used. This function allows programme designers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and facilitate learning around those materials.
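To make the idea concrete, here is a minimal Python sketch of such a calculation. The tools, the per-tool rates and all the figures are invented for illustration; the Toolkit derives its estimates from the designer’s own entries in the workbook.

```python
# Hypothetical rates: hours of faculty effort per student-learning hour,
# split into up-front design effort and ongoing facilitation effort.
DESIGN_RATE = {"video": 10.0, "quiz": 4.0, "discussion forum": 0.5}
FACILITATION_RATE = {"video": 0.0, "quiz": 0.25, "discussion forum": 1.0}

def faculty_time(plan):
    """plan: list of (tool, student_hours) tuples for one module."""
    design = sum(DESIGN_RATE[tool] * hours for tool, hours in plan)
    facilitation = sum(FACILITATION_RATE[tool] * hours for tool, hours in plan)
    return design, facilitation

design, facilitation = faculty_time(
    [("video", 2), ("quiz", 1), ("discussion forum", 3)]
)
print(design, facilitation)  # 25.5 3.25
```

The asymmetry is the useful insight: video is expensive to design but cheap to run, while discussion forums are cheap to design but demand ongoing facilitation, which is exactly the trade-off a workload estimate needs to surface.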
This builds on newly designed functionality released in September 2014 in version 3 of the toolkit, namely:
Predicted Workload – the amount of time the designer anticipates students will spend on activities is charted.
Sequencing activities – the ability to suggest the order in which activities should be tackled. It remains an open approach, so the numbering system (letters, Roman numerals, multiple instances of the same item) is open. It is considered important in the SOLE Model that students should take responsibility for the learning process, and so the sequence should be suggestive or advisory only.
Completion Record – a column has been added to allow students to record whether an activity has been completed, alongside indicating the amount of time actually spent on any given activity.
Objectives Met Record – an area is included to allow students to indicate that they believe they have met the objectives for each individual topic/week.
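Taken together, these student-facing columns amount to a simple record per activity and per topic. A minimal Python sketch of that structure follows; the field names are mine, not the workbook’s, and this is illustration only since the Toolkit itself is a plain Excel workbook.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityRecord:
    description: str
    sequence: str              # advisory ordering, e.g. "1" or "2a" -- suggestive only
    predicted_hours: float     # designer's estimate of student time
    actual_hours: float = 0.0  # recorded by the student
    completed: bool = False    # the Completion Record column

@dataclass
class TopicRecord:
    title: str
    activities: list = field(default_factory=list)
    objectives_met: bool = False  # the student's own judgement, per topic/week

# A student fills in the record as they work through the week.
topic = TopicRecord("Week 1")
topic.activities.append(ActivityRecord("Read chapter 1", "1", 2.0))
topic.activities[0].completed = True
topic.activities[0].actual_hours = 1.5
```

Keeping predicted and actual hours side by side is what lets the toolkit act as an advance organiser: the student can see where their effort diverges from the designer’s expectations.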
At its core the toolkit serves to implement a model of learning based on the SOLE Model itself and it is worth reminding yourself how the model is designed to work.
Here are two short videos that detail the significant enhancements made in Version 3.5 of the Toolkit.
Visualisation of Learning spaces
Calculating Faculty-Time in Design and Facilitation
Back in the late northern hemisphere summer of 2013 I drafted a background paper on the differences between Educational Data Mining, Academic Analytics and Learning Analytics. Entitled 'Adaptive Learning and Learning Analytics: a new design paradigm', it was intended to 'get everyone on the same page', as many people at my University, from very different roles, responsibilities and perspectives, had something to say about 'analytics'. Unfortunately for me I then had nearly a year's absence through ill-health, and I came back to an equally obfuscated landscape of debate and deliberation. So I opted to finish the paper.
I don't claim to be an expert on learning analytics, but I do know something about learning design, about teaching online and about adapting learning delivery and contexts to suit different individual needs. The paper outlines some of the social implications of big data collection. It looks to find useful definitions for the various fields of enquiry concerned with collecting learner data and making something useful of it to enrich the learning process. It then suggests some of the challenges that such data collection involves (decontextualisation and privacy) and the opportunities it represents (self-directed learning and the SOLE Model). Finally it explores the impact of learning analytics on learning design and suggests why we need to re-examine the granularity of our learning designs.
"The influences on the learner that lay beyond the control of the learning provider, employer or indeed the individual themselves, are extremely diverse. Behaviours in social media may not be reflected in work contexts, and patterns of learning in one discipline or field of experience may not be effective in another. The only possible solution to the fragmentation and intricacy of our identities is to have more, and more interconnected, data and that poses a significant problem.
Privacy issues are likely to provide a natural brake on the innovation of learning analytics. Individuals may not feel that there is sufficient value to them personally in revealing significant information about themselves to data collectors outside the immediate learning experience, and that information may simply be inadequate to make effective adaptive decisions. Indeed, the value of the personal data associated with the emerging learning analytics platforms may soon see a two-tier pricing arrangement whereby a student pays a lower fee if they engage fully in the data-gathering process, providing the learning provider with social and personal data as well as their learning activity, and higher fees for those who wish to opt out of the ‘data immersion’.
However sophisticated the learning analytics platforms, algorithms and user interfaces become in the next few years, it is the fundamentals of the learning design process which will ensure that learning providers do not need to ‘re-tool’ every 12 months as technology advances and that the optimum benefit for the learner is achieved. Much of the current commercial effort, informed by ‘big data’ and ‘every-click-counts’ models of Internet application development, is largely devoid of any educational understanding. There are rich veins of academic tradition and practice in anthropology, sociology and psychology, in particular, that can usefully inform enquiries into discourse analysis, social network analysis, motivation, empathy and sentiment study, predictive modelling and visualisation, and engagement and adaptive uses of semantic content (Siemens, 2012). It is the scholarship and research-informed learning design itself, grounded in meaningful pedagogical and andragogical theories of learning, that will ensure that technology solutions deliver significant and sustainable benefits.
To consciously misparaphrase American satirist Tom Lehrer, learning analytics and adaptive learning platforms are ‘like sewers, you only get out of them what you put into them’."
Siemens, G. (2012). Learning analytics: envisioning a research discipline and a domain of practice. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 4–8). New York, NY, USA: ACM. doi:10.1145/2330601.2330605
Sharing a paper today on the visualisation of educational taxonomies. I have finally got around to putting into a paper some of the blog postings, discussions, tweets and ruminations of recent years on educational taxonomies. I am always struck, in talking to US educators (and faculty training teachers in particular), by the very direct use made of Bloom's original 1956 educational taxonomy for the cognitive domain. They seem oblivious, however, to other work that might sit (conceptually) alongside Bloom as a way to support their practice.
In New Zealand, whilst at Massey, I got into some fascinating discussions with education staff about the blurring of the affective and cognitive domains, significant in cross-cultural education, and this led me to look for effective representations of the domains. I came across an unattributed circular representation that made instant sense to me, and set about mapping other domains in the same way. In the process I found not only a tool that supported and reinforced the conceptual framework represented by Constructive Alignment, but also a visualisation that supported engagement with educational technologies and assessment tools. I hope this brief account is of use to people and am, as always, very open to feedback and comment.
I'm very grateful to those colleagues across the globe who have expressed interest in using these visual representations and hope to be able to share some applicable data with everyone in due course.
I think being able to visualise things is important. Faculty and learning designers need to be able to see Intended Learning Outcomes (ILOs) take shape, and many find existing lists uninspiring. It’s not uncommon for faculty and instructional designers to get tired and weary of ILOs; they can feel restrictive, repetitive, formulaic and sometimes obstructive. In previous posts I’ve tried to suggest that the bigger picture, the challenges of effective 21st century university level learning design, makes them not only useful but essential. If you don't agree, don’t bother reading; I’m not going to try and persuade you. If you think there’s some truth in the argument and you want to engage with ILOs to make your teaching more focussed, your students increasingly autonomous and your graduates equipped with meaningful evidence, then I hope I have something worthwhile to share and will welcome your thoughts.
My argument is that a module (a substantial unit of a full year's undergraduate study), and the programme of which it is part, should have clearly articulated outcomes in four domains:
Knowledge and understanding – or the knowledge domain
Intellectual Skills – or the cognitive domain
Professional Skills – or the affective domain
Transferable Skills – or the psychomotor domain
I’m suggesting one SHOULD expect to see a different distribution of ILOs between the outcomes in these domains depending on the focus of the module and the level of study. One might expect to see a second year anthropology module on ‘theoretical perspectives’ emphasising cognitive outcomes and a module being studied alongside it on ‘research design and techniques’ emphasising affective and psychomotor outcomes. One might reasonably expect to see more foundational ‘knowledge and understanding’ outcomes in the first year of a programme of study, and more ‘cognitive’ outcomes at the end of the programme. The lack of this 'designed articulation' in many modules undermines their value to the student and ultimately to faculty.
The basic principle is that an outcome should be assessable. Lots of great stuff can happen in your teaching and students’ learning that DOESN’T need to be assessed. It can be articulated in the syllabus, it just isn't a measured outcome. A student should be able, at the end of this course of study (module or programme), to evidence that they have attained the intended learning outcomes. This evidence has been assessed in some way and the student is then able to point to the ILOs amassed throughout their programme and say “I can demonstrate that I learnt to DO this”.
There has been a significant shift in the language we now use from the original work in the 1950s by Bloom and colleagues. The passively descriptive language of Bloom's Taxonomy has become the active language of Anderson and Krathwohl (Anderson & Krathwohl, 2001). The taxonomies have moved from Evaluation to Evaluate, from Analysis to Analyse. This is significant in that the emphasis has moved away from describing what the focus of the teaching is supposed to be, to the demonstrable outcomes of the learning.
The illustration above consists of four visual ‘wheels’ that I have used to discuss learning outcomes with faculty in the context of module and programme design at Massey University in New Zealand and at the LSE and BPP University College in the United Kingdom. These visual representations were inspired by work done elsewhere, on the cognitive domain in particular. The first documented example of this circular representation I have been able to find is attributed to Barbara Clark in 2002, but a great many people have since represented Bloom’s original, and the revised, cognitive domain in this way.
The circular representation has the higher-level terms at the centre, proto-verbs if you will, surrounded by a series of active verbs that articulate actions an individual might undertake to generate evidence of their ability to represent the proto-verb. The circular visualisation also serves to create a more fluid representation of the stages, or divisions, in the proto-verbs. Rather than a strict ‘step-by-step’ representation where one advances ‘up’ the proto-verbs, one might consider this almost like the dial on an old telephone: in every case one starts at the ‘foundational’ and dials up through the stages to the ‘highest’ level. Each level relies on the previous. It may be implicit that to analyse something, one will already have acquired a sense of its application, and that application is grounded in subject knowledge and understanding. So the circle is a useful way of visualising the interconnected nature of the process. Most importantly in my practice, it’s a great catalyst for debate.
The circular representations of the domains and associated taxonomies also serve to make learning designers aware of the language they use. Can a verb be used at different levels? Certainly. Why? Because context is everything. One might ‘identify’ different rock samples in a first year geology class as part of applying a given classification of rocks to samples, or one might identify a new species of insect as part of a postgraduate research programme. The verb on its own does not always denote level. I talk about the structure of ILOs in a subsequent post.
More recent representations have created new complex forms that include the outer circle illustrated here. I’ve found these rather useful, in part because they often prove contentious. If the inner circle represents (in my versions) the proto-verbs within our chosen taxonomies, and the next circle represents the active verbs used to describe the Intended Learning Outcomes (ILOs) AND the Teaching and Learning Activities (TLAs), the outermost circle represents the evidence and assessment forms used to demonstrate that verb. Increasingly I’ve used this to identify educational technologies and get faculty thinking more broadly about how they can assess things online as well as in more traditional settings. The outermost circle will continue to evolve as our use of educational technologies evolves. In Constructive Alignment one might reasonably expect students’ learning activity to ‘rehearse’ the skills they are ultimately to evidence in assessment (Biggs & Collis, 1982; Boud & Falchikov, 2006), and the forms to enable that are becoming increasingly varied.
One of my favourite representations of the relationship between the knowledge dimension and the cognitive domain is from Rex Heer at Iowa State University’s Center for Excellence in Learning and Teaching (http://www.celt.iastate.edu/teaching/RevisedBlooms1.html). It’s an interactive model that articulates the relationship, as Anderson and Krathwohl saw it, rather well. My own interest, as we look to effective ILOs, is to separate out the knowledge dimension as a subject or knowledge domain and have faculty articulate this clearly for students, before reconnecting to the other domains. A process I’ll talk about subsequently.
Here are my four ‘working circles’ using adaptations of taxonomies from Anderson and Krathwohl (Knowledge and Understanding, and Cognitive), Krathwohl et al (Affective) and Dave (Psychomotor). I have adapted the Knowledge Dimension of Anderson and Krathwohl to do two things: to describe the dimension in terms of active verbs rather than as a definition of the nature of the knowledge itself, and to incorporate a stage I believe is under-represented in their articulation. I have added the ability to ‘contextualise’ subject knowledge between the ability to specify it (Factual) and the ability to conceptualise (Conceptual). I have also rearticulated the original ‘Metacognitive’ as the ability to ‘Abstract’. This will doubtless need further work. My intent is not to dismiss the valuable work already in evidence around the relationship between a knowledge dimension and the cognitive domain; rather it is to enable faculty, specifically when writing learning outcomes, to identify the subject, discipline or knowledge to be enabled in more meaningful ways.
These images are provided as JPG images. If you would like me to email the original PowerPoint slides (very low-tech!) so that you can edit, amend and enhance, I am happy to do so. I only ask that you enhance my practice by sharing your results with me.
I hope these provoke thought, reflection and comment. Feel free to use them with colleagues in discussion and let me know if there are enhancements you think would make them more useful to others.
Cognitive Domain - Intellectual Skills
Affective Domain - Professional and Personal Skills
Psychomotor Domain- Practical, Technical and Transferable Skills
Knowledge Domain - Subject and Discipline Knowledge
The next post will illustrate the usefulness of these visualisations in drafting Intended Learning Outcomes with some examples.
Can one know too much about the learning we design? Why is it we appear to know so little? It's hard to share what you can't articulate. This is an attempt to make the learning expectations, aspirations and intentions we have of learners as transparent as possible. The desire to produce a usable, intuitive (or at least helpful) toolkit to implement the SOLE model of learning design has seen several small incremental updates in 2011.
Version 2.3 of the SOLE toolkit is released today 5th September and introduces a 'modes of engagement' schematic to a new 'dashboard' sheet within the toolkit workbook. The toolkit remains a standard Microsoft Excel workbook, without macros or protected cells that any user can customise and adapt.