Since October 17th 2012 [see updates], when I shared the most recent work on visualising taxonomies in a circular form and aligning these active verb patterns to particular assessment forms, I have had some great feedback - for which thank you. As a consequence, I have made a few clarifications which I hope will help those of you who want to use these visualisations in your conversations with peers or in academic educational development sessions. The biggest change has been to 'turn' the circles through 72° clockwise so that the vertical denotes a "12-noon" start. I hesitated over this because it perhaps over-stresses our obsession with mechanical process, which isn't my intention, but many said they would prefer this and so here it is. The second change has been to review, in the light of my own use and some literature sources (noted on the images themselves), some of the active verbs and evidence.
I am very grateful for the feedback and hope to receive more. In answer to the question about citing this work: there is a journal article and a book chapter in the works; in the meantime please feel free to cite the blog posts, or indeed personal correspondence at email@example.com if you would like to share how these may be working for you in practice.
Click on the images to get a decent quality print version - please email if you would like the original PowerPoint slide to amend and modify.
Cognitive Domain - Circle - Taxonomy - Version 4 - November 2012 (Intellectual Skills)
Affective Domain - Circle - Taxonomy - Version 4 - November 2012 (Professional and Personal Skills)
Psychomotor Domain - Circle - Taxonomy - Version 4 - November 2012 (Transferable Skills)
Knowledge Domain - Circle - Taxonomy - Version 2 - November 2012 (Subject/Discipline Skills)
This representation is perhaps the most 'controversial' as it represents the 'knowledge dimension' articulated by Anderson and colleagues as a separate domain. For the purposes of working with subject-centric academics within their disciplines as they write intended learning outcomes and assessment, I have found this a useful and sensible thing to do. I have separated out the notion of 'contextual knowledge' which is also not going to please everyone.
I hope these representations are of some use to you in your practice. Simon (13 Nov 2012)
I think being able to visualise things is important. Faculty and learning designers need to be able to see Intended Learning Outcomes (ILOs) take shape, and many find existing lists uninspiring. It’s not uncommon for faculty and instructional designers to get tired and weary of ILOs; they can feel restrictive, repetitive, formulaic and sometimes obstructive. In previous posts I’ve tried to suggest that the bigger picture, the challenges of effective 21st century university level learning design, makes them not only useful, but also essential. If you don't agree, don’t bother reading. I’m not going to try and persuade you. If you think there’s some truth in the argument and you want to engage with ILOs to make your teaching more focussed, your students increasingly autonomous and your graduates equipped with meaningful evidence, then I hope I have something worthwhile to share and will welcome your thoughts.
My argument is that a module (a substantial unit of a full year's undergraduate study), and the programme of which it is part, should have clearly articulated outcomes in four domains:
Knowledge and understanding – or the knowledge domain
Intellectual Skills – or the cognitive domain
Professional Skills – or the affective domain
Transferable Skills – or the psychomotor domain
I’m suggesting one SHOULD expect to see a different distribution of ILOs between the outcomes in these domains depending on the focus of the module and the level of study. One might expect to see a second year anthropology module on ‘theoretical perspectives’ emphasising cognitive outcomes and a module being studied alongside it on ‘research design and techniques’ emphasising affective and psychomotor outcomes. One might reasonably expect to see more foundational ‘knowledge and understanding’ outcomes in the first year of a programme of study, and more ‘cognitive’ outcomes at the end of the programme. The lack of this 'designed articulation' in many modules undermines their value to the student and ultimately to faculty.
The basic principle is that an outcome should be assessable. Lots of great stuff can happen in your teaching and students’ learning that DOESN’T need to be assessed. It can be articulated in the syllabus, it just isn't a measured outcome. A student should be able, at the end of this course of study (module or programme), to evidence that they have attained the intended learning outcomes. This evidence has been assessed in some way and the student is then able to point to the ILOs amassed throughout their programme and say “I can demonstrate that I learnt to DO this”.
There has been a significant shift in the language we now use from the original work in the 1950s by Bloom and colleagues. The passively descriptive language of Bloom's Taxonomy has become the active language of Anderson and Krathwohl (Anderson & Krathwohl, 2001). The taxonomies have moved from Evaluation to Evaluate, from Analysis to Analyse. This is significant in that the emphasis has moved away from describing what the focus of the teaching is supposed to be, to the demonstrable outcomes of the learning.
The illustration above consists of four visual ‘wheels’ that I have used to discuss learning outcomes with faculty in the context of module and programme design at Massey University in New Zealand and at the LSE and BPP University College in the United Kingdom. These visual representations were inspired by work done elsewhere, on the cognitive domain in particular. The first documented example of this circular representation I have been able to find is attributed to Barbara Clark in 2002, but a great many people have since represented Bloom’s original, and the revised, cognitive domain in this way.
The circular representation has the higher level terms at the centre, proto-verbs if you will, surrounded by a series of active verbs that articulate actions an individual might undertake to generate evidence of their ability to represent the proto-verb. The circular visualisation also serves to create a more fluid representation of the stages, or divisions, in the proto-verbs. Rather than a strict ‘step-by-step’ representation where one advances ‘up’ the proto-verbs, one might consider this almost like the dial on an old telephone: in every case one starts at the ‘foundational’ and dials-up through the stages to the ‘highest’ level. Each level relies on the previous. It may be implicit that to analyse something, one will already have acquired a sense of its application, and that application is grounded on subject knowledge and understanding. So the circle is a useful way of visualising the interconnected nature of the process. Most importantly in my practice, it’s a great catalyst for debate.
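The 'dial' idea above can be sketched in code. This is a toy illustration only: the proto-verbs follow the revised cognitive domain, but the active verbs listed are a small indicative sample, not the full sets shown on the wheels.

```python
# Illustrative sketch of the 'dialling up' idea: ordered proto-verbs for the
# revised cognitive domain, each mapped to a few sample active verbs.
COGNITIVE_LEVELS = [
    ("Remember",   ["define", "list", "recall"]),
    ("Understand", ["classify", "explain", "summarise"]),
    ("Apply",      ["demonstrate", "illustrate", "solve"]),
    ("Analyse",    ["compare", "differentiate", "examine"]),
    ("Evaluate",   ["appraise", "critique", "justify"]),
    ("Create",     ["compose", "design", "formulate"]),
]

def levels_relied_on(proto_verb):
    """Each level relies on those before it: reaching a proto-verb
    implies passing through every earlier stage on the dial."""
    names = [name for name, _ in COGNITIVE_LEVELS]
    return names[:names.index(proto_verb)]

# To analyse, one has already remembered, understood and applied.
print(levels_relied_on("Analyse"))  # → ['Remember', 'Understand', 'Apply']
```

The ordered list, rather than a set, is the point: it encodes the claim that each stage is grounded on the previous ones.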
The circular representations of the domains and associated taxonomies also serve to make learning designers aware of the language they use. Can a verb be used at different levels? Certainly. Why? Because context is everything. One might ‘identify’ different rock samples in a first year geology class as part of applying a given classification of rocks to samples, or one might identify a new species of insect as part of a postgraduate research programme. The verb on its own does not always denote level. I talk about the structure of ILOs in a subsequent post.
More recent representations have created new complex forms that include the outer circle illustrated here. I’ve found these rather useful, in part because they often prove contentious. If the inner circle represents (in my versions) the proto-verbs within our chosen taxonomies, and the next circle represents the active verbs used to describe the Intended Learning Outcomes (ILOs) AND the Teaching and Learning Activities (TLAs), the outermost circle represents the evidence and assessment forms used to demonstrate that verb. Increasingly I’ve used this to identify educational technologies and get faculty thinking more broadly about how they can assess things online as well as in more traditional settings. The outermost circle will continue to evolve as our use of educational technologies evolves. In Constructive Alignment one might reasonably expect students’ learning activity to ‘rehearse’ the skills they are ultimately to evidence in assessment (Biggs & Collis, 1982; Boud & Falchikov, 2006) and the forms to enable that are becoming increasingly varied.
One of my favourite representations of the relationship between the knowledge dimension and the cognitive domain is from Rex Heer at Iowa State University’s Center for Excellence in Learning and Teaching (http://www.celt.iastate.edu/teaching/RevisedBlooms1.html accessed ). It’s an interactive model that articulates the relationship, as Anderson and Krathwohl saw it, rather well. My own interest, as we look to effective ILOs, is to separate out the knowledge dimension as a subject or knowledge domain and have faculty articulate this clearly for students, before reconnecting to the other domains. A process I’ll talk about subsequently.
Here are my four ‘working circles’ using adaptations of taxonomies from Anderson and Krathwohl (Knowledge and Understanding, and Cognitive), Krathwohl et al (Affective) and Dave (Psychomotor). I have adapted the Knowledge Dimension of Anderson and Krathwohl to do two things: to describe the dimension in terms of active verbs rather than as a definition of the nature of the knowledge itself, and to incorporate a stage I believe is under-represented in their articulation. I have added the ability to ‘contextualise’ subject knowledge between the ability to specify it (Factual) and the ability to conceptualise (Conceptual). I have also rearticulated the original ‘Metacognitive’ as the ability to 'Abstract'. This will doubtless need further work. My intent is not to dismiss the valuable work already in evidence around the relationship between a knowledge dimension and the cognitive domain; rather it is to enable faculty, specifically when writing learning outcomes, to identify the subject, discipline or knowledge to be enabled in more meaningful ways.
These images are provided as JPG images. If you would like me to email the original PowerPoint slides (very low-tech!) so that you can edit, amend and enhance, I am happy to do so. I only ask that you enhance my practice by sharing your results with me.
I hope these provoke thought, reflection and comment. Feel free to use them with colleagues in discussion and let me know if there are enhancements you think would make them more useful to others.
Cognitive Domain - Intellectual Skills
Affective Domain - Professional and Personal Skills
Psychomotor Domain - Practical, Technical and Transferable Skills
Knowledge Domain - Subject and Discipline Knowledge
The next post will illustrate the usefulness of these visualisations in drafting Intended Learning Outcomes with some examples.
In my last posting I suggested that a module specification could usefully have four sections, clearly articulated, for Intended Learning Outcomes, so that a student could identify from their assessment evidence that they had met specific ILOs in a range of domains. In doing so they not only have a useful platform to identify future learning needs, but also the potential to negotiate the accreditation of prior accredited learning in a much more fine-grained and meaningful way, something I fully expect to become a significant feature of international higher education accords in the next few years as institutions face up to the challenge of accredited OER schemes and credit-bearing MOOCs. I believe the design of intended learning outcomes for modules and programmes will become a strategic priority.
Not everyone agrees ILOs are effective and a useful critique from Hussey and Smith is well worth reading (Hussey & Smith, 2002).
How many Intended Learning Outcomes (ILOs) one designs into a module or a programme level specification has to depend on the scope of the module or programme itself. I’m sure colleagues can adapt what I’m saying here to their own quality assurance and institutional contexts.
For the purpose of this reflection let me take a single module, worth 15 credits. In the UK context this would frequently represent one-eighth of a stage of undergraduate degree study, there being three stages each representing 120 credits. Again, in the UK context there is a strong notion of progression in higher order thinking skills between the first stage of undergraduate study (level 4) and the final stage (level 6). This progression is articulated in generic guidance that captures much of this ILO debate, and in subject-specific guidance drawing on the discipline communities to create ‘benchmarks’ for what would be expected in any named award (www.qaa.ac.uk). Level 5 would represent the second stage of undergraduate study in the UK context: the equivalent of an exit point for a Higher National Diploma or a Foundation Degree, the European Qualifications Framework Level 5 and, within the EHEA (Bologna), sometimes referred to as a ‘Short Cycle’ award.
My example then is for a 15-credit module at level 5. The UK quality assurance agency does not specify periods of study for credit, but sector norms talk in terms of notional study hours and it is perhaps helpful therefore to think of 15 credits as 150 notional study hours, 30 credits as 300 notional study hours and so on.
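The sector norm described above amounts to a simple conversion, which can be made explicit:

```python
# Sector norm, as noted above: one UK credit is commonly taken to
# represent roughly ten notional study hours.
NOTIONAL_HOURS_PER_CREDIT = 10

def notional_hours(credits):
    """Convert UK credits to notional study hours under the sector norm."""
    return credits * NOTIONAL_HOURS_PER_CREDIT

print(notional_hours(15))  # → 150
print(notional_hours(30))  # → 300
```

The QAA does not mandate this figure, so treat the constant as the customary rule of thumb rather than a regulation.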
Before proposing a model for ‘how many’, I will briefly remind myself what these four sections, or domains, of Intended Learning Outcomes represent. They are:
Knowledge and understanding – subject domain
Intellectual Skills – or the cognitive domain
Professional Skills – or the affective domain
Transferable Skills – or the psychomotor domain
Knowledge and understanding – subject domain
The subject domain is often conflated with the cognitive domain, which is understandable as it sits within Bloom’s ubiquitous taxonomy, but this does tend to confuse faculty as to the distinction between knowing and understanding a body of factual knowledge and being able to do something with that factual knowledge. The subject domain can, and in my opinion should, be limited to defining the subject area for illustrative purposes for the student. Since the principle is that all Intended Learning Outcomes should be assessed, and it is actually rather difficult to assess whether someone ‘understands’ something without having them ‘operationalise’ the knowledge, I tend not to get too hung up on the active verbs used in this domain, contenting myself that it serves to contextualise what follows; but maybe I should, and a later post will unpack Anderson and Krathwohl's Knowledge Dimension in more detail.
Intellectual Skills / Cognitive domain
This domain refers to ‘knowledge structures’ building from the base of the Subject domain, the “knowing the facts”, towards high order thinking skills in which these facts become operationalized and transferable. This domain is familiar to most faculty and synonymous with the work of Bloom from the 1950s (Bloom, 1984) and the useful revisions made in 2001 (Anderson & Krathwohl, 2001).
Professional Skills / Affective domain
The affective domain is concerned with an individual’s values, and includes their abilities with respect to self-perception through to abstract empathetic reasoning. In an extension to the early work by Bloom, progressive stages take the learner from foundational ‘receiving’ through to the ‘internalization’ of personal value systems (Krathwohl, Bloom, & Masia, 1999). In the context of Higher Education programmes, particularly in an era when the employability of graduates is stressed, an awareness of these professional values would do well to be built into the relevant modules.
Transferable Skills / Psychomotor domain
The psychomotor domain is less well researched and documented, and this has meant a less than adequate recognition and incorporation into learning designs. Frequently tactile or technical skills become seen as ‘general skills’ or ‘transferable skills’ and there is little sense of progression. This domain refers to progressively complex manual or physical skills, and so could identify the developing skills of a biologist in using microscopes, or an economist using a statistics software package (Dave, 1967). I find this domain unfortunately neglected, as I believe it would enhance course designs if note were taken of the practical technical skills required within disciplines and their articulation in Intended Learning Outcomes.
The Balance of Numbers
The actual balance between these domains in terms of how many Intended Learning Outcomes one might assign to them in the context of a 15 credit module will depend on the context of the module, its mode and its programme context. One might reasonably expect to see some differences in the balance of ILOs in modules in different contexts, illustrated below.
[Table: an illustrative balance of Intended Learning Outcomes across three module types - a Level 5 university class-taught module, a work-based Level 5 management module and a practice-based Level 5 lab-taught module - with rows for Intellectual Skills (cognitive), Professional Skills (affective) and Transferable Skills (psychomotor).]
And for those who appreciate a visual representation:
In this example each module has ten Intended Learning Outcomes but the emphasis within the module will change. Whilst it may be appropriate to stress intellectual skills (analysis, synthesis, evaluation) in a classroom-based political science course, for example, one might expect to see transferable skills (often described as practical, tactile or technical skills) stressed in a technical lab-based course: skills such as manipulation, articulation and naturalisation of technical proficiency.
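The ten-ILO balance can be made concrete in a small sketch. The figures below are entirely hypothetical, chosen only to show the shifting emphasis between module types; they are not taken from any validated specification, and subject-domain ILOs are omitted as in the table.

```python
# Hypothetical distributions of ten ILOs across three skills domains for
# three illustrative module types. The numbers are for illustration only.
ILO_BALANCE = {
    "Level 5 class-taught module":       {"cognitive": 5, "affective": 3, "psychomotor": 2},
    "Work-based Level 5 management":     {"cognitive": 3, "affective": 5, "psychomotor": 2},
    "Practice-based Level 5 lab-taught": {"cognitive": 3, "affective": 2, "psychomotor": 5},
}

for module, balance in ILO_BALANCE.items():
    # Each module carries ten ILOs in total; only the emphasis changes.
    assert sum(balance.values()) == 10, module
    dominant = max(balance, key=balance.get)
    print(f"{module}: emphasis on {dominant}")
```

A check like the `sum` assertion is a useful discipline at validation time: the total stays constant while the emphasis shifts with the module's context.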
All too often Higher Education stresses the cognitive, over-reliant perhaps on Bloom’s taxonomy and related work, and neglects the affective and psychomotor domains. This has several consequences: it relegates anything that is not seen as ‘intellectual’ to a lower order of skills, despite the fact that employers and students recognise and demand the need for broader skills (Mason, Williams, & Cranmer, 2006). In doing so it forces programme leaders into ‘bolt-on’ skills modules that demand additional institutional and student resource and frequently ill-serve the purpose. No learning design is truly student-centred if it neglects other domains of experience (Atkinson, 2011).
The model advocated here separates the knowledge domain and the intellectual skills, focussing the module designer on the ‘skills’ that will be acquired independent of the subject knowledge acquired. This, along with a focus on the affective and psychomotor skills, provides a framework for a module that is balanced in terms of what the student does and the context in which they do it, and, correctly assessed, ensures all these intended learning outcomes can be justifiably claimed in the student’s transcript.
Indeed it is not difficult to imagine a student coming to the end of the first stage of their degree, recognising that they have excelled in the psychomotor skills but struggled in the cognitive, and making module choices for future stages either to redress that balance or to acknowledge their strengths and adjust choices to reflect a future career path.
So how do you write learning outcomes across these four domains? That’s the subject of the next posting.
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing : a revision of Bloom’s taxonomy of educational objectives. New York: Longman.
Atkinson, S. (2011). Embodied and Embedded Theory in Practice: The Student-Owned Learning-Engagement (SOLE) Model. The International Review of Research in Open and Distance Learning, 12(2), 1–18.
Bloom, B. S. (1984). Taxonomy of Educational Objectives Book 1: Cognitive Domain (2nd ed.). Addison Wesley Publishing Company.
Dave, R. (1967). Psychomotor domain. Presented at the International Conference of Educational Testing, Berlin.
Hussey, T., & Smith, P. (2002). The Trouble with Learning Outcomes. Active Learning in Higher Education, 3(3), 220–233.
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1999). Taxonomy of Educational Objectives Book 2/Affective Domain (2nd ed.). Longman Pub Group.
Mason, G., Williams, G., & Cranmer, S. (2006). Employability Skills Initiatives in Higher Education: What Effects Do They Have On Graduate Labour Market Outcomes? National Institute of Economic and Social Research. Retrieved October 14th 2012 from http://ideas.repec.org/p/nsr/niesrd/280.html
MOOCs, self-generated OER-based curricula, kite-marking schemes, and elaborate credit transfer schemes are a reality in an increasingly complex higher education sector. Students, often pursuing studies from within the world of work where the demands of employment preclude commitment to a single campus-based programme over four years, require well-defined, constructively aligned module designs. Clever module design means clever programme design, clever portfolios and successful institutions. Learning design is no longer just an issue for the Quality Office; the Strategy people are beginning to care too.
The vast majority of UK Universities are now able to produce detailed module and programme specifications for their teaching programmes. Specification templates usually detail the aims and objectives, resources, indicative scheme of work, staffing and mode of delivery. They also routinely use a template to generate the Intended Learning Outcomes (ILOs) for the module or programme. Frequently divided into three or four sections covering knowledge and understanding, intellectual skills (cognitive domain), professional and practical skills (affective domain) and general transferable skills (psychomotor domain), these templates are completed with varying degrees of comprehension, as module validation panels will attest.
The logic is that to achieve a well-structured and constructively aligned curriculum, the module team should determine what the ILOs for the module are to be (Biggs & Tang, 2007). What will the learner be able to do at the end of the module? Having determined the ILOs, the team would then determine how they would enable the student to demonstrate achievement of the outcomes and draft an appropriate assessment strategy. Then, and only then, would the module design team look at what the student needed to be able to demonstrate and work out what was needed as input. Outcomes first, assessment second, teaching inputs third.
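The outcomes-first ordering can be enforced in a simple sketch. The class and method names here are hypothetical, invented purely to illustrate the dependency: assessments must point back at declared outcomes, so they cannot be written first.

```python
# A minimal sketch of constructively aligned module design, built in the
# order described above: outcomes first, then assessment, then activities.
class ModuleDesign:
    def __init__(self, title):
        self.title = title
        self.outcomes = []      # step 1: Intended Learning Outcomes
        self.assessments = []   # step 2: how each outcome is evidenced
        self.activities = []    # step 3: teaching inputs that rehearse them

    def add_outcome(self, ilo):
        self.outcomes.append(ilo)

    def add_assessment(self, task, evidences):
        # An assessment may only reference outcomes that already exist,
        # which is what makes the design 'outcomes first'.
        assert all(ilo in self.outcomes for ilo in evidences), \
            "assessment references an undeclared outcome"
        self.assessments.append((task, evidences))

design = ModuleDesign("Research Design and Techniques")
design.add_outcome("critique a published research design")
design.add_assessment("peer-reviewed critique essay",
                      ["critique a published research design"])
print(len(design.assessments))  # → 1
```

Trying to add the assessment before the outcome would fail the assertion, mirroring the point that teaching inputs and assessment strategy follow from, rather than precede, the outcomes.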
It’s not an easy thing to do. As teachers we’re passionate about our subjects, anxious to impart what we know is important, what ‘did it for us’, and at some point in this process many faculty will ‘go native’, reach for the seminal text (or the nearest thing to it, their own book), and start thinking about what the students need to know. This can of course produce fantastic learning experiences and there are a great many exciting modules drafted on the backs of envelopes without specification templates. They don’t make for effective records of achievement however.
Accreditation of prior accredited learning has always been a challenge. An effective template for module and programme design makes a significant difference. Students should be able to identify from their transcript exactly what it is they can evidence as intended learning outcomes. I would argue further that phases in learning and teaching activity should also have notable objectives that map directly to the ILOs (See the SOLE model described in Atkinson, 2011).
So how many intended learning outcomes, how many affective, how many cognitive, how many is too many? My next post will be my reasoning on that issue.
Atkinson, S. (2011). Developing faculty to integrate innovative learning in their practice with the SOLE model. In S. Ferris (Ed.), Teaching, Learning and the Net Generation: Concepts and Tools for Reaching Digital Learners. Hershey, PA: IGI Global.
Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University: What the Student Does (3rd ed.). Buckingham, GB: Open University Press.
An online discussion forum should be an effective way of engaging students in careful and considered reflection, yet often they represent time-consuming and frustrating experiences for faculty. Getting students to share thoughts and ideas, balancing contributions and knowing when to stimulate, moderate or step back can be challenging. I’ve long found the advice to faculty, much of it still rooted in the Computer Mediated Conferencing (CMC) models of the mid-1990s to mid-2000s, unresponsive to the context of learning and the changing nature, and expectations, of learners.
It is remarkable how quickly Gilly Salmon’s Five-Step-Model (Salmon, 2000) became for the majority what Stephen D. Brookfield might describe as a paradigmatic assumption (Brookfield, 1995). The need for familiarity with the tool or context, a first step, is itself now profoundly complex. Learners have hugely differing understandings of the function, and etiquette, ‘within’ a given online communication tool. One of Brookfield’s lovely examples of ‘hunting assumptions’ is the common-sense assumption that circulating around the classroom having assigned group tasks shows engagement, interest and commitment, whilst he suggests it may well be interpreted as a lack of trust and distort learner responses to the task in hand. How quickly we have adopted Salmon’s notion of responsiveness, encouraging faculty to respond to each student posting, as common sense, our paradigmatic assumption. Salmon herself is not to blame for this: the context of her original model was a very different digital landscape, the notion of personal presence was less well articulated, and learners’ experience of commenting, rating, sharing and reviewing in a myriad of different online contexts is something ‘new’.
So this week, while I watch colleagues around the globe initiate MOOCs and discuss global OER standards, I set myself a more modest task. I wanted to explore what we thought worked well in online threaded text-discussion and why there appeared to be an enduring negativity, from students and staff, about the dreaded ‘discussion board’.
I set up a one-week asynchronous online continuing professional development (CPD) workshop for faculty and learning support staff discussing effective threaded text-based discussions online (we have Blackboard 7). The workshop was to run during the working week, from Monday 9am to Friday 5pm, with a ‘new’ topic added each day and an encouragement for colleagues to ‘dip-in-and-out’. What was different from our usual institutional practice was that we did it in VoiceThread (www.voicethread.com).
A VoiceThread is a collaborative, multimedia slide show that holds images, documents and videos, and allows people to navigate slides and leave comments in four different ways: using text, voice via the computer microphone, an uploaded pre-recorded audio file, or video captured on a webcam. In the US users can also comment by telephone.
Users can also draw on the slides while commenting, allowing them to annotate a diagram, image or schematic; they can also zoom-in to the image for more detail. Users can also create multiple identities, allowing them for instance to take on a group leadership role whilst remaining a member of the group, or adopt more playful and creative personalities. VoiceThreads can be embedded in the existing VLE or another web page and can also be archived. It is primarily a web browser based tool, now also available on iOS mobile devices and available in an accessible screen reader version or very low bandwidth version, and users have no software to install.
The tool itself has strengths and weaknesses and, whilst I declare it is a tool I’ve used and evaluated previously (Burden & Atkinson, 2010), I am not seeking to promote it. Rather, what I wanted to do was to take faculty away from their existing assumptions about discussion threads and have the conversation in a very different context. As we explored a different theme each day, it proved remarkable how some contributions followed the ‘discussion board’ convention, whilst others playfully sought to exploit the new technological opportunities the environment afforded them. One colleague made a series of short, positive and responsive contributions in response to others, what might be seen as rather appropriate netiquette in online discussions. However, because the comments were all appended to the end of a timeline, those responses (unless clearly ‘tagged’ as such) appeared as ‘hanging interjections’ without context. None of the 79 contributions during the week, except for my own, used anything but text, which I felt was disappointing.
In coming weeks I will analyse the pattern and nature of the responses in more detail, and critique my own model of facilitation in this context, but what has emerged immediately is how quickly some ‘assumptions’ have been set and are subsequently modelled even when users find themselves in a different communication context. This is perhaps one of our biggest challenges as educational technologists and developers, or instructional designers: to recognise our quickly solidifying paradigmatic assumptions and move beyond them. The digi-ecology is in constant flux and we need to consistently challenge how we do what we do.
Brookfield, S. D. (1995). Becoming a Critically Reflective Teacher. John Wiley & Sons.
Burden, K., & Atkinson, S. (2010). De-Coupling Groups in Space and Time: Evaluating New Forms of Social Dialogue for Learning. In L. Shedletsky & J. E. Aitken (Eds.), Cases on Online Discussion and Interaction (pp. 141–158). Hershey, Pennsylvania: IGI Global.
Salmon, G. (2000). E-moderating : the key to online teaching and learning. London: Kogan Page.
The EdMediaShare site from JISC Digital Media is developing some serious traction to support useful and usable video content. I think it has proved itself as a ‘proof of concept’.
The EdMediaShare site allows for the sharing of educationally useful content, provides opportunities for ‘finding’ content and allows for ‘critique’. The browsing by discipline and collection is very useful. We hope the search by ‘learning design’ will also prove a successful attribute and are thrilled to see the DiAL-e being integrated. The original ‘community’ site on the JISC pages, part of the original 2006 project, was less successful than we would have liked, so hopefully EdMediaShare will fill that gap. It was always our intention that the framework would create a community of practice.
The challenge faced by the Open Educational Resources University is not translation, context or learning styles; it is not a question of interoperability of learning environments or granularity of learning objects, SCORM compliance or IMS standards; it is perhaps rather a question of academic identity. We have a lot of identity work to do.
I remember first meeting Wayne Mackintosh at the Third Pan-Commonwealth Forum on Open Learning (PCF3) in Dunedin, New Zealand, in July 2004, hosted by DEANZ, the Government of New Zealand and the Commonwealth of Learning (COL). The theme for PCF3 was "Building Learning Communities for Our Millennium: Reaching Wider Audiences through Innovative Approaches". Wayne was convener of our thoughtful, diverse, varied and anarchic think-tank. We talked about kite-marks, quality standards, intellectual colonialism and poverty. The seeds of the OERu were visible then.
It is no surprise to see Wayne Mackintosh, Director OER Foundation, leading the initiative for Open Educational Resources University, OER university (#oeru). The initiative is raising some interesting critiques, and questions, which Tony Bates summarizes succinctly. But these are still issues of institutional norms, governmental process and sectorial quality assurance. I sense we are asking a lot of people.
There are people in the world who are good at facilitating learners' encounters with new concepts and ideas; there are people who can enthuse, capture and motivate; and there are those who write, design, narrate and structure learning in meaningful ways. It is as often pride as institutional convention that gives rise to academics' conviction that they must fulfil all these roles. Whilst an academic was once the guardian, seeker, generator and clarifier of the codex of knowledge in their domain, they are now primarily its steward, and pride is best placed in a more defined function.
That knowledge was once defined in terms of individual libraries, writings and musings suggests only that it was confined to the means available to communicate it. To consider it now practical, or realistic, for an individual to hold the key to a domain of knowledge is nothing less than a delusion born of vanity.
The world of knowledge creation, dissemination and propagation has changed radically in the last 30 years, and with it, academic identity. It is simply illogical, not to say inefficient, to expect a single academic to research, write, and teach all the content for their university courses. What the OER movement represents is a 21st century model of knowledge propagation, a contemporary revisioning of the master-pupil relationship, and a means of making learning accessible beyond the single, constrained, voice of the solipsistic academic.
For the faculty that make up our institutions to accept their emerging role as validators of learning that happens without them necessarily ‘teaching’ what is validated, and as teachers of what is validated by others… that is a huge leap into the unknown, and that, surely, is the biggest challenge facing the OERu.
Relocated Adobe Presenter! Here's one that went a little astray when we moved the DiAL-e space from a Wetpaint Wiki to WordPress. This is the Adobe Presenter stand-alone presentation made for the JISC-UK Innovating e-Learning online conference, October 2008. Whilst the framework has certainly matured in the last three years, this still serves as a good overview of the framework itself and some of its early exemplars.
Can one know too much about the learning we design? Why is it we appear to know so little? It's hard to share what you can't articulate. This is an attempt to make the learning expectations, aspirations and intentions we have of learners as transparent as possible. The desire to produce a usable, intuitive (or at least helpful) toolkit to implement the SOLE model of learning design has seen several small incremental updates in 2011.
Version 2.3 of the SOLE toolkit is released today, 5th September, and introduces a 'modes of engagement' schematic on a new 'dashboard' sheet within the toolkit workbook. The toolkit remains a standard Microsoft Excel workbook, without macros or protected cells, that any user can customise and adapt.
Following the presentation of the SOLE model and toolkit at Madison-Wisconsin in August 2011, a number of conversations about the ‘diagnostic’ function of the SOLE toolkit have taken place.
One of the concerns of both faculty and students is contact time: how much contact time am I being offered (versus how much of it I take advantage of), and what other opportunities for facilitated guidance do I have? Why, indeed, do I as a student not recognise the ‘directed’ learning I have been pointed to? In the SOLE approach, the entire toolkit forms an advance organiser that demonstrates the consideration faculty have given to my learning time as a student.
The version of the toolkit presented at the 27th Distance Education Conference at Madison, Wisconsin broke the learning engagements students were being asked to complete, under the nine elements of the model, into five areas, or modes, of engagement: reflective, introspective, social, facilitated and directed. In the next version of the toolkit I’m exploring a ‘dashboard’.
The dashboard is a separate sheet that simply presents an overview, to faculty and potentially to students, of the modes of learning being designed for the student. It shows at a glance, alongside the full profile of activities for each week or unit, a schematic illustrating how much of the activity is facilitated, directed, and so on.
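The aggregation behind such a dashboard can be sketched in a few lines. The toolkit itself is a plain Excel workbook, so the following Python is only an illustration, under an assumed data shape of (week, mode, minutes) tuples, of how per-week activity might be rolled up into percentages by mode of engagement:

```python
from collections import defaultdict

# Hypothetical activity plan: (week, mode, minutes) per learning engagement.
# The five modes are those named in the toolkit: reflective, introspective,
# social, facilitated and directed.
activities = [
    (1, "directed", 120),
    (1, "facilitated", 60),
    (1, "social", 30),
    (2, "directed", 90),
    (2, "reflective", 45),
    (2, "introspective", 45),
]

def dashboard(activities):
    """Summarise each week's activity as a percentage per mode,
    the sort of at-a-glance view the dashboard sheet presents."""
    weeks = defaultdict(lambda: defaultdict(int))
    for week, mode, minutes in activities:
        weeks[week][mode] += minutes
    summary = {}
    for week, modes in sorted(weeks.items()):
        total = sum(modes.values())
        summary[week] = {m: round(100 * t / total) for m, t in sorted(modes.items())}
    return summary

print(dashboard(activities))
```

In week 1 above, for example, 120 of 210 minutes are directed, so the schematic would show roughly 57% directed engagement for that week.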