There are social conventions, unwritten rules, around feedback in a formal education setting. Most students associate feedback with the voice of authority, in the form of red marks on a written script! It is important to redefine feedback for university and professional learners.
In this short overview video (3'30") Simon outlines four 'contractual' arrangements all faculty should establish at the outset of their course or module with respect to feedback for learning.
1) ensuring that students know WHERE feedback is coming from
2) WHEN to expect feedback
3) WHAT you mean by feedback
4) WHAT to DO with the feedback once it is received.
Feedback is undoubtedly expected from the tutor or instructor, but there are numerous feedback channels available to students, if only they are conscious of them. These include feedback from their peers but, most importantly, from self-assessment and from the learning activities designed into class.
Knowing where feedback is coming from, as part of the learning process, relieves the pressure on the tutor and in effect makes feedback a constant 'loop'. Knowing what to look out for, and possibly having students document the feedback they receive, supports their metacognitive development.
Being clear with students as to what you regard as feedback is an effective way of ensuring that students take ownership of their own learning. My own personal definition is extremely broad: from the follow-up comments one receives on anything shared in an online environment, to the nods and vocal agreement in class in response to things you say. These are all feedback. Knowing that also encourages participation!
Suggesting to students what they do with feedback will depend a little on the nature of the course and the formal assessment processes. Students naturally enough don't do things for the sake of it, so it has to be of discernible benefit to them. If there is some form of portfolio-based coursework assessment, you could ask for an annotated 'diary' of feedback received through the course. If it's a course with strong professional interpersonal outcomes (like nursing or teaching, for example), you might ask students to identify their favourite and least favourite piece of feedback they experienced during the course, with a commentary on how it affected their subsequent actions.
What's important is to recognise that there are social conventions around feedback in a formal education setting, normally associated with red marks on a written script, and that feedback needs to be redefined for university and professional learners.
Simon Paul Atkinson (PFHEA)
SIJEN: Consultancy for International Higher Education
I have no idea what the protocol is for naming versions of things. I imagine, like me, someone has an idea of what the stages are going to look like, and when a truly fresh version is going to happen. For me, I have a sense that version 4.0 of the SOLE Toolkit will incorporate what I am currently learning about assessment and 'badges', self-certification and team marking. But for now I'm not there yet, and am building on what I have learnt about student digital literacies, so I will settle for Version 3.5.
This version of the SOLE Toolkit, 3.5.1, remains a completely free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. In version 3.0 I added more opportunities for the student to use the toolkit as an advance organiser, offering ways to record their engagement with their learning. It also added some ability to sequence learning so that students could better plan their learning, although I maintained this was guidance only and should allow students to determine their own pathways for learning.
Version 3.5 has two significant enhancements. Firstly, it introduces a new dimension, providing a rich visualisation of the learning spaces and tools that students are to engage with in their learning. This provides an alternative, fine-grained view of students' modes of engagement in their learning. It permits the designer to plan not only for a balance of learning engagement but also a balance of environments and tools. This should allow designers to identify where 'tool-boredom' or 'tool-weariness' is a possible danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.
Secondly, it allows for a greater degree of estimation of staff workload, part of the original purpose of the SOLE Model and Toolkit project back in 2009. These faculty-time calculations for design and facilitation are based on the learning spaces and tools to be used. This function allows programme designers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and facilitate learning around those materials.
As promised, this version of the SOLE Toolkit, 3.5, remains a free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. Version 3.5 has two significant enhancements.
Rich visualisation of the learning spaces and tools: an alternative, fine-grained view of the learning spaces and tools students are to engage with, and of their modes of engagement in their learning. It permits the designer to plan not only for a balance of learning engagement but also a balance of environments and tools. This should allow designers to identify where 'tool-boredom' or 'tool-weariness' poses a danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.
Faculty-time calculations in design and facilitation: based on the learning spaces and tools to be used, there is now a function to allow programme designers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and facilitate learning around those materials.
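In essence, the faculty-time calculation multiplies the student hours planned for each learning space or tool by per-tool rates for design effort and facilitation effort, and sums the results. The sketch below illustrates that logic only; it is a hypothetical Python rendering, not part of the Excel toolkit, and the tool names and hourly rates are invented for illustration (the toolkit lets designers supply their own figures).

```python
# Illustrative (invented) hours of staff time per hour of student activity,
# split into design effort and facilitation effort.
TOOL_TIME = {
    "discussion_forum": {"design": 0.5, "facilitate": 1.5},
    "video_lecture":    {"design": 4.0, "facilitate": 0.25},
    "reading":          {"design": 0.5, "facilitate": 0.0},
}

def faculty_hours(activities):
    """Sum design and facilitation hours for a list of
    (tool, student_hours) pairs."""
    design = facilitate = 0.0
    for tool, student_hours in activities:
        rates = TOOL_TIME[tool]
        design += rates["design"] * student_hours
        facilitate += rates["facilitate"] * student_hours
    return {"design": design, "facilitate": facilitate}

# A small weekly plan: 2 hrs of video, 3 hrs of forum work, 4 hrs of reading.
plan = [("video_lecture", 2), ("discussion_forum", 3), ("reading", 4)]
print(faculty_hours(plan))  # design-heavy media vs facilitation-heavy forums
```

The point the calculation makes visible is the trade-off: recorded media are expensive to design but cheap to facilitate, while discussion spaces are the reverse.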
This builds on functionality newly designed and released in September 2014 in version 3 of the toolkit, namely:
Predicted Workload – the amount of time the designer anticipates students will spend on activities is charted.
Sequencing activities – the ability to suggest the order in which activities should be tackled. It remains an open approach, so the numbering system (letters, Roman numerals, multiple instances of the same item) is open. It is considered important in the SOLE Model that students take responsibility for the learning process, and so the sequence should be suggested or advised rather than enforced.
Completion Record – a column has been added to allow students to record whether an activity has been completed, alongside indicating the amount of time actually spent on any given activity.
Objectives Met Record – an area is included to allow students to indicate that they believe they have met the objectives for each individual topic/week.
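Taken together, these version 3 features amount to a simple per-activity record: a suggested sequence, a designer-predicted workload, and student-entered completion and actual-time figures, rolled up per topic with a self-assessed 'objectives met' flag. The sketch below is a hypothetical data-model rendering of that record (the real toolkit is an Excel workbook; all names here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    sequence: str           # suggested order only -- students may deviate
    predicted_minutes: int  # designer's estimate of student time
    actual_minutes: int = 0 # student-recorded time
    completed: bool = False # student-recorded completion

@dataclass
class Topic:
    title: str
    activities: list = field(default_factory=list)
    objectives_met: bool = False  # the student's own judgement

    def predicted_total(self):
        """Designer's predicted workload for the topic."""
        return sum(a.predicted_minutes for a in self.activities)

    def actual_total(self):
        """Time the student actually logged on completed activities."""
        return sum(a.actual_minutes for a in self.activities if a.completed)

week1 = Topic("Week 1: Introductions")
week1.activities = [
    Activity("Watch overview video", "a", 30),
    Activity("Forum introductions", "b", 45),
]
week1.activities[0].actual_minutes = 40
week1.activities[0].completed = True

print(week1.predicted_total(), week1.actual_total())
```

Comparing `predicted_total` with `actual_total` is precisely the advance-organiser function: it lets the student see how their real effort tracks the designer's expectations.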
At its core the toolkit serves to implement a model of learning based on the SOLE Model itself and it is worth reminding yourself how the model is designed to work.
Here are two short videos that detail the significant enhancements made in Version 3.5 of the Toolkit.
Visualisation of Learning spaces
Calculating Faculty-Time in Design and Facilitation
Back in the late northern-hemisphere summer of 2013 I drafted a background paper on the differences between Educational Data Mining, Academic Analytics and Learning Analytics. Entitled 'Adaptive Learning and Learning Analytics: a new design paradigm', it was intended to 'get everyone on the same page', as many people at my University, from very different roles, responsibilities and perspectives, had something to say about 'analytics'. Unfortunately for me, I then had nearly a year's absence through ill-health, and I came back to an equally obfuscated landscape of debate and deliberation. So I opted to finish the paper.
I don't claim to be an expert on learning analytics, but I do know something about learning design, about teaching online and about adapting learning delivery and contexts to suit different individual needs. The paper outlines some of the social implications of big data collection. It looks to find useful definitions for the various fields of enquiry concerned with collecting learner data and making something useful of it to enrich the learning process. It then suggests some of the challenges that such data collection involves (decontextualisation and privacy) and the opportunities it represents (self-directed learning and the SOLE Model). Finally, it explores the impact of learning analytics on learning design and suggests why we need to re-examine the granularity of our learning designs.
"The influences on the learner that lie beyond the control of the learning provider, employer or indeed the individual themselves, are extremely diverse. Behaviours in social media may not be reflected in work contexts, and patterns of learning in one discipline or field of experience may not be effective in another. The only possible solution to the fragmentation and intricacy of our identities is to have more, and more interconnected, data, and that poses a significant problem.
Privacy issues are likely to provide a natural brake on the innovation of learning analytics. Individuals may not feel that there is sufficient value to them personally to reveal significant information about themselves to data collectors outside the immediate learning experience, and that information may simply be inadequate to make effective adaptive decisions. Indeed, the value of the personal data associated with the emerging learning analytics platforms may soon see a two-tier pricing arrangement, whereby a student pays a lower fee if they engage fully in the data gathering process, providing the learning provider with social and personal data as well as their learning activity, and higher fees for those that wish to opt out of the ‘data immersion’.
However sophisticated the learning analytics platforms, algorithms and user interfaces become in the next few years, it is the fundamentals of the learning design process which will ensure that learning providers do not need to ‘re-tool’ every 12 months as technology advances, and that the optimum benefit for the learner is achieved. Much of the current commercial effort, informed by ‘big data’ and ‘every-click-counts’ models of Internet application development, is largely devoid of any educational understanding. There are rich veins of academic tradition and practice in anthropology, sociology and psychology, in particular, that can usefully inform enquiries into discourse analysis, social network analysis, motivation, empathy and sentiment study, predictive modelling and visualisation, and engagement and adaptive uses of semantic content (Siemens, 2012). It is scholarship- and research-informed learning design itself, grounded in meaningful pedagogical and andragogical theories of learning, that will ensure that technology solutions deliver significant and sustainable benefits.
To consciously misparaphrase American satirist Tom Lehrer, learning analytics and adaptive learning platforms are 'like sewers: you only get out of them what you put into them'."
Siemens, G. (2012). Learning analytics: envisioning a research discipline and a domain of practice. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 4–8). New York, NY, USA: ACM. doi:10.1145/2330601.2330605
Sharing a paper today on the visualisation of educational taxonomies. I have finally got around to putting into a paper some of the blog postings, discussion, tweets and ruminations of recent years on educational taxonomies. I am always struck, in talking to US educators (and faculty training teachers in particular), by the very direct use made of Bloom's original 1956 educational taxonomy for the cognitive domain. They seem oblivious, however, to other work that might sit (conceptually) alongside Bloom as a way to support their practice.
In New Zealand, whilst at Massey, I got into some fascinating discussions with education staff about the blurring of the affective and cognitive domains, significant in cross-cultural education, and this led me to look for effective representations of the domains. I came across an unattributed circular representation that made instant sense to me, and set about mapping other domains in the same way. In the process I found not only a tool that supported and reinforced the conceptual framework represented by Constructive Alignment, but also a visualisation that supported engagement with educational technologies and assessment tools. I hope this brief account is of use to people and am, as always, very open to feedback and comment.
I'm very grateful to those colleagues across the globe who have expressed interest in using these visual representations and hope to be able to share some applicable data with everyone in due course.
I think being able to visualise things is important. Faculty and learning designers need to be able to see Intended Learning Outcomes (ILOs) take shape, and many find existing lists uninspiring. It’s not uncommon for faculty and instructional designers to grow tired and weary of ILOs; they can feel restrictive, repetitive, formulaic and sometimes obstructive. In previous posts I’ve tried to suggest that the bigger picture, the challenges of effective 21st-century university-level learning design, makes them not only useful but essential. If you don't agree, don’t bother reading; I’m not going to try and persuade you. If you think there’s some truth in the argument and you want to engage with ILOs to make your teaching more focussed, your students increasingly autonomous and your graduates equipped with meaningful evidence, then I hope I have something worthwhile to share and will welcome your thoughts.
My argument is that a module (a substantial unit of a full year's undergraduate study), and the programme of which it is part, should have clearly articulated outcomes in four domains:
Knowledge and understanding – or the knowledge domain
Intellectual Skills – or the cognitive domain
Professional Skills – or the affective domain
Transferable Skills – or the psychomotor domain
I’m suggesting one SHOULD expect to see a different distribution of ILOs between the outcomes in these domains depending on the focus of the module and the level of study. One might expect to see a second year anthropology module on ‘theoretical perspectives’ emphasising cognitive outcomes and a module being studied alongside it on ‘research design and techniques’ emphasising affective and psychomotor outcomes. One might reasonably expect to see more foundational ‘knowledge and understanding’ outcomes in the first year of a programme of study, and more ‘cognitive’ outcomes at the end of the programme. The lack of this 'designed articulation' in many modules undermines their value to the student and ultimately to faculty.
The basic principle is that an outcome should be assessable. Lots of great stuff can happen in your teaching and students’ learning that DOESN’T need to be assessed. It can be articulated in the syllabus, it just isn't a measured outcome. A student should be able, at the end of this course of study (module or programme), to evidence that they have attained the intended learning outcomes. This evidence has been assessed in some way and the student is then able to point to the ILOs amassed throughout their programme and say “I can demonstrate that I learnt to DO this”.
There has been a significant shift in the language we now use from the original work in the 1950s by Bloom and colleagues. The passively descriptive language of Bloom's Taxonomy has become the active language of Anderson and Krathwohl (Anderson & Krathwohl, 2001). The taxonomies have moved from Evaluation to Evaluate, from Analysis to Analyse. This is significant in that the emphasis has moved away from describing what the focus of the teaching is supposed to be, to the demonstrable outcomes of the learning.
The illustration above consists of four visual ‘wheels’ that I have used to discuss learning outcomes with faculty in the context of module and programme design at Massey University in New Zealand and at the LSE and BPP University College in the United Kingdom. These visual representations were inspired by work done elsewhere, on the cognitive domain in particular. The first documented example of this circular representation I have been able to find is attributed to Barbara Clark in 2002, but a great many people have since represented Bloom’s original, and the revised, cognitive domain in this way.
The circular representation has the higher-level terms at the centre, proto-verbs if you will, surrounded by a series of active verbs that articulate actions an individual might undertake to generate evidence of their ability to represent the proto-verb. The circular visualisation also serves to create a more fluid representation of the stages, or divisions, in the proto-verbs. Rather than a strict ‘step-by-step’ representation where one advances ‘up’ the proto-verbs, one might consider this almost like the dial on an old telephone: in every case one starts at the ‘foundational’ and dials up through the stages to the ‘highest’ level. Each level relies on the previous. It may be implicit that to analyse something, one will already have acquired a sense of its application, and that application is grounded in subject knowledge and understanding. So the circle is a useful way of visualising the interconnected nature of the process. Most importantly in my practice, it’s a great catalyst for debate.
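The 'dialling up' idea above can be sketched as data: an ordered list of proto-verbs, each ringed by active verbs. The six proto-verbs below are those of the revised cognitive domain (Anderson &amp; Krathwohl, 2001); the particular active verbs chosen are illustrative examples only, not a definitive list.

```python
# The cognitive 'wheel' as data: proto-verbs ordered from foundational
# to highest, each with a few illustrative active verbs.
COGNITIVE_WHEEL = [
    ("Remember",   ["define", "list", "recall"]),
    ("Understand", ["classify", "explain", "summarise"]),
    ("Apply",      ["demonstrate", "implement", "use"]),
    ("Analyse",    ["compare", "differentiate", "organise"]),
    ("Evaluate",   ["critique", "judge", "justify"]),
    ("Create",     ["design", "construct", "produce"]),
]

def levels_up_to(proto_verb):
    """'Dialling up' to a level implies the levels beneath it: return
    every proto-verb from the foundation up to the one named."""
    names = [name for name, _ in COGNITIVE_WHEEL]
    return names[: names.index(proto_verb) + 1]

print(levels_up_to("Analyse"))
```

The ordered list captures the claim that each level relies on the previous, while the verb rings capture the move from describing teaching to evidencing learning.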
The circular representations of the domains and associated taxonomies also serve to make learning designers aware of the language they use. Can a verb be used at different levels? Certainly. Why? Because context is everything. One might ‘identify’ different rock samples in a first-year geology class as part of applying a given classification of rocks to samples, or one might identify a new species of insect as part of a postgraduate research programme. The verb on its own does not always denote level. I talk about the structure of ILOs in a subsequent post.
More recent representations have created new complex forms that include the outer circle illustrated here. I’ve found these rather useful, in part because they often prove contentious. If the inner circle represents (in my versions) the proto-verbs within our chosen taxonomies, and the next circle represents the active verbs used to describe the Intended Learning Outcomes (ILOs) AND the Learning and Teaching Activities, the outermost circle represents the evidence and assessment forms used to demonstrate that verb. Increasingly I’ve used this to identify educational technologies and get faculty thinking more broadly about how they can assess things online as well as in more traditional settings. The outermost circle will continue to evolve as our use of educational technologies evolves. In Constructive Alignment one might reasonably expect students’ learning activity to ‘rehearse’ the skills they are ultimately to evidence in assessment (Biggs & Collis, 1982; Boud & Falchikov, 2006), and the forms to enable that are becoming increasingly varied.
One of my favourite representations of the relationship between the knowledge dimension and the cognitive domain is from Rex Heer at Iowa State University’s Center for Excellence in Learning and Teaching (http://www.celt.iastate.edu/teaching/RevisedBlooms1.html). It’s an interactive model that articulates the relationship, as Anderson and Krathwohl saw it, rather well. My own interest, as we look to effective ILOs, is to separate out the knowledge dimension as a subject or knowledge domain and have faculty articulate this clearly for students, before reconnecting it to the other domains. A process I’ll talk about subsequently.
Here are my four ‘working circles’ using adaptations of taxonomies from Anderson and Krathwohl (Knowledge and Understanding, and Cognitive), Krathwohl et al (Affective) and Dave (Psychomotor). I have adapted the Knowledge Dimension of Anderson and Krathwohl to do two things: to describe the dimension in terms of active verbs rather than as a definition of the nature of the knowledge itself, and to incorporate a stage I believe is under-represented in their articulation. I have added the ability to ‘contextualise’ subject knowledge between the ability to specify it (Factual) and the ability to conceptualise (Conceptual). I have also rearticulated the original ‘Metacognitive’ as the ability to 'Abstract'. This will doubtless need further work. My intent is not to dismiss the valuable work already in evidence around the relationship between a knowledge dimension and the cognitive domain; rather, it is to enable faculty, specifically when writing learning outcomes, to identify the subject, discipline or knowledge to be enabled in more meaningful ways.
These images are provided as JPG images. If you would like me to email the original PowerPoint slides (very low-tech!) so that you can edit, amend and enhance, I am happy to do so. I only ask that you enhance my practice by sharing your results with me.
I hope these provoke thought, reflection and comment. Feel free to use them with colleagues in discussion and let me know if there are enhancements you think would make them more useful to others.
Cognitive Domain - Intellectual Skills
Affective Domain - Professional and Personal Skills
Psychomotor Domain - Practical, Technical and Transferable Skills
Knowledge Domain - Subject and Discipline Knowledge
The next post will illustrate the usefulness of these visualisations in drafting Intended Learning Outcomes with some examples.
Can one know too much about the learning we design? Why is it we appear to know so little? It's hard to share what you can't articulate. This is an attempt to make the learning expectations, aspirations and intentions we have of learners as transparent as possible. The desire to produce a usable, intuitive (or at least helpful) toolkit to implement the SOLE model of learning design has seen several small incremental updates in 2011.
Version 2.3 of the SOLE toolkit is released today 5th September and introduces a 'modes of engagement' schematic to a new 'dashboard' sheet within the toolkit workbook. The toolkit remains a standard Microsoft Excel workbook, without macros or protected cells that any user can customise and adapt.
The 27th Annual Distance Learning and Teaching Conference at Madison-Wisconsin this August was a diverse and varied programme attended by some 900 distance educators from all sectors, from K-12 to professional education. My contribution was a half-day DiAL-e Workshop with Kevin Burden (University of Hull) attended by some 24 people. The workshop went relatively well but also gave us an insight into a variety of cultural differences in such settings. I was able to learn from this and the 45 Minute Information Session on the SOLE model the following day definitely had a better 'buzz'. In addition I contributed to a new format this year, a 5+10 Videoshare session where participants had (supposedly!) produced a 5 minute video and then made themselves available to discuss it for 10 minutes.
All the sessions went well but the SOLE model and toolkit seemed to grab some serious interest and I will hope to have the opportunity to go back to the States and work with colleagues on learning design projects in the future.
This year's ALDinHE conference had as its theme “Engaging Students - Engaging Learning” and consisted of a series of small, diverse but very practical sessions, ranging from identifying successful work-based learning models to the effective induction of non-traditional learners. In amongst all of that I ran a small workshop on Wednesday 20th April using a single webpage on the WordPress site for the DiAL-e Project. (You are welcome to access the workshop resources if you are interested in the DiAL-e.)
I had two posters at the conference, a solo effort with the SOLE model and a joint effort with Kevin Burden from the University of Hull featuring the DiAL-e framework work we have been doing since 2006. There was an excellent response to the SOLE poster and considerable interest in its potential use as a staff development stimulus. I was particularly keen to suggest it form a useful tool for course team development in the broader context of course design, but every conversation helps me refine my own ideas, which is after all why we go to these conferences!
A version of this post first appeared at spatkinson.wordpress.com on May 13, 2010.
The following brief video presentation was prepared for a Course Team workshop at Massey University NZ in May 2010 to introduce the SOLE Model.
The SOLE model is intended to be developmental, diagnostic, evaluative and descriptive. It is borne out of a desire to make the learning design process transparent to students, to encourage staff to share ‘patterns’ of learning with each other and to provide a basis for self-evaluation and development of specific learning designs. The model is not concerned with the design of specific learning activities but rather the appropriate balance between the different modes of student engagement anticipated.
The model does not prevent an academic scheduling four hours of contact time a week and delivering a didactic lecture, but it would illuminate clearly that that was the approach being undertaken. Likewise, the model in and of itself does not prevent staff from reproducing an identical pattern of learning every week through a paper or course, but again, the model's associated toolkit would make that process clear.
The SOLE model is not prescriptive and it is possible for teams to change and modify any aspect of the toolkit to suit their needs. The intention however is to provide staff with a model of effective practice such that one might be concerned about the quality of the student learning experience if the model illustrated a consistently ‘unbalanced’ approach.
One would anticipate that the visualisation generated by the toolkit would reflect patterns of learning that differ from paper to paper, and from week to week. One could anticipate, for example, that in the first week of an undergraduate paper there would be significantly more ‘teacher-centredness’ than in the twelfth week of a postgraduate paper. The visualisation will differ; the patterns can be expected to reflect different levels of engagement.
Centrality of Biggs Constructive Alignment
It is no coincidence that the model places the intended learning outcomes (ILOs) at the centre. In each constructively aligned paper the pattern will be different, because the learning outcomes, the assessment designed to elicit evidence of attainment, and the patterns of teaching required to support that process will each be different. The SOLE model is precisely that: a model, not a template. The model can, and should, be adapted by staff to suit their particular approach to learning. It should reflect the nature of their discipline, the students' existing context and the specific teaching environment.