A.I.: Anticipating Impact of Educational Governance

It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. Intended to be more dialogue than monologue, the session may not have landed quite where I had hoped. It is an exciting time to be thinking about educational governance, and my key message was ‘don’t get caught up in the hype’.

Understanding media representations of “Artificial Intelligence”

Mapping types of AI in 2023

We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience. They are not informed by senses derived from an organic nervous system. However much we perceive these systems to mimic human behaviour, it is important to remember that they are machines.

This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language in this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist and the philosopher, and something different again to the psychologist and the social anthropologist. These semantic discrepancies become unbridgeable when we start to talk about consciousness.

In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.

What we are seeing, in 2023, is the extension of the ‘memory’, or scope of data inputs, into larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.

Ethical spheres for Governing Boards to reflect on in 2023

Ethical and Moral Implications

Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.

Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment and for selection processes (of both staff and students); where A.I. is being used, Boards need to ensure that, wherever possible, no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.


Authenticity: honest authors, being human

I briefly had a form up on my website for people to contact me if they wanted to use any of my visualisations, my visuals of theory in practice. I had to take it down because ‘people’ proved incapable of reading the text above it, which clearly stated its purpose. They insisted on trying to persuade me they had something to flog. Often these individuals, generalists, were most likely using AI to generate blog posts on some vaguely related theme.

I have rejected hundreds of approaches in recent years from individuals (I assume they were humans) who suggested they could write blogs for me. My site has always been a platform for me to disseminate my academic outputs, reflections and insights. It has never been about monetising my outputs or building a huge audience. I recognise that I could be doing a better job of networking; I consistently attract a couple of hundred different individuals visiting the site each week, but I am something of a misanthrope, so it goes against the grain to crave attention.

We should differentiate between the spelling and grammar assistance built into many desktop writing applications and the large language models (LLMs) that generate original text based on an initial prompt. I have not been able to adjust to the nascent AI applications (Jasper, ChatGPT) in supporting my own authorship. I have used some of these applications as long-text search engines, but stylistically it just doesn’t work for me. I use the spelling and grammar checking functionality of writing tools but don’t allow them to complete my sentences for me. I regularly use generative AI applications to create illustrative artwork (Midjourney) and always attribute those outputs, just as I would if I were to download someone’s work from Unsplash.com or other similar platforms.

For me, in 2023, the key argument is surely about the human-authenticity equation. To post blogs using more than a spelling and grammar checker, and not to declare that authorship assistance, strikes me as dishonest. It’s simply not your work, not your thoughts; you haven’t constructed an argument. I want to know what you, based on your professional experience, have to say about a specific issue. Obviously I would like it to be written in flowing prose, but I can forgive clumsy language used by others and myself. If it’s yours.

It makes a difference to me knowing that a poem has been born out of 40 years of human experience rather than being the product of the undoubtedly clever linguistic manipulation of large language models devoid of human experience. That is not to say that these digital artefacts are not fascinating or have no value. They are truly remarkable; a song generated by AI can be a pleasure to listen to, but not being able to relate the experiences conveyed through the song back to an individual simply makes it different. The same is true of artworks and all writing. We need to learn to differentiate between computer intelligence and human intelligence. Where the aim is for ‘augmentation’, such enhancements should be identifiable.

I want to know, when I am listening to, looking at or reading any artefact, whether it was generated by, or with assistance from, large generative AI models, or whether it is essentially the output of a human. This blog was created without LLM assistance. I wonder why other authors don’t declare it when the opposite is true.

Image credit: Midjourney 14/06/23

Empower Learners for the Age of AI: a reflection

During the Empower Learners for the Age of AI (ELAI) conference earlier in December 2022, it became apparent to me that not only does Artificial Intelligence (AI) have the potential to revolutionize the field of education, but that it already is doing so. Beyond the hype and enthusiasm, however, there are enormous strategic policy decisions to be made, by governments, institutions, faculty and individual students. Some of the ‘end is nigh’ messages circulating on social media in the light of the recent release of ChatGPT are fanciful click-bait; some, however, fire a warning shot across the bow of complacent educators.

It is certainly true to say that if your teaching approach is to deliver content knowledge and assess the retention and regurgitation of that same content knowledge then, yes, AI is another nail in that particular coffin. If you are still delivering learning experiences the same way that you did in the 1990s, despite Google Search (b.1998) and Wikipedia (b.2001), I am amazed you are still functioning. What the emerging fascination with AI is delivering is an accelerated pace to the self-reflective processes that all university leadership should be undertaking continuously.

AI advocates argue that by leveraging the power of AI, educators can personalize learning for each student, provide real-time feedback and support, and automate administrative tasks. Critics argue that AI dehumanises the learning process, is incapable of modelling the very human behaviours we want our students to emulate, and can be used to cheat. Like any technology, AI also has its disadvantages and limitations. I want to unpack these from three different perspectives: the individual student, faculty, and institutions.

Get in touch with me if your institution is looking to develop its strategic approach to AI.

Individual Learner

For learners whose experience is often orientated around learning management systems, or virtual learning environments, existing learning analytics are being augmented with AI capabilities. Where in the past students might be offered branching scenarios preset by learning designers, the addition of AI functionality offers the prospect of algorithms that more deeply analyze a student’s performance and learning approaches, and provide customized content and feedback tailored to their individual needs. This is often touted as especially beneficial for students who may have learning disabilities or those who are struggling to keep up with the pace of a traditional classroom, but surely the benefit is universal when realised. We are not quite there yet. Identifying ‘actionable insights’ is possible; the recommended actions are harder to define.
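The gap between insight and action can be made concrete with a toy example. The sketch below is purely illustrative and not any platform’s actual logic; the function name, thresholds and recommendation strings are all invented, and the hard part, choosing the right action, is simply hard-coded:

```python
# Illustrative sketch only: a toy "actionable insight" rule of the kind
# learning-analytics platforms try to automate. All names and thresholds
# here are hypothetical, not drawn from any real product.

def recommend(scores, attempts):
    """Map a student's quiz scores and attempt counts to a next step."""
    avg = sum(scores) / len(scores)
    if avg < 0.5 and max(attempts) > 2:
        return "refer to tutor"          # struggling despite repeated effort
    if avg < 0.5:
        return "assign revision module"  # low scores, little effort so far
    return "advance to next topic"       # performing well enough to move on

print(recommend([0.4, 0.3], [3, 1]))  # → refer to tutor
```

Spotting the low average is the easy half; whether “refer to tutor” is actually the right intervention for this student is exactly the judgement that resists being reduced to a threshold.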

The downside for the individual learner will come from poorly conceived and implemented AI opportunities within institutions. Being told to complete a task by a system, rather than by a tutor, will be received very differently depending on the epistemological framework that you, as a student, operate within. There is a danger that companies presenting solutions that may work for continuing professional development will fail to recognise that a 10-year-old has a different relationship with knowledge. As an assistant to faculty, AI is potentially invaluable; as a replacement for tutor direction it will not work for the majority of younger learners within formal learning programmes.

Digital equity becomes important too. There will undoubtedly be students today, from K-12 through to university, who will be submitting written work generated by ChatGPT. Currently free for ‘research’ purposes (them researching us), ChatGPT is being raved about across social media platforms by anyone who needs to author content. But for every student who is digitally literate enough to have found their way to the OpenAI platform and can use the tool, there will be others who do not have access to a machine at home, the bandwidth to make use of the internet, or even internet access at all. Merely accessing the tools can be a challenge.

The third aspect of AI implementation for individuals is around personal digital identity. Everyone, regardless of their age or context, needs to recognise that ‘nothing in life is free’. Whenever you use a free web service you are inevitably being mined for data, which in turn allows the provider of that service to sell your presence on their platform to advertisers. Teaching young people about the two fundamental economic models that operate online, subscription services and surveillance capitalism, MUST be part of every curriculum. I would argue this needs to be introduced in primary schools and built on in secondary. We know that AI data models require huge datasets to be meaningful, so our data is what fuels these AI processes.


Faculty

Undoubtedly, faculty will gain through AI algorithms’ ability to provide real-time feedback and support: to continuously monitor a student’s progress and provide immediate feedback and suggestions for improvement. On a cohort basis this is proving invaluable already, allowing faculty to adjust the pace or focus of content and learning approaches. A skilled faculty member can also, within the time allowed to them, differentiate their instruction, helping students to stay engaged and motivated. Monitoring students’ progress through well-structured learning analytics is already available through online platforms.

What of in-classroom teaching spaces? One of the sessions at ELAI showcased AI operating in a classroom, interpreting students’ body language, interactions and even eye movements. Teachers will tell you that class sizes are a prime determinant of student success. Smaller classes mean that teachers can ‘read the room’ and adjust their approaches accordingly. AI could be used to justify class sizes beyond any claim to be manageable by an individual faculty member.

One could imagine a school built with extensive surveillance capability: every classroom equipped with full audio and visual capture, physical behaviour algorithms, eye tracking and audio analysis. In that future, advocates would suggest that the role of the faculty becomes more that of a stage manager than a subject authority. Critics would argue that a classroom without a meaningful human presence is a factory.


Institutions

The attraction for institutions of AI is the promise to automate administrative tasks, such as grading assignments and providing progress reports, currently carried out by teaching faculty. This, in theory, frees up those educators to focus on other important tasks, such as providing personalized instruction and support.

However, one concern touched on at ELAI was the danger of AI reinforcing existing biases and inequalities in education. An AI algorithm is only as good as the data it has been trained on. If that data is biased, its decisions will also be biased. This could lead to unfair treatment of certain students and could further exacerbate existing disparities in education. AI will work well with homogeneous cohorts, where the perpetuation of accepted knowledge and approaches is what is expected; less well with diverse cohorts, where assumptions are there to be challenged.
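The “biased data in, biased decisions out” point can be shown with a deliberately naive model. Everything here is invented for illustration: a ‘classifier’ that simply learns each group’s majority historical outcome will faithfully reproduce whatever imbalance the records contain.

```python
# Toy illustration of bias propagation: the data is fabricated, and the
# "model" is just a per-group majority vote, but the effect is the same
# one that worries critics of AI in education.
from collections import Counter

def train(records):
    """records: list of (group, outcome) pairs. Learns the majority
    historical outcome for each group and predicts it for everyone."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# Skewed historical data: group B students were mostly marked "fail".
history = [("A", "pass")] * 8 + [("A", "fail")] * 2 \
        + [("B", "pass")] * 3 + [("B", "fail")] * 7

model = train(history)
print(model)  # → {'A': 'pass', 'B': 'fail'}
```

No amount of algorithmic sophistication on top fixes this if the historical records themselves encode the disparity; the three passing B students are simply erased by the majority.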

This is a problem. In a world in which we need students to be digitally literate and AI literate, to challenge assumptions while recognising that some sources are verified and others are not, institutions that implement AI trained on existing cohorts are likely to restrict the intellectual growth of those that follow.

Institutions rightly express concerns about the cost of implementing AI in education and the costs associated with monitoring its use. While the initial investment in AI technologies may be significant, the long-term cost savings and potential benefits may make it worthwhile. No one can be certain how the market will unfold. It is possible that many AI applications will become so cheap under some model of surveillance capitalism as to be negligible, even free. However, many AI applications, such as ChatGPT, consume enormous computing power, and since little of their output is cacheable and retained for reuse, they are likely to become costly.

Institutions wanting to explore the use of AI are likely to find they are being presented with additional, or ‘upgraded’ modules to their existing Enterprise Management Systems or Learning Platforms.


It is true that AI has the potential to revolutionize the field of education by providing personalized instruction and support, real-time feedback, and automated administrative tasks. However, institutions need to be wary of the potential for bias, aware of privacy issues and very attentive to the nature of the learning experiences they enable.


Image created using DALL-E

The threat to the integrity of educational assessments is not from ‘essay mills’ but from Artificial Intelligence (AI)


It is not so long ago that academics complained that essay mills, ‘contract cheating’ services and commercial companies piecing together ‘bespoke’ answers to standard essay questions were undermining the integrity of higher education’s assessment processes. The outputs of these less than ethically justifiable endeavours were designed to evade the plagiarism detection software (such as Turnitin and Urkund) that so many institutions have come to rely on. This reliance, in part the result of increased student-tutor ratios, the use of adjunct markers and poor assessment design, worked for a while. It no longer works particularly well.

If you are interested in reviewing your programme or institutional assessment strategy and approaches please get in touch. This consultancy service can be done remotely. Contact me

Many institutions sighed with relief when governments began outlawing these commercial operations (in April 2022 the UK passed the ‘Skills and Post-16 Education Act 2022’, following NZ and Australian examples) and went back to business as usual. For the less enlightened this meant a return to setting generic, decontextualised, knowledge-recitation essay tasks. Some have learnt to at least require a degree of contextualisation in their students’ work, introduced internal self-justification and self-referencing, required ‘both sides’ arguments rather than declared positions, and applied the ‘could this already have been written’ test in advance. Banning essay mills, or ‘contract cheating’, is necessary, but it is not enough to secure the integrity of assessment regimes.

Why students plagiarise is worthy of its own post, but suffice it to say it varies greatly from student to student. A very capable student may simply be terrible at time management and fear running out of time, or feel the assessment is unworthy of them. Another student may be fearful of their ability to express complex arguments and, in pursuit of the best possible grade, plagiarise. Some may simply not have learnt to cite and reference, or to appreciate that rewording someone else’s thoughts without attributing them also constitutes plagiarism. And there is that category of students whose cultural reference point, deference to ‘the words of the master’, makes plagiarism conceptually difficult for them to understand.

I remember receiving my most blatant example of plagiarism and academic malpractice back in 2006. A student submitted a piece of work that included 600 words copied wholesale from Wikipedia, complete with internal bookmarks and hyperlinks. I suspect the majority of students are now sufficiently digitally literate not to make that mistake, but how many are also now in a position to do what the essay mills used to do for them: stitch together, paraphrase and redraft existing material using freely available AI text generation tools?

As we encourage our students to search the web for sources, how easy is it for them to access some of these readily available, and often free, online tools? They include https://app.inferkit.com/demo, which allows you to enter a few sentences and then generate longer texts on the basis of that origin. You can enter merely a title of at least five words, or a series of sentences, into https://smodin.io/writer and have it generate a short essay, free of references. Professional writing tools aimed at marketers, such as https://ai-writer.com, require a subscription to be fully effective but would allow students to generate passable work. This last tool actually tells you the sources from which its abstractions have been drawn, including academic journals.

You might find it enlightening to take something you have published and put it through one of these tools and evaluate the output.

It is insufficient to ask the student to generate their own question, or even to contextualise their own work; some of the emergent AI tools can take account of context. There is a need to move away from the majority of long-form text assessments. With the exception of those disciplines where writing more than a thousand words at once is justifiable (journalism, policy studies and some humanities subjects), assessments should be made as close to real-world experience as possible. They need to be evidently the product of an individual.

Paraphrasing is a skill, and a valuable one, in a world where most professions do not lack pure information. The challenge is to evaluate the quality of that information and then reduce it to a workable volume.

I’ve worked recently with an institution reviewing its postgraduate politics curriculum. I suggested that rather than trying to stop students from ‘cheating’ by paraphrasing learned texts, they should encourage the students to learn what they need to do to enhance the output of these AI tools. Using one of these tools to paraphrase, and essentially re-write, a WHO report for health policy makers made it more readable, but it also left out certain details that would be essential for effective policy responses. Knowing how to read the original, use a paraphrasing tool, and then explore the deficiencies of its output and correct them, was a useful skill for these students.

We cannot stop the encroachment of these kinds of AI text manipulation tools in higher education, but we can make their contemporary use more meaningful to the student.


Image was generated by DALL-E
