In last week's edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks such as writing objectives, selecting instructional strategies and creating lesson plans.
The question Hardman set out to explore was: “how well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning design?” To find out, she set two research questions: the first to sound out the LLMs' theoretical knowledge of instructional design, and the second to assess their practical application of it. She then analysed each model's responses to assess theoretical accuracy, practical feasibility, and alignment between theory and practice.
In her newsletter Hardman gives a detailed account of the outcomes of testing the different models from each of the three LLM providers, but the headline is that across all generic LLMs, AI is limited in both its theoretical understanding and its practical application of instructional design. The reasons, she says, are that they lack industry-specific knowledge and nuance, they uncritically use outdated concepts, and they display a superficial application of theory.
Hardman concludes that “While general-purpose AI models like Claude, ChatGPT, and Gemini offer a degree of assistance for instructional design, their limitations underscore the risks of relying on generic tools in a specialised field like instructional design.”
She goes on to point out that in industries like coding and medicine, similar risks have led to the emergence of fine-tuned AI copilots, such as Cursor for coders and Hippocratic AI for medics, and sees a need for “similar specialised AI tools tailored to the nuances of instructional design principles, practices and processes.”
Is Artificial Intelligence challenging us to rethink the purpose of Vocational Education and Training? Perhaps that is going too far, but there are signs of questions being asked. For the last twenty-five years or so there has been a tendency in most European countries towards a narrowing of the aims of VET, driven by an agenda of employability. Workers have become responsible for their own employability under the slogan of Lifelong Learning. Learning to learn has become a core skill for students and apprentices, not to broaden their education but rather to prepare them to update their skills and knowledge to safeguard their employability.
It wasn’t always so. The American philosopher, psychologist, and educational reformer John Dewey believed “the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good.” The overriding theme of Dewey's work was his profound belief in democracy, be it in politics, education, or communication and journalism and he considered participation, not representation, the essence of democracy.
Faced with the challenge of generative AI, not only to the agency and motivation of learners but to how knowledge is developed and shared within society, there is a growing understanding that a broader approach to curriculum and learning in Vocational Education and Training is necessary. This includes a more advanced definition of digital literacy to develop a critique of the outputs from Large Language Models. AI literacy is defined as the knowledge and skills necessary to understand, critically evaluate, and effectively use AI technologies (Long & Magerko, 2020), including understanding the capabilities and limitations of AI systems, recognising potential biases and ethical implications of AI-generated content, and developing critical thinking skills to evaluate AI-produced information.
UNESCO says its citizenship education, including the competence frameworks for teachers and for students, builds on peace and human rights principles, cultivating essential skills and values for responsible global citizens. It fosters criticality, creativity, and innovation, promoting a shared sense of humanity and commitment to peace, human rights, and sustainable development. Fengchun Miao of UNESCO has said that the AI competency framework for students proposes the term "AI society citizenship" and provides interpretation in multiple sections. Section 1.3 of the Framework, "AI Society Citizenship", says:
Students are expected to be able to build critical views on the impact of AI on human societies and expand their human centred values to promoting the design and use of AI for inclusive and sustainable development. They should be able to solidify their civic values and the sense of social responsibility as a citizen in an AI society. Students are also expected to be able to reinforce their open minded attitude and lifelong curiosity about learning and using AI to support self actualisation in the AI era.
The Council of Europe says Vocational Education and Training is an integral part of the entire educational system and shares its broader aim of preparing learners not only for employment, but also for life as active citizens in democratic societies. Social dialogue and corporate social responsibility are seen as tools for democratising AI in work.
Renewing the democratic and civic mission of education underlines the importance of integrating Competences for Democratic Culture (CDC) in VET to promote quality citizenship education. This initiative aims to support VET systems in preparing learners not only for employment but also for active participation as citizens in culturally diverse democratic societies. By embedding CDC in learning processes in VET, the Council of Europe aims to ensure that VET learners acquire the necessary knowledge, skills, values and attitudes to participate fully in democratic life.
The Council of Europe Reference Framework for Democratic Culture and the UNESCO AI Competency Framework can provide a focus for a wider understanding of AI competences in VET, and provide a challenge as to how they can be implemented in practice.
Such an understanding can shape an educational landscape that leverages AI while safeguarding human agency, motivation, and ethics. As generative AI advances, continuous dialogue and investigation among all educational stakeholders are essential to ensure these technologies enhance learning outcomes and equip students for an AI-driven future.
Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
UNESCO (2024). AI Competency Framework for Students. https://unesdoc.unesco.org/ark:/48223/pf0000391105
Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY 4.0
There's an interesting post from Philippa Hardman in her newsletter today. Entitled "Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education?", it asks how much, and how well, popular AI tools really support human learning and, in the process, disrupt education. She created a simple evaluation rubric to explore five key research questions:
1. Inclusion of Information
2. Exclusion of Information
3. [De]Emphasis of Information
4. Structure & Flow
5. Tone & Style
Philippa Hardman used her own research articles as the input material, feeding them into what she says are considered to be the three big AI tools for learning.
She prompted each tool in turn to read the article carefully and summarise it, ensuring that the summary covered all key concepts and ideas and gave a thorough understanding of the article and the research.
She provides a detailed table of the results for each of the three applications, and additionally for the NotebookLM podcast application, assessing the strengths and weaknesses of each. She says that "while generative AI tools undoubtedly enhance access to information, they also actively “intervene” in the information-sharing process, actively shaping the type and depth of information that we receive, as well as (thanks to changes in format and tone) its meaning."
She goes on to say:
While popular AI tools are helpful for summarising and simplifying information, when we start to dig into the detail of AI’s outputs we’re reminded that these tools are not objective; they actively “intervene” and shape the information that we consume in ways which could be argued to have a problematic impact on “learning”.
Another thing is also clear: tools like ChatGPT4o, Claude & Notebook are not yet comprehensive “learning tools” or “education apps”. To truly support human learning and deliver effective education, AI tools need to do more than provide access to information—they need to support learners intentionally through carefully selected and sequenced pedagogical stages.
Her closing thoughts are about redefining the "learning" process. She says:
It’s clear that AI tools like ChatGPT, Claude, and NotebookLM are incredibly valuable for making complex ideas more accessible; they excel in summarisation and simplification, which opens up access to knowledge and helps learners take the first step in their learning journey. However, these tools are not learning tools in the full sense of the term—at least not yet.
By labelling tools like ChatGPT 4o, Claude 3.5 & NotebookLM as “learning tools” we perpetuate the common misconception that “learning” is a process of disseminating and absorbing information. In reality, the process of learning is a deeply complex cognitive, social, emotional and psychological one, which exists over time and space and which must be designed and delivered with intention.
Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0
In August I became hopeful that the hype around Generative AI was beginning to die down, and I thought we might get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."
He summarised five major AI pitfalls for education:
Fictional hype on AI’s potentials in addressing real-world challenges
Machine-centrism prevailing over human-centrism and machine agency undermining human agency
Sidelining AI’s harmful impact on environment and ecosystems
Covering up on the AI-driven wealth concentration and widened social inequality
Downgrading AI competencies to operational skills bound to commercial AI platforms
UNESCO has published five guiding principles in their AI competency framework for students:
2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
2.2 Prioritizing competencies for human-centred interaction with AI
2.3 Steering the design and use of more climate-friendly AI
2.4 Promoting inclusivity in AI competency development
2.5 Facilitating transferable AI foundations for lifelong learning
And the Council of Europe is looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time, the discussion on AI literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.
For the last year, I have been saying that the education sector needs itself to be leading developments in AI applications for learning, in a multi-disciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.
That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in their training, using high-quality data for a specific application. Many of these models are being released as Open Source software, and Open Source datasets are being released to train new language models. And there are some signs that the education community is itself beginning to develop applications.
AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:
Learn anything, anytime, anywhere on mobile devices or computers
Do so in almost any language of their choice
Engage in dynamic, open-ended conversations through interactive dialogue
Check their knowledge and skills on any topic
Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic
And the Department for Education in England has invited tenders to develop an app for assessment, based on data that it will supply.
I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know: just use the contact form on this website.
Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0
The AI in education theme continues to gather momentum, resulting in a non-stop stream of journal articles, reports, newsletters, blogs and videos. However, while the volume is not diminishing, there seem to be some subtle changes of direction in the messages.
Firstly, despite many schools being wary of Generative AI, there is a growing realisation that students are going to use it anyway, and that the various apps claiming to check student work for AI simply don't work.
At the same time, there is an increasing focus on AI and pedagogy (perhaps linked to the increasing sophistication of frontier Gen AI models, but also to the realisation that gimmicks like talking to an AI pretending to be someone famous from the past are just lame!). This increased focus on pedagogy is also leading to pressure to involve students in the application of Gen AI for teaching and learning. And in recent discussions two ethical questions have emerged. The first is unequal access to AI applications and tools. Inside Higher Ed reports on recent research from the Public Policy Institute of California on disparate access to digital devices and the internet for K-12 students in the nation's largest state public-school system. Put simply, they say, students who are already at an educational and digital disadvantage because of family income and first-generation constraints are becoming even more so every day as their peers embrace AI at high rates as a productivity tool, and they do not.
And while some tools will remain free, it appears that the most powerful and modern tools will increasingly come at a cost. The UK's Jisc recently reported that access to a full suite of the most popular generative AI tools and education plug-ins currently available could cost about £1,000 (about $1,275) per year. For many students already accumulating student debt and managing the rising cost of living, paying more than $100 per month for competitive AI tools is simply not viable.
A second issue is motivation and agency for students in using AI tools. It may be that the rush to gamification, inspired by apps like Duolingo, is wearing thin; perhaps a more subtle and sustained approach is needed to motivate learners. That may increase the focus on learner agency, which in turn is being linked to Explainable AI (XAI for short). Research around Learning Analytics has pointed to the importance of students understanding the use and purpose of LA, but also being able to understand why the analytics turn out as they do. And research into Personal Learning Environments has long shown the importance of learner agency in developing meta-cognition in learning. With the development of many applications for personalised learning programmes, it becomes important that learners are able to understand the reasons for their own individual learning pathways and, if necessary, challenge them.
While earlier debates about the ethics of AI in education largely focused on technologies, the new debates are more focused on practices in teaching and learning using AI.