In last week's edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks like writing objectives, selecting instructional strategies, and creating lesson plans.
The question Hardman set out to explore was: “how well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning design?” To find this out she set two research questions: the first to sound out the theoretical knowledge of instructional design held by LLMs, and the second to assess their practical application. She then analysed each model’s responses to assess theoretical accuracy, practical feasibility, and alignment between theory and practice.
In her newsletter Hardman gives a detailed account of the outcomes of testing the different models from each of the three LLM providers, but the headline is that across all generic LLMs, AI is limited in both its theoretical understanding and its practical application of instructional design. The reasons, she says, are that they lack industry-specific knowledge and nuance, they uncritically use outdated concepts, and they display a superficial application of theory.
Hardman concludes that “While general-purpose AI models like Claude, ChatGPT, and Gemini offer a degree of assistance for instructional design, their limitations underscore the risks of relying on generic tools in a specialised field like instructional design.”
She goes on to point out that in industries like coding and medicine, similar risks have led to the emergence of fine-tuned AI copilots, such as Cursor for coders and Hippocratic AI for medics, and sees a need for “similar specialised AI tools tailored to the nuances of instructional design principles, practices and processes.”
Is Artificial Intelligence challenging us to rethink the purpose of Vocational Education and Training? Perhaps that is going too far, but there are signs of questions being asked. For the last twenty-five years or so there has been a tendency in most European countries towards a narrowing of the aims of VET, driven by an agenda of employability. Workers have become responsible for their own employability under the slogan of Lifelong Learning. Learning to learn has become a core skill for students and apprentices, not to broaden their education but rather to prepare them to update their skills and knowledge to safeguard their employability.
It wasn’t always so. The American philosopher, psychologist, and educational reformer John Dewey believed “the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good.” The overriding theme of Dewey's work was his profound belief in democracy, be it in politics, education, or communication and journalism, and he considered participation, not representation, the essence of democracy.
Faced with the challenge of generative AI, not only to the agency and motivation of learners, but to how knowledge is developed and shared within society, there is a growing understanding that a broader approach to curriculum and learning in Vocational Education and Training is necessary. This includes a more advanced definition of digital literacy to develop a critique of the outputs from Large Language Models. AI literacy is defined as the knowledge and skills necessary to understand, critically evaluate, and effectively use AI technologies (Long & Magerko, 2020), including understanding the capabilities and limitations of AI systems, recognising potential biases and ethical implications of AI-generated content, and developing critical thinking skills to evaluate AI-produced information.
UNESCO says its citizenship education, including the competence frameworks for teachers and for students, builds on peace and human rights principles, cultivating essential skills and values for responsible global citizens. It fosters criticality, creativity, and innovation, promoting a shared sense of humanity and commitment to peace, human rights, and sustainable development. Fengchun Miao from UNESCO has said that the AI Competency Framework for Students proposes the term "AI society citizenship" and provides interpretation in multiple sections. Section 1.3 of the Framework, AI Society Citizenship, says:
Students are expected to be able to build critical views on the impact of AI on human societies and expand their human-centred values to promoting the design and use of AI for inclusive and sustainable development. They should be able to solidify their civic values and the sense of social responsibility as a citizen in an AI society. Students are also expected to be able to reinforce their open-minded attitude and lifelong curiosity about learning and using AI to support self-actualisation in the AI era.
The Council of Europe says Vocational Education and Training is an integral part of the entire educational system and shares its broader aim of preparing learners not only for employment, but also for life as active citizens in democratic societies. Social dialogue and corporate social responsibility are seen as tools for democratising AI in work.
Renewing the democratic and civic mission of education underlines the importance of integrating Competences for Democratic Culture (CDC) in VET to promote quality citizenship education. This initiative aims to support VET systems in preparing learners not only for employment but also for active participation as citizens in culturally diverse democratic societies. By embedding CDC in learning processes in VET, the Council of Europe aims to ensure that VET learners acquire the necessary knowledge, skills, values and attitudes to participate fully in democratic life.
The Council of Europe Reference Framework for Democratic Culture and the UNESCO AI Competency Framework can provide a focus for a wider understanding of AI competences in VET and provide a challenge for how they can be implemented in practice.
Such an understanding can shape an educational landscape that leverages AI while safeguarding human agency, motivation, and ethics. As generative AI advances, continuous dialogue and investigation among all educational stakeholders are essential to ensure these technologies enhance learning outcomes and equip students for an AI-driven future.
Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
UNESCO (2024). AI Competency Framework for Students. https://unesdoc.unesco.org/ark:/48223/pf0000391105
Whatever you think about Artificial Intelligence, it cannot be denied that it has generated, and continues to generate, a lot of hype. And in the AI and education field it feels as if the hype is advancing, perhaps because education is seen as a massive market for shiny new (and expensive) toys. So I liked this recent post on LinkedIn by Andreu Belsunces Gonçalves about Critical Hype Studies.
A panel at the EASST/4S conference aimed, he says, “at outlining what the field of critical hype studies could be. Although we're still in the early stages, here are some tentative insights from four days of intense, varied, and lengthy discussions about hype, technology, futures, fiction, narratives, rhetorics, and finances in Amsterdam.”
The Politics of Hype is defined as follows:
a. Hype is an interface between experts and non-experts, leaving non-experts vulnerable to oversimplified, overpromising expert statements.
b. The capacity to produce and disseminate hype is unevenly distributed and generally benefits the already privileged: wealthy, educated, male, Western (i.e., economically powerful countries like the USA or Germany take greater advantage of AI hype).
The post goes on to summarise the conversations and presentations of the two panels reflecting on critical hype studies and hype in the promissory economy.
George Bekiaridis and Graham Attwell made a keynote presentation to the Second Conference on the Reference Framework of Competences for Democratic Culture and Vocational Education and Training, held on 24 and 25 October 2024 at the Council of Europe in Strasbourg, France. The event was dedicated to discussing the chapters of the new publication on the Council of Europe’s Reference Framework of Competences for Democratic Culture (RFCDC) and VET.
In the presentation, Transforming Vocational Training - AI in theory and practice, they introduced ongoing research on using Activity Theory to analyse the impact of AI on learning as a result of tool-mediated interactions, showcasing how conceptual frameworks, technologies, practical actions, individuals, and social institutions mutually shape each other in the learning process. They drew attention to the UNESCO AI Competency Framework for Students, which emphasises the importance of competences for citizenship, similar to the Council of Europe's work on Democratic Culture. You can download a copy of the presentation here. It is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.