Do we need specialised AI tools for education and instructional design?

Photo by Amélie Mourichon on Unsplash

In last week's edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks like writing objectives, selecting instructional strategies and creating lesson plans.

The question Hardman set out to explore was: “how well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning design?” To find this out she set two research questions: the first was to sound out the LLMs' theoretical knowledge of instructional design, and the second to assess their practical application. She then analysed each model’s responses to assess theoretical accuracy, practical feasibility, and alignment between theory and practice.

In her newsletter Hardman gives a detailed account of the outcomes of testing the different models from each of the three LLM providers, but the headline is that across all generic LLMs, AI is limited in both its theoretical understanding and its practical application of instructional design. The reasons, she says, are that they lack industry-specific knowledge and nuance, they uncritically use outdated concepts, and they display a superficial application of theory.

Hardman concludes that “While general-purpose AI models like Claude, ChatGPT, and Gemini offer a degree of assistance for instructional design, their limitations underscore the risks of relying on generic tools in a specialised field like instructional design.”

She goes on to point out that in industries like coding and medicine, similar risks have led to the emergence of fine-tuned AI copilots, such as Cursor for coders and Hippocratic AI for medics, and she sees a need for “similar specialised AI tools tailored to the nuances of instructional design principles, practices and processes.”

AI and Ed: pitfalls but encouraging signs

Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0

In August I became hopeful that the hype around Generative AI was beginning to die down. I thought we might get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."

He summarised five major AI pitfalls for education:

  1. Fictional hype on AI’s potentials in addressing real-world challenges
  2. Machine-centrism prevailing over human-centrism and machine agency undermining human agency
  3. Sidelining AI’s harmful impact on environment and ecosystems
  4. Covering up on the AI-driven wealth concentration and widened social inequality
  5. Downgrading AI competencies to operational skills bound to commercial AI platforms

UNESCO has published five guiding principles in their AI competency framework for students:

  2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
  2.2 Prioritizing competencies for human-centred interaction with AI
  2.3 Steering the design and use of more climate-friendly AI
  2.4 Promoting inclusivity in AI competency development
  2.5 Facilitating transferable AI foundations for lifelong learning

And the Council of Europe is looking at how vocational education and training can promote democracy (more on this to come later). At the same time the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.

For the last year, I have been saying that the education sector itself needs to lead developments in AI applications for learning, in a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.

That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in scope and trained on high-quality data for a specific application. Many of these models are being released as open source software, and open source datasets are being released to train new language models. And there are some signs that the education community is itself beginning to develop applications.
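
As a rough illustration of the kind of workflow this opens up, here is a minimal sketch of fine-tuning a small open-source language model on a domain-specific corpus. It assumes the Hugging Face transformers and datasets libraries; the model name and corpus file are placeholders for illustration, not recommendations.

    # Sketch: fine-tune a small open-source causal language model on a
    # domain-specific text corpus (e.g. curated instructional design material).
    # The model name and data file are illustrative placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    MODEL_NAME = "HuggingFaceTB/SmolLM2-360M"  # any small open model would do

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # some causal LMs ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # One training example per line in a plain-text file.
    corpus = load_dataset("text", data_files={"train": "instructional_design_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="small-instructional-design-model",
                               per_device_train_batch_size=4,
                               num_train_epochs=3),
        train_dataset=train_set,
        # Causal language modelling, so no masked-LM objective.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

The point is less the particular libraries than the scale: a run like this can fit on a single modest GPU, which is what starts to put experimentation within reach of the education community rather than only the big model providers.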

AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:

  • Learn anything, anytime, anywhere on mobile devices or computers
  • Do so in almost any language of their choice
  • Engage in dynamic, open-ended conversations through interactive dialogue
  • Check their knowledge and skills on any topic
  • Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.

And England's Department for Education has invited tenders to develop an App for Assessment, based on data that it will supply.

I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.

AI in Education – the question of hype and reality

Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0

I have spent a lot of time over the past two weeks working on the first-year evaluation report for the AI Pioneers project. Evaluation seems to come in and out of fashion on European Commission funded projects. And now it's in an upswing, partly due to the move to funding projects based on products and results rather than the number of working days claimed.

For the AI Pioneers project, I have adopted a participant-oriented approach to the evaluation. This puts the needs of project participants as its starting point. Participants are not seen simply as the direct target group of the project but include other stakeholders and potential beneficiaries. Participant-oriented evaluation looks for patterns in the data as the evaluation progresses, and data is gathered in a variety of ways, using a range of techniques and culled from many different sources. Understandings grow from observation and bottom-up investigation rather than rational deductive processes. The evaluator’s key role is to represent multiple realities and values rather than singular perspectives.

Hopefully we will be able to publish the evaluation report early in the new year. But here are a few takeaways, mainly gleaned from interviews I undertook with each of the project partners.

The partners have a high level of commitment to the project. However, the work they are able to undertake depends to a large extent on their role in their organisations and the organisation's role in the wider field of education. Pretty much all of the project partners (and I certainly concur with this sentiment) feel overwhelmed by the sheer volume of reports and discourse around AI in education, and the speed of development, especially around generative AI, makes it difficult to stay up to date. All the partners are using AI to some extent. Having said that, there is a tendency to think of generative AI as being AI as a whole, and to forget about the many uses of AI which are not based on Large Language Models.

Despite the hype (as John Naughton pointed out in his Memex 1.1 newsletter this week, AI is at the peak of the Gartner hype cycle, see illustration), finding actual examples of the use of AI in education and training and adult education is not easy.

A survey undertaken by the AI Pioneers project found that few vocational education and training organisations in Europe had a policy on AI. There are considerable differences between sectors: marketing, graphic design, computing, robotics and healthcare appear to be ahead, but in many subjects and many institutions there is little actual movement in incorporating AI, either as a subject or for teaching and learning. And indeed, where initial projects are taking place, they are often driven by enthusiastic individuals, with or without the knowledge of their managers.

This finding chimes with reports from other perspectives. Donald H Taylor and Egle Vinauskaite have produced a report looking at how AI is being used in workplace learning and development today, and conclude that it is in its infancy. "Of course, some extraordinary things are being done with AI within L&D," they say. "But our research suggests that where AI is currently being used by L&D, it is largely for the routine tasks of content creation and increased efficiency."

If there is one message L&D practitioners should take away from this report, it is that there is no need to panic – you are not falling far behind your peers, for the simple reason that very few are making major strides with AI. There is every need, however, to act, if only in a small way, to familiarize yourself with what AI has to offer.

Donald H Taylor and Egle Vinauskaite. Focus on AI in L&D, https://donaldhtaylor.co.uk/research_base/focus-on-ai-in-ld/

Indeed, for all the talk of digital transformation in education and training, it may be that education, and certainly higher education, is remarkably resistant to the much-vaunted disruption, and that even though AI will have a major impact, it may come more slowly than predicted.