Do we need specialised AI tools for education and instructional design?

Photo by Amélie Mourichon on Unsplash

In last week's edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks like writing objectives, selecting instructional strategies and creating lesson plans.

The question Hardman set out to explore was: “how well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning design?” To find this out she set two research questions: the first was to sound out LLMs' theoretical knowledge of instructional design, and the second to assess their practical application of it. She then analysed each model’s responses to assess theoretical accuracy, practical feasibility, and alignment between theory and practice.

In her newsletter Hardman gives a detailed account of the outcomes of testing the different models from each of the three LLM providers, but the headline is that across all generic LLMs, AI is limited in both its theoretical understanding and its practical application of instructional design. The reasons, she says, are that they lack industry-specific knowledge and nuance, they uncritically use outdated concepts, and they display a superficial application of theory.

Hardman concludes that “While general-purpose AI models like Claude, ChatGPT, and Gemini offer a degree of assistance for instructional design, their limitations underscore the risks of relying on generic tools in a specialised field like instructional design.”

She goes on to point out that in industries like coding and medicine, similar risks have led to the emergence of fine-tuned AI copilots, such as Cursor for coders and Hippocratic AI for medics, and sees a need for “similar specialised AI tools tailored to the nuances of instructional design principles, practices and processes.”
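
Hardman's full test protocol isn't reproduced in the summary above, but the shape of such a comparison is straightforward to sketch. The snippet below, a minimal sketch rather than her actual method, sends the same instructional design prompt to each provider's API and collects the responses for human scoring; the model names, the prompt, and the scoring rubric mentioned in the comments are all illustrative assumptions.

```python
# A minimal sketch of putting the same instructional design task to
# several general-purpose LLMs. Model names and the prompt are
# placeholders, not details from Hardman's study.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

PROMPT = (
    "Write three measurable learning objectives for a one-hour "
    "introductory lesson on spreadsheet formulas, and name the "
    "instructional strategy you would use."
)

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_gemini(prompt: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")
    return model.generate_content(prompt).text

# The collected answers would then be scored by hand against a rubric
# covering theoretical accuracy, practical feasibility, and alignment
# between theory and practice, as in Hardman's analysis.
for name, ask in [("ChatGPT", ask_chatgpt), ("Claude", ask_claude),
                  ("Gemini", ask_gemini)]:
    print(f"--- {name} ---\n{ask(PROMPT)}\n")
```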

AI and Ed: pitfalls but encouraging signs

Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0

In August I became hopeful that the hype around Generative AI was beginning to die down. I thought we might now get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."

He summarised five major AI pitfalls for education:

  1. Fictional hype on AI’s potentials in addressing real-world challenges
  2. Machine-centrism prevailing over human-centrism and machine agency undermining human agency
  3. Sidelining AI’s harmful impact on environment and ecosystems
  4. Covering up on the AI-driven wealth concentration and widened social inequality
  5. Downgrading AI competencies to operational skills bound to commercial AI platforms

UNESCO has published five guiding principles in their AI competency framework for students:

  2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
  2.2 Prioritizing competencies for human-centred interaction with AI
  2.3 Steering the design and use of more climate-friendly AI
  2.4 Promoting inclusivity in AI competency development
  2.5 Facilitating transferable AI foundations for lifelong learning

And the Council of Europe are looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.

For the last year, I have been saying that the education sector needs itself to be leading developments in AI applications for learning, in a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.

That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in their training, and instead be trained on high-quality data for a specific application. Many of these models are being released as Open Source Software, and Open Source datasets are being released to train new language models. And there are some signs that the education community is itself beginning to develop applications.
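
To make the technical point concrete, here is a rough sketch of what adapting a small open-source model to high-quality, domain-specific data might look like, assuming the Hugging Face transformers, datasets, and peft libraries; the base model name and the corpus file are placeholder assumptions, not a reference to any particular education project.

```python
# A hedged sketch of adapting a small open-source language model to a
# specific domain (e.g. instructional design materials) with LoRA.
# The model name and dataset path are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "microsoft/phi-2"  # any small open model would do

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains a handful of small adapter matrices instead of all the
# weights, which is what brings this within reach of small teams.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# A plain-text corpus of curated, domain-specific documents.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is the shift in scale: instead of pre-training a model from scratch, a small team can adapt an openly licensed model with a curated corpus on modest hardware.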

AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:

  • Learn anything, anytime, anywhere on mobile devices or computers
  • Do so in almost any language of their choice
  • Engage in dynamic, open-ended conversations through interactive dialogue
  • Check their knowledge and skills on any topic
  • Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.
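
Contact North has not published how AI Tutor Pro is built, but a feature list like the one above can be approximated with a thin wrapper around any chat model. The sketch below is a hypothetical illustration of that pattern, not the app's actual code; the system prompt, the level handling, and the choice of the openai client are all assumptions.

```python
# A hypothetical sketch of a level-aware tutoring loop around a chat
# model. This is NOT AI Tutor Pro's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_tutor(topic: str, level: str = "introductory") -> None:
    # The system prompt carries the tutoring behaviour: open-ended
    # dialogue, knowledge checks, the learner's own language, and a
    # difficulty level the learner selects.
    history = [{
        "role": "system",
        "content": (f"You are a patient tutor. Teach {topic} at an "
                    f"{level} level, ask short questions to check "
                    "understanding, and reply in whatever language "
                    "the learner writes in."),
    }]
    while True:
        user = input("You: ")
        if user.lower() in {"quit", "exit"}:
            return
        history.append({"role": "user", "content": user})
        resp = client.chat.completions.create(model="gpt-4o",
                                              messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"Tutor: {reply}")

run_tutor("spreadsheet formulas", level="introductory")
```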

And the English Department for Education has invited tenders to develop an App for Assessment, based on data that they will supply.

I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.

AI Competency Framework for teachers

At last week's Digital Learning Week 2024, UNESCO formally launched two AI Competency Frameworks, one for teachers and the other for students. These frameworks aim to guide countries in supporting students and teachers to understand the potential as well as the risks of AI, in order to engage with it in a safe, ethical and responsible manner in education and beyond.

Above is a copy of Tim Evans' popular poster summarizing the AI Competency Framework for Teachers. He says "I've taken the extensive, lengthy report and attempted to gather my take on the 10 key points, and areas of focus." Tim has also made a copy of the poster available on Canva.

AI: What do teachers want?

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI / CC-BY 4.0

A quick post in follow-up to my article yesterday on the proposals by the UK Department for Education to commission tech companies to develop an AI app for teachers to save them time. The Algorithm, a newsletter from MIT Technology Review, picked up on this today, saying "this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down the use of it in the classroom, these companies are coaching teachers how to use AI tools to cut down on time they spend on tasks like grading, providing feedback to students, or planning lessons. They’re positioning AI as a teacher’s ultimate time saver."

The article goes on to ask how willing teachers are to turn over some of their responsibilities to an AI model. The answer, they say, really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.

“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”

Then there are a host of not-so-menial tasks that teachers are more skeptical AI can excel at. They often come down to two core teaching responsibilities: lesson planning and grading. A host of companies offer large language models that they say can generate lesson plans that conform to different curriculum standards. Some teachers, including in some California districts, have also used AI models to grade and provide feedback for essays. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability. 

Companies promising time savings for planning and grading “is a huge red flag, because those are core parts of the profession,” he says. “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback for creative skills like writing is controversial too. “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.”