AI and Ed: pitfalls but encouraging signs

Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0

In August I became hopeful that the hype around Generative AI was beginning to die down, and that we might get a pause to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."

He summarised five major AI pitfalls for education:

  1. Fictional hype on AI’s potentials in addressing real-world challenges
  2. Machine-centrism prevailing over human-centrism and machine agency undermining human agency
  3. Sidelining AI’s harmful impact on environment and ecosystems
  4. Covering up on the AI-driven wealth concentration and widened social inequality
  5. Downgrading AI competencies to operational skills bound to commercial AI platforms

UNESCO has published five guiding principles in their AI competency framework for students:
2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
2.2 Prioritizing competencies for human-centred interaction with AI
2.3 Steering the design and use of more climate-friendly AI
2.4 Promoting inclusivity in AI competency development
2.5 Facilitating transferable AI foundations for lifelong learning

And the Council of Europe is looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time, the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there will be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.

For the last year I have been saying that the education sector needs itself to be leading developments in AI applications for learning, through a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.

That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in scope and trained on high-quality data for a specific application. Many of these models are being released as Open Source Software, and Open Source datasets are being released to train new language models. And there are some signs that the education community is itself beginning to develop applications.
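As a rough illustration of what this could look like in practice, here is a minimal sketch of fine-tuning a small open-source language model on locally curated teaching materials. It is my own example rather than anything from the projects mentioned here: the model name, the Hugging Face libraries, the file paths and the training settings are all placeholder assumptions that an institution would replace with its own choices.

```python
# A rough sketch (my illustration, not from the post) of fine-tuning a small
# open-source language model on a curated set of teaching materials.
# The model name, file paths and training settings are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "HuggingFaceTB/SmolLM-360M"  # one example of a small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder: high-quality course texts owned and curated by the institution.
dataset = load_dataset("text", data_files={"train": "course_materials/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="edu-slm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is less the code than the shift it represents: a small model plus a carefully curated, domain-specific dataset is within reach of an education consortium in a way that training a frontier model is not.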

AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:

  • Learn anything, anytime, anywhere on mobile devices or computers
  • Do so in almost any language of their choice
  • Engage in dynamic, open-ended conversations through interactive dialogue
  • Check their knowledge and skills on any topic 
  • Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.

And the Department for Education in England has invited tenders to develop an app for assessment, based on data that it will supply.

I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.

AI in Ed: Equity, XAI and learner agency

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The AI in education theme continues to gather momentum, resulting in a non-stop stream of journal articles, reports, newsletters, blogs and videos. However, while not diminishing, there seems to be a subtle change of direction in the messages.

Firstly, despite many schools being wary of Generative AI, there is a growing realisation that students are going to use it anyway, and that the various apps claiming to check student work for AI use simply don't work.

At the same time, there is an increasing focus on AI and pedagogy (perhaps linked to the increasing sophistication of Frontier Models in Gen AI, but also to the realisation that gimmicks like talking to an AI pretending to be someone famous from the past are just lame!). This increased focus on pedagogy is also leading to pressure to involve students in the application of Gen AI for teaching and learning.

And in recent studies two ethical questions have emerged. The first is unequal access to AI applications and tools. Inside Higher Ed reports on recent research from the Public Policy Institute of California on disparate access to digital devices and the internet for K-12 students in the nation’s largest state public-school system. Put simply, they say, students who are already at an educational and digital disadvantage because of family income and first-generation constraints are becoming even more so every day as their peers embrace AI at high rates as a productivity tool, while they do not.

And while some tools will remain free, it appears that the most powerful and modern tools will increasingly come at a cost. Jisc in the UK recently reported that access to a full suite of the most popular generative AI tools and education plug-ins currently available could cost about £1,000 (about $1,275) per year. For many students already accumulating student debt and managing the rising cost of living, paying more than $100 per month for competitive AI tools is simply not viable.

A second issue is motivation and agency for students in using AI tools. It may be that the rush to gamification, inspired by apps like Duolingo, is wearing thin. Perhaps a more subtle and sustained approach is needed to motivate learners. That may increase the focus on learner agency, which in turn is being seen as linked to Explainable AI (XAI for short). Research around Learning Analytics has pointed to the importance of students understanding the use and purpose of the analytics, but also being able to understand why the analytics turn out as they do. And research into Personal Learning Environments has long shown the importance of learner agency in developing meta-cognition in learning. With the development of many applications for personalised learning programmes, it becomes important that learners are able to understand the reasons for their own individual learning pathways and, if necessary, challenge them.
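To give a concrete, if deliberately simplified, sense of what explainability might mean here, the sketch below (my own toy illustration, not something drawn from the research cited) trains a tiny "at risk" classifier on invented learning-analytics features and then breaks down its prediction for one learner feature by feature: the kind of account a student would need in order to understand, and if necessary challenge, the analytics.

```python
# A toy illustration (mine, not from the cited research) of explainable
# learning analytics: a simple, interpretable model whose prediction can be
# broken down feature by feature. All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["logins_per_week", "assignments_submitted", "forum_posts"]
X = np.array([[5, 8, 3], [1, 2, 0], [7, 9, 5],
              [0, 1, 1], [4, 6, 2], [2, 1, 0]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = on track, 0 = at risk (toy labels)

model = LogisticRegression().fit(X, y)

learner = np.array([[2.0, 3.0, 0.0]])
prob = model.predict_proba(learner)[0, 1]
print(f"Predicted probability of being on track: {prob:.2f}")

# Per-feature contribution to the decision score: the sort of explanation a
# learner could be shown and, where it seems wrong, question.
for name, weight, value in zip(features, model.coef_[0], learner[0]):
    print(f"{name}: weight={weight:+.2f}, value={value}, "
          f"contribution={weight * value:+.2f}")
```

Real learning analytics systems are of course far more complex, but the principle is the same: if a system shapes a learner's pathway, the learner should be able to see what drove that decision.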

While earlier debates about AI in Ed ethics largely focused on technologies, the new debates are more focused on practices in teaching and learning using AI.

AI in Education – the question of hype and reality

Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0

I have spent a lot of time over the past two weeks working on the first-year evaluation report for the AI Pioneers project. Evaluation seems to come in and out of fashion on European Commission funded projects, and right now it is on an upswing, partly due to the move to funding projects based on products and results rather than the number of working days claimed.

For the AI Pioneers project, I have adopted a participant-oriented approach to the evaluation. This takes the needs of project participants as its starting point. Participants are not seen simply as the direct target group of the project, but include other stakeholders and potential beneficiaries. Participant-oriented evaluation looks for patterns in the data as the evaluation progresses, and data is gathered in a variety of ways, using a range of techniques and culled from many different sources. Understandings grow from observation and bottom-up investigation rather than rational deductive processes. The evaluator’s key role is to represent multiple realities and values rather than singular perspectives.

Hopefully we will be able to publish the evaluation report early in the new year. But here are a few takeaways, mainly gleaned from the interviews I undertook with each of the project partners.

The partners have a high level of commitment to the project. However, the work they are able to undertake depends to a large extent on their role in their organisations and their organisation's role in the wider area of education. Pretty much all of the project partners, and I certainly concur with this sentiment, feel overwhelmed by the sheer volume of reports and discourse around AI in education, and the speed of development, especially around generative AI, makes it difficult to stay up to date. All the partners are using AI to some extent. Having said that, there is a tendency to think of Generative AI as being AI as a whole, and to forget about the many uses of AI which are not based on Large Language Models.

Despite the hype (as John Naughton pointed out in his Memex 1.1 newsletter this week, AI is at the peak of the Gartner hype cycle; see illustration), finding actual examples of the use of AI in education and training and in Adult Education is not easy.

A survey undertaken by the AI Pioneers project found that few vocational education and training organisations in Europe had a policy on AI. There are considerable differences between sectors - marketing, graphic design, computing, robotics and healthcare appear to be ahead, but in many subjects and many institutions there is little actual movement in incorporating AI, either as a subject or for teaching and learning. And indeed, where initial projects are taking place, this is often driven by enthusiastic individuals, with or without the knowledge of their managers.

This finding chimes with reports from other perspectives. Donald H Taylor and Egle Vinauskaite have produced a report looking at how AI is being used in workplace Learning and Development today, and conclude that it is in its infancy. "Of course, some extraordinary things are being done with AI within L&D," they say. "But our research suggests that where AI is currently being used by L&D, it is largely for the routine tasks of content creation and increased efficiency."

If there is one message L&D practitioners should take away from this report, it is that there is no need to panic – you are not falling far behind your peers, for the simple reason that very few are making major strides with AI. There is every need, however, to act, if only in a small way, to familiarize yourself with what AI has to offer.

Donald H Taylor and Egle Vinauskaite. Focus on AI in L&D, https://donaldhtaylor.co.uk/research_base/focus-on-ai-in-ld/

Indeed, for all the talk of digital transformation in education and training, it may be that education, and certainly higher education, is remarkably resistant to the much-vaunted disruption, and that even though AI will have a major impact, it may come more slowly than predicted.

Generative AI for teaching and learning

I missed this when it was published in April. But this table is from the Quick Start Guide to ChatGPT published by UNESCO, which "provides an overview of how ChatGPT works and explains how it can be used in higher education. The Quick Start Guide raises some of the main challenges and ethical implications of AI in higher education and offers practical steps that higher education institutions can take." The table provides a useful summary of the different pedagogical possibilities of using Generative AI for teaching and learning.