Good critical and sceptical work on AI in education

I've commented before on the depth of division in commentary and research on the use of AI in education since the release of ChatGPT and the subsequent applications based on Large Language Models. As the MIT Technology Review has reported, "Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network" and "by January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia." But the article then continued: "many teachers now believe, ChatGPT could actually help make education better. Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more."

But rather than take sides in a polarised debate, Ben Williamson, who researches and writes about education, digital technology, data and policy at the University of Edinburgh, believes we need to develop "good critical and sceptical work on AI in education". In a series of toots (Mastodon's equivalent of tweets) on the Mastodon social network, he put forward the following ideas for research into AI in education.

  1. Is AI in education really doing what it claims? Do LLM-enabled chatbots improve learning? Do personalized learning algorithms actually personalize, or just cluster by historical patterns? Is it even "AI" or just some shitty stats?
  2. What's the political economy of AI in education? Even if LLM chatbots in EdTech are great, how does that link with wider digital economy developments? What policy enablers are in place to facilitate AI in education? What policy-influencing networks are forming around AIED? Why does it get so much funding, in which geographical regions, and from which sources?
  3. What's the science behind AI in education? AI and education have a 60-year history, taking in cybernetics, cognitivism and computing, then learning science, learning analytics, and education data science, with doses of behaviourism and nudge theory along the way, and now machine learning and neural networks - this is a hefty accumulation demanding much better understanding.
  4. What kind of infrastructuring of education does AI in education require? If you put LLMs into EdTech via APIs, then you are building on an infrastructure stack to run your platform (a minimal sketch of what this looks like follows below). That puts schools on the stack too. What are the implications, long-term, of these Big Tech lock-ins? Will schools be governed not just by EdTech but by Big Tech AI vendors and their APIs?
  5. What are the rights, justice, ethics and regulatory implications of AI in education? Can EdTech be designed for justice? Could algorithms be repurposed for reparative projects rather than discriminatory outcomes? Have AIED ethics frameworks been compromised? Is there scope for more democratic participation in building AI for education products? Can we be hopeful of better things from this technically remarkable but socially troubling tech?

"Just some thoughts to work on…", he concluded. These seem a pretty good starting point, not just for Higher Education, but for those of working on AI and Vocational Education and Training and in Adult Education, as we are doing in the European AI PIoneers Project.

AI and Human Roles: codified and tacit knowledge

This is an interesting diagram from a publication, Artificial intelligence and knowledge management: A partnership between human and AI, by Mohammad Hossein Jarrahi, David Askay, Ali Eshraghi and Preston Smith. I picked it up from Juan Domingo Farnos's Facebook account.

What I like is that, although it is talking about AI, it builds on older debates around knowledge development, particularly on 'know why' and on tacit knowledge transferred through social interaction. In the past, tacit knowledge was seen as important for sharing and developing new knowledge, both within and between organisations. And while AI is great at codifying knowledge, it seems unlikely that it will develop tacit knowledge any time soon. So in terms of human roles and AI roles in knowledge work, the question becomes how tacit knowledge will be codified through working with AI.

Hmm - need to think a bit more about this.

More on Generative AI and education

It is hard to keep up with the avalanche of talks, posts, reports and so on about AI and education, sparked by OpenAI's release of ChatGPT and the many tools which have followed. Talking with teachers in different countries in Europe, I am impressed by how many seem to have just quietly got on with it, accepting that AI is here and that it is important their students know how to use it properly and sensibly. Having said that, in Italy ChatGPT remains banned, as the Italian data protection authority views it as being in conflict with the General Data Protection Regulation (GDPR).

The big problem area for institutions is assessment. This was the subject of Joe Wilson's opening speech at the City of Glasgow College's teaching and learning conference yesterday. Joe Wilson is Head of Digital Skills and his presentation was entitled 'The March of Artificial Intelligence from Tinder to Training in 30 minutes.' The key takeaways from his presentation were to:

1. Make you aware of the rise of artificial intelligence and its implications for education and assessment.

2. Make you aware of a range of tools you can use in your own practice

3. Consider how you should introduce AI to your learners to allow them to use it ethically 

4. Reflect on what it means for policy makers.

Talking about assessment (which he approached as part of professional practice), he said:

1. Ideally, make assessment a demonstration of competence.

2. Require personal reflection and insights. 

3. Require that notes and drafts are submitted with the final work - know your learner’s writing style.

He suggested promoting portfolios and blogs, eliciting reports on specific activities ('How I did/achieved this'), creating video or oral assessments, and setting tasks that require the analysis of charts, images, or videos.

All of which would seem a good idea to me, regardless of Generative AI.

You can see the full presentation on Google Docs.


AI Energy: a vocational school project in Germany

In the work we have been doing over the past three years around the use of Artificial Intelligence in Vocational Education and Training, one of the most frequent requests from teachers and trainers has been for examples of how people are actually doing this. We are picking up on this under the new Erasmus+ large-scale project, AI Pioneers. My colleague Ludger Deitmer, from the ITB at the University of Bremen, is doing a great job finding examples of teaching with and about AI in German vocational schools and seeking videos and other materials about what schools are doing. By no means every vocational school is using AI, but there seems to be a growing number developing projects and experiments, especially reflecting on how AI is going to change the nature of work in different occupations.

The video above (in English) is from the Berufsbildende Schulen 2 in Wolfsburg, who have developed a project on AI and energy. They say:

Energy saving and green energy is the most important topic for all of us to survive on our wonderful planet earth.

We see many opportunities to reach this goal all together.

We would like to show that energy saving does not cost money – it will pay back after a few years. We want to combine artificial with human intelligence to solve this challenge.