A Game Changer for Education?

Amritha R Warrier & AI4Media / Better Images of AI / error cannot generate / CC-BY 4.0

OpenAI launched its latest product – GPT-4o – yesterday. It's difficult to tell from a demo, but it seems to be a faster model than GPT-4, with new audio capability, improved quality and speed in ChatGPT's international language capabilities, and the ability to upload images, audio and text documents for the model to analyze.

It may have much more capability as a tutor – or, more likely, as a personal research assistant. As MIT Technology Review says, the big picture, the company's demonstration suggests, is "a conversational assistant much in the vein of Siri or Alexa—but capable of fielding much more complex prompts." But none of this is game changing. What is new is the business model. Although the increasingly outdated ChatGPT, based on GPT-3.5, is free to users, GPT-4, which is the basis for the new model, costs 20 Euro a month. Now this is being provided for free. And for education, which is concerned with access and equity and with allowing all to participate, free use is a game changer.

Of course we have to wait to try it out. And there are still issues about the accuracy of what it returns. I enjoyed this "hallucination" from Benjamin Riley, quoted by Gary Marcus in his newsletter, Marcus on AI, this morning.

Generative AI, Assessment and the Future of Jobs and Careers

Ten days ago, I was invited to make an online presentation as part of a series on AI for teachers and researchers in Kazakhstan. I talked with the organisers and they asked me if I could speak about AI and Assessment and about AI and Careers. Linking the two subjects seemed hard to me, but I prepared a presentation bringing them together and somehow it made sense. The presentation used a version of Zoom I had not seen before, to enable interpretation. My slides were translated into Russian. This was a little stressful, as I was changing the slides in Russian online and in English on a laptop at the same time. It was even more stressful when my TP-Link connection to the internet went down after two minutes and I had to change rooms to get better connectivity!

Anyway, it seemed to go well and there were good questions from the audience of about 150. Given that the recording was in Russian, I made a new English version. We are still experimenting with the best way to record an audio track over slide decks and provide a Spanish translation, so sorry that some of these slides are not perfect. But I hope you get the message.

A Compassionate Approach to AI in Education

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

I very much like this blog post, A Compassionate Approach to AI in Education, by Maha Bali from the American University in Cairo. Maha explains where she is coming from. And she addresses ethics not from the standpoint of an abstract ethical framework, of which we have many at the moment, but from the standpoint of ethical practice. What follows is a summary, but please read the whole blog.

The article discusses the challenges and opportunities that generative artificial intelligence (AI) presents in education, from the viewpoint of a teacher and researcher who has worked closely with educators worldwide through these changes. She emphasises a feminist approach to education, centered on socially just care and compassionate learning design, which critically examines the inequalities and biases exacerbated by AI technologies. The article is structured around four key strategies for educators and learners to adapt and respond to AI's impact:

  1. Critical AI Literacy: Developing an understanding of how AI operates, especially machine learning, is fundamental. Educators and students must grasp how AI outputs are generated, how to judge their quality, and where biases might be embedded. Training data for AI, often dominated by Western, white, and male perspectives, can reinforce existing inequalities, particularly affecting underrepresented groups. The author provides an example where an AI tool incorrectly associated an Egyptian leader with an unrelated American figure, highlighting the importance of recognising biases and inaccuracies. The global South is often underrepresented in training data, and the AI workforce is predominantly male, which can discourage women from pursuing technical skills.
  2. Appropriate AI Usage: While some AI uses have proven beneficial, such as medical diagnostics and accessibility features for visually impaired people, educators must distinguish when its application could be harmful or unethical. AI's biases and limitations mean it should not be relied upon for personalised learning or critical assessments. The EU has identified high-risk AI applications that require careful regulation, including facial recognition and recruitment systems. In educational settings, AI should not replace human judgment in crucial evaluations, and the emotional aspects of learning should not be overlooked.
  3. Inclusive Policy Development: Students should be actively involved in shaping AI policies and guidelines within classrooms and institutions. The author suggests using metaphors to help learners understand when AI is appropriate, comparing it to baking a cake. For instance, sometimes students need to bake a cake from scratch (doing all work without AI), while other times, they can use pre-made mixes (using AI as a starting point) or purchase a cake (fully using AI). By having these discussions, students understand the purpose of assignments and when AI can enhance or detract from learning outcomes.
  4. Preventing Unauthorized AI Use: Understanding why students might be tempted to use AI unethically is critical. Students often misuse AI due to tight deadlines, lack of interest or understanding in assignments, lack of confidence in their abilities, and competitive educational environments. The author advocates for empathetic listening, flexible deadlines, and creative assignments that encourage genuine engagement. Moreover, fostering a supportive classroom community can reduce competitiveness and emphasise collaborative learning over competition.

The article encourages a compassionate, critical approach to AI in education. By understanding the biases embedded in AI, developing critical AI literacy, and involving students in policy-making, educators can ensure that students ethically and effectively use AI tools. This approach aims to empower learners to shape future AI platforms and educational systems that are socially just and inclusive.

Delving into a chat

Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Humans Do The Heavy Data Lifting / CC-BY 4.0

GPT-4 is quite useful for some things. I have been developing four Open Educational Resources (OERs) around Labour Market Information, designed for careers professionals in different European countries. I was asked to include Issues for Reflection and a short multiple choice quiz in each of the OERs. I fed GPT-4 the content of each OER and asked for six issues for reflection and six quiz questions. Fast as a flash they were done, and they are (in my view) very good. If I had had to do it without the AI, it would have taken me at least half a day.

For other things, GPT-4 is less useful. And I have to say that its English, although grammatically good, is both stilted and plain. It also has a tendency to use somewhat odd English words, which I had always ascribed to it writing American English. But it seems not. In a Guardian newspaper newsletter, Alex Hern reports on work by AI influencer Jeremy Nguyen, at the Swinburne University of Technology in Melbourne, who has highlighted ChatGPT's tendency to use the word "delve" in responses.

I have to say that I don't think I have ever used "delve" in anything I have written. And talking to my Spanish English-speaking friends, none of them even knew what the word means. Anyway, Jeremy Nguyen says no individual use of the word can be definitive proof of AI involvement, but at scale it's a different story. When half a percent of all articles on the research site PubMed contain the word "delve" – 10 to 100 times more than did a few years ago – it's hard to conclude anything other than that an awful lot of medical researchers are using the technology to, at best, augment their writing.

And according to a dataset of 50,000 ChatGPT responses, it's not the only one. It seems the ten most overused words are: Explore, Captivate, Tapestry, Leverage, Embrace, Resonate, Dynamic, Testament, Delve, and Elevate.
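The "at scale" argument is essentially a word-frequency count over a corpus. As a rough illustration (the word list is the ten above; the two-text sample and the function name are my own invention, not from Nguyen's dataset), a few lines of Python can measure what share of a set of texts contains at least one of the overused words:

```python
import re

# The ten words reported as overused in the 50,000-response dataset cited above.
OVERUSED = {"explore", "captivate", "tapestry", "leverage", "embrace",
            "resonate", "dynamic", "testament", "delve", "elevate"}

def overused_share(texts):
    """Return the fraction of texts containing at least one overused word."""
    hits = 0
    for text in texts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & OVERUSED:
            hits += 1
    return hits / len(texts)

# A made-up two-text corpus, purely for illustration.
sample = [
    "We delve into the rich tapestry of recent findings.",
    "The results were checked against the baseline.",
]
print(overused_share(sample))  # 0.5
```

Run over PubMed abstracts year by year, a count like this is what turns an anecdote about one odd word into evidence of widespread AI-assisted writing.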

Now back to my hypothesis that it's the fault of our American cousins. According to Alex Hern, an army of human testers are given access to the raw outputs from Large Language Models like ChatGPT and instructed to try them out: asking questions, giving instructions and providing feedback. This feedback may be just approving or disapproving of the outputs, but can be "more advanced, even amounting to writing a model response for the next step of training to learn from." And here is the rub: "large AI companies outsource the work to parts of the global south, where anglophonic knowledge workers are cheap to hire."

Now back to the word "Delve."

There’s one part of the internet where “delve” is a much more common word: the African web. In Nigeria, “delve” is much more frequently used in business English than it is in England or the US. So the workers training these systems provided examples of input and output that used the same language, eventually ending up with an AI system that writes slightly like an African.

Exploring the future of learning and the relationship between human intelligence and AI

Jazmin Morris & AI4Media / Better Images of AI / Braided Networks 1 / CC-BY 4.0

I don't normally post links to long videos on this site. But this interview by Mike Paul with Rose Luckin, entitled 'Exploring the future of learning and the relationship between human intelligence and AI – An interview with Professor Rose Luckin', is well worth taking some time to watch in full. Professor Rose Luckin is a pioneer in integrating artificial intelligence (AI) with education, and she shares insights on the ethical dimensions of AI deployment in education, emphasizing the importance of ethical AI and its potential to support learner-centered methodologies. She discusses the challenges and opportunities generative AI presents in assessment, learning, and teaching, highlighting the need for robust partnerships between educators and technology developers.

She explores what human intelligence is and what machine intelligence is.

She looks at three contexts for AI in education.

The first is AI as a tool in teaching and learning, which at the moment mainly focuses on adaptability but which is rapidly extending.

The second is the need to educate people about AI. People, she says, need to understand AI and how to live and work alongside it, how it can help them, and how to keep safe.

The third is the implications for education systems: how is AI changing the skills people need in different occupations, and are education systems optimal for the world we are living in with AI?

She points to the importance of metacognition and asks whether we do enough to prepare people for changes in the workplace. She suggests we need to adapt education and training systems for a life where AI is all around us.