Generative AI, Assessment and the Future of Jobs and Careers

Ten days ago, I was invited to make an online presentation as part of a series on AI for teachers and researchers in Kazakhstan. I talked with the organisers and they asked me if I could speak about AI and Assessment, and AI and Careers. Linking the two subjects seemed hard to me, but I prepared a presentation connecting them and somehow it made sense. The presentation used a version of Zoom I had not seen before, to enable interpretation. My slides were translated into Russian. This was a little stressful, as I was changing the slides in Russian online and in English on a laptop at the same time. It was even more stressful that my TP-Link connection to the internet went down after two minutes and I had to change room to get better connectivity!

Anyway, it seemed to go well and there were good questions from the audience of about 150. Given that the recording was in Russian, I made a new English version. We are still experimenting with the best way to do an audio track over slide decks and provide a Spanish translation, so sorry that some of these slides are not perfect. But I hope you get the message.

A Compassionate Approach to AI in Education

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

I very much like this blog post, A Compassionate Approach to AI in Education, by Maha Bali from the American University in Cairo. Maha explains where she is coming from. And she addresses ethics, not from the standpoint of an abstract ethical framework, of which we have many at the moment, but from the standpoint of ethical practice. What follows is a summary, but please read the whole blog.

The article discusses the challenges and opportunities that generative artificial intelligence (AI) presents in education, from the viewpoint of a teacher and researcher deeply involved with educators worldwide through these changes. She emphasises a feminist approach to education, centered on socially just care and compassionate learning design, which critically examines the inequalities and biases exacerbated by AI technologies. The article is structured around four key strategies for educators and learners to adapt and respond to AI's impact:

  1. Critical AI Literacy: Developing an understanding of how AI operates, especially machine learning, is fundamental. Educators and students must grasp how AI outputs are generated, how to judge their quality, and where biases might be embedded. Training data for AI, often dominated by Western, white, and male perspectives, can reinforce existing inequalities, particularly affecting underrepresented groups. The author provides an example where an AI tool incorrectly associated an Egyptian leader with an unrelated American figure, highlighting the importance of recognising biases and inaccuracies. The global South is often underrepresented in training data, and the AI workforce is predominantly male, which can discourage women from pursuing technical skills.
  2. Appropriate AI Usage: While some AI uses have proven beneficial, such as medical diagnostics and accessibility features for visually impaired people, educators must distinguish when its application could be harmful or unethical. AI's biases and limitations mean it should not be relied upon for personalised learning or critical assessments. The EU has identified high-risk AI applications that require careful regulation, including facial recognition and recruitment systems. In educational settings, AI should not replace human judgment in crucial evaluations, and the emotional aspects of learning should not be overlooked.
  3. Inclusive Policy Development: Students should be actively involved in shaping AI policies and guidelines within classrooms and institutions. The author suggests using metaphors to help learners understand when AI is appropriate, comparing it to baking a cake. For instance, sometimes students need to bake a cake from scratch (doing all work without AI), while other times, they can use pre-made mixes (using AI as a starting point) or purchase a cake (fully using AI). By having these discussions, students understand the purpose of assignments and when AI can enhance or detract from learning outcomes.
  4. Preventing Unauthorized AI Use: Understanding why students might be tempted to use AI unethically is critical. Students often misuse AI due to tight deadlines, lack of interest in or understanding of assignments, lack of confidence in their abilities, and competitive educational environments. The author advocates for empathetic listening, flexible deadlines, and creative assignments that encourage genuine engagement. Moreover, fostering a supportive classroom community can reduce competitiveness and emphasise collaborative learning over competition.

The article encourages a compassionate, critical approach to AI in education. By understanding the biases embedded in AI, developing critical AI literacy, and involving students in policy-making, educators can ensure that students ethically and effectively use AI tools. This approach aims to empower learners to shape future AI platforms and educational systems that are socially just and inclusive.

Delving into a chat

Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Humans Do The Heavy Data Lifting / CC-BY 4.0

GPT-4 is quite useful for some things. I have been developing four Open Educational Resources around Labour Market Information, designed for careers professionals in different European countries. I was asked to include Issues for Reflection and a short multiple choice quiz on each of the OERs. I fed GPT-4 the content of each OER and asked for 6 issues for reflection and 6 quiz questions. Fast as a flash they were done, and they are (in my view) very good. If I had had to do it without the AI, it would have taken me at least half a day.

For other things GPT-4 is less useful. And I have to say that its English, although grammatically good, is both stilted and plain. It also has the tendency to use somewhat odd English words, which I had always ascribed to it writing American English. But it seems not. In a Guardian newspaper newsletter, Alex Hern reports on work by AI influencer Jeremy Nguyen, at the Swinburne University of Technology in Melbourne, who has highlighted ChatGPT’s tendency to use the word “delve” in responses.

I have to say that I don't think I have ever used "delve" in anything I have written. And talking to my Spanish English-speaking friends, none of them even knew what the word means. Anyway, Jeremy Nguyen says no individual use of the word can be definitive proof of AI involvement, but at scale it’s a different story. When half a per cent of all articles on the research site PubMed contain the word “delve” – 10 to 100 times more than did a few years ago – it’s hard to conclude anything other than that an awful lot of medical researchers are using the technology to, at best, augment their writing.

And according to a dataset of 50,000 ChatGPT responses, it's not the only one. It seems the ten most overused words are: Explore, Captivate, Tapestry, Leverage, Embrace, Resonate, Dynamic, Testament, Delve, and Elevate.
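The idea behind such lists is simple to sketch: compare how often each word appears in AI-generated text against a baseline of human-written text, and flag words whose relative frequency is dramatically higher. Here is a minimal illustration in Python; the toy corpora and the 10× threshold are invented for the example and are not Nguyen's actual data or method.

```python
from collections import Counter
import re

def word_freqs(text):
    """Relative frequency of each lowercased word in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w: c / len(words) for w, c in Counter(words).items()}

def overused(ai_text, human_text, min_ratio=10.0):
    """Words appearing at least min_ratio times more often
    (relatively) in ai_text than in human_text."""
    ai, human = word_freqs(ai_text), word_freqs(human_text)
    floor = 1e-6  # avoid division by zero for words absent from the baseline
    return sorted(
        (w for w, f in ai.items() if f / max(human.get(w, 0), floor) >= min_ratio),
        key=lambda w: -ai[w],  # most frequent AI words first
    )

# Toy corpora, purely illustrative:
ai_sample = "Let us delve into the tapestry of ideas and delve deeper."
human_sample = "We looked at the ideas and discussed them in detail."
print(overused(ai_sample, human_sample))
```

In this toy run, "delve" tops the list because it appears twice in the AI sample and never in the human one; common words like "the" and "ideas" are filtered out because their frequencies are similar in both corpora. Real studies of this kind work on millions of documents and control for topic and time period.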

Now back to my hypothesis that it's the fault of our American cousins. According to Alex Hern, an army of human testers are given access to the raw outputs from Large Language Models like ChatGPT, and instructed to try them out: asking questions, giving instructions and providing feedback. This feedback may be just approving or disapproving the outputs, but can be "more advanced, even amounting to writing a model response for the next step of training to learn from." And, here is the rub: "large AI companies outsource the work to parts of the global south, where anglophonic knowledge workers are cheap to hire."

Now back to the word "Delve."

There’s one part of the internet where “delve” is a much more common word: the African web. In Nigeria, “delve” is much more frequently used in business English than it is in England or the US. So the workers training their systems provided examples of input and output that used the same language, eventually ending up with an AI system that writes slightly like an African.

Exploring the future of learning and the relationship between human intelligence and AI

Jazmin Morris & AI4Media / Better Images of AI / Braided Networks 1 / CC-BY 4.0

I don't normally post links to long videos on this site. But this interview of Rose Luckin by Mike Paul, entitled 'Exploring the future of learning and the relationship between human intelligence and AI – An interview with Professor Rose Luckin', is well worth taking some time to watch in full. Professor Rose Luckin is a pioneer in integrating artificial intelligence (AI) with education, and she shares insights on the ethical dimensions of AI deployment in education, emphasising the importance of ethical AI and its potential to support learner-centered methodologies. She discusses the challenges and opportunities generative AI presents in assessment, learning, and teaching, highlighting the need for robust partnerships between educators and technology developers.

She explores what human intelligence is and what machine intelligence is.

She looks at three contexts for AI in education.

The first is AI as a tool in teaching and learning, which at the moment mainly focuses on adaptability but which is rapidly extending.

The second is the need to educate people about AI. People, she says, need to understand AI: how to live and work alongside it, how it can help them, and how to keep safe.

The third is the implications for education systems: how is AI changing the skills people need in different occupations, and are education systems optimal for the world we are living in with AI?

She points to the importance of metacognition and asks whether we do enough to prepare people for changes in the workplace. She suggests we need to adapt education and training systems for life where AI is all around us.

Scenarios of the future of work

Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0

A recent report from the Institute for Public Policy Research (IPPR) said the UK was facing a “sliding doors” moment around its implementation of generative AI, and called on the UK government to ensure that a fair industrial strategy was in place, according to an IPPR press release. The paper, Transformed by AI: How generative artificial intelligence could affect work in the UK - and how to manage it, is by Carsten Jung and Bhargav Srinivasa Desikan.

The report identified two key stages of generative AI adoption: the first wave, which is already under way, and a second wave in which companies will more deeply integrate AI into their processes - a stage at which it suggests as many as 59 per cent of tasks done by workers could be vulnerable to being replaced by AI automation if no intervention occurs. It said that back office, entry-level and part-time jobs were at the highest risk of being disrupted during the first wave - including secretarial, customer service and administrative roles - with women and young people the most likely to be affected, as they are more likely to be in those roles. Those on lower wages were also identified as being the most exposed to being replaced by AI.

The study’s worst-case scenario for the second wave of AI would be around 7.9 million job losses and no gains in gross domestic product (GDP). However, the report suggests that if government and industry are proactive in protecting workers as the use of AI increases, there could be substantial economic benefits.

Its best-case scenario for the second wave said no jobs would be lost, as workers are augmented to work alongside AI, which it claimed could lead to an economic boost of 13 per cent of GDP, around £306 billion (US$386 billion) a year.

IPPR also says that the deployment of AI could free up labour to fill gaps related to unaddressed social needs. For instance, workers could be re-allocated to social care and mental health services, which are currently under-resourced. But it says the modelling shows that there is no single predetermined path for how AI implementation will play out in the labour market. It also urges intervention to ensure that the economic gains are widely spread, rather than accruing to only a few.

Although the research for this report was undertaken in the UK, it seems likely that the different scenarios, while varying in quantity and impact, will also apply in many other countries.

The IPPR press release can be accessed here and the full report can be downloaded from the IPPR website.