Explainable AI

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The European Digital Education Hub has put out a call for members for a working group (they call them "squads") on Explainable AI. They say:

As AI systems become increasingly influential in shaping teaching, educational outcomes and assessment, the demand for transparency and accountability in these systems has grown. Explainable AI aims to bridge the gap between complex AI algorithms on the one side and educators, learners, and administrators on the other by providing clear insights into how AI systems arrive at their conclusions.

This transparency is crucial as it fosters trust among users, who can see and understand the rationale behind AI-driven decisions, recommendations and actions. It also empowers educators to make informed decisions about integrating AI tools into their teaching strategies. Additionally, it ensures that AI systems uphold ethical standards, mitigating potential biases and promoting fairness in educational assessments.

I like the idea and have put in my application (membership is unpaid). But I got to thinking that we should pick this up in the AI Pioneers project. My idea is to try to write one 'explanation' about AI a week and also to publish it as a TikTok video. And of course I would love to involve AI Pioneers members in the whole process. As an aside, we will be unveiling more interactive tools and activities for members of the network in the next few weeks.

But the first step is to decide what topics related to AI and education need explaining. Here are a few I jotted down in a Google doc one evening (after, I have to admit, a couple of cooling glasses of white wine). What do you think, and what have I missed? Add your ideas on the Google doc here.

AI and Assessment

Image by Mohamed Hassan from Pixabay

Maybe the panic over the impact of AI on assessment in education has died down a little, but it has been useful in that it has focused attention on the purpose of assessment and on pedagogic approaches to assessment. Simon Brookes, Executive Dean of the Faculty of Creative & Cultural Industries at the University of Portsmouth in the UK, has started a new blog series on Rethinking Assessment in the Age of AI. His latest post features insights from the University of Melbourne's Centre for the Study of Higher Education. Their recent guide, "Rethinking Assessment in Response to AI" (pdf), offers a thoughtful approach to redesigning assessments that maintain academic integrity without sacrificing pedagogical value, he says.

The guide includes seven critical strategies for improving assessment design and integrity:

1. Shift from product to process: Focus on evaluating students' thinking processes and problem-solving approaches rather than just the final output. This could involve asking students to maintain learning journals, document their research process, or explain their reasoning in solving problems.

2. Incorporate evaluative judgement tasks: Ask students to review or evaluate work against set criteria, encouraging higher-order thinking skills. This might include peer review exercises, critiquing published works, or assessing case studies against industry standards.

3. Design nested or staged assessments: Create assignments that build on each other throughout the semester, allowing for feedback and adaptation. For example, a research project could be broken down into proposal, literature review, draft, and final submission stages, each informing the next.

4. Diversify assessment formats: Use various modalities, such as videos, blogs, podcasts, and animations, which are less susceptible to AI generation. This not only makes cheating more difficult but also allows students to develop a broader range of communication skills.

5. Create authentic, context-specific assignments: Design tasks that mirror real-world scenarios or are highly specific to the subject matter. This could involve analysing local case studies, solving problems specific to your discipline, or applying theories to current events.

6. Include more in-class and group assignments: Incorporate collaborative learning and reduce opportunities for individual cheating. This might involve group presentations, debates, or problem-solving sessions during class.

7. Use oral interviews: Test understanding through verbal responses to unpredictable prompts, making it difficult to use AI. This could range from viva voce examinations to informal discussions about a student's work process.

How are jobs sensitive to AI doing? An update

Nacho Kamenov & Humans in the Loop / Better Images of AI / Data annotators discussing the correct labeling of a dataset / CC-BY 4.0

Jeisson Cardenas-Rubio and Gianni Anelli-Lopez from the University of Warwick Institute for Employment Research have posted an interesting blog on the LMI for All website. They have been using data from scraped job adverts to assess the impact of generative AI on employment in the UK. In their first report, some six months ago and looking at data until mid 2022, they examined the impact on 15 jobs identified by Eloundou et al. (2023) as vulnerable to ChatGPT. In their initial research they found that "although the fear of AI-induced job displacement is understandable, the current evidence from the UK suggests that AI tools such as Chat-GPT are not yet leading to job losses. The initial findings from the United Kingdom (UK) indicate that, following the launch of Chat-GPT, there have been no sizable changes in the labour market trends, particularly for jobs deemed susceptible to these type of AI tools."

The follow up research suggests some change. In the period to December 2023, the data

begins to reveal a subtle yet persistent decline in the share of online job advertisements (OJAs) for jobs considered to be sensitive to the diffusion of GPTs (i.e. jobs where tasks can be automated or augmented by the widespread adoption and integration of GPTs).

They question whether this modest negative trajectory will continue, stabilise at current levels, or reverse as the market adapts and finds a new equilibrium with AI technologies.
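For readers curious about what this kind of trend analysis involves, here is a minimal sketch of how a share-of-adverts series could be computed from scraped vacancy data. The file name and column names (job_adverts.csv, posting_month, soc_code) are my own illustrative assumptions, not the Warwick team's actual pipeline.

```python
import pandas as pd

# Illustrative set of UK SOC codes for occupations deemed GPT-sensitive
# (see the appendix below for the mapping from Eloundou et al., 2023).
SENSITIVE_SOC = {"2434", "2112", "2115", "2134", "2141", "2422", "2423",
                 "2433", "2492", "2493", "3412", "4122", "4131", "3133", "4217"}

# Assumed input: one row per online job advert, with a posting month
# and a 4-digit SOC code assigned during scraping/classification.
ads = pd.read_csv("job_adverts.csv", dtype={"soc_code": str})

# Flag adverts in GPT-sensitive occupations.
ads["sensitive"] = ads["soc_code"].isin(SENSITIVE_SOC)

# Share of online job adverts (OJAs) in sensitive occupations per month.
share = ads.groupby("posting_month")["sensitive"].mean()

print(share.tail(12))  # inspect the most recent year of the series
```

Tracking that share over time, rather than raw counts, is what lets a decline in demand for sensitive occupations be separated from swings in the overall volume of job advertising.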

In conclusion they say:

Incorporating the insights of Acemoglu, Autor, and Johnson (2023), we recognise that selecting a path where technology complements human skills is possible but demands shifts in technological innovation, corporate norms, and behaviours. The goal is to use generative AI to develop new tasks and enhance capabilities across various professions, including teaching, nursing, and technical trades. This approach can help reduce inequality, increase productivity, and elevate wages by enhancing the skill level and expertise of workers.

References

Acemoglu, D. et al. (2023) Can we Have Pro-Worker AI? Choosing a path of machines in service of minds. Centre for Economic Policy Research. Available at: https://cepr.org/system/files/publication-files/191183-policy_insight_123_can_we_have_pro_worker_ai_choosing_a_path_of_machines_in_service_of_minds.pdf

Eloundou, T. et al. (2023) ‘GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models’. Available at: http://arxiv.org/abs/2303.10130.

Appendix

These are the fifteen occupations identified by Eloundou et al. as sensitive to AI. The numbers refer to the UK Standard Occupational Classification; it would be interesting to hear about any similar work undertaken in other European countries.

  1. Survey Researchers – Business and related research professionals (UK SOC code 2434)
  2. Animal Scientists – Biological scientists (2112)
  3. Climate Change Policy Analysts – Social and humanities scientists (2115)
  4. Blockchain Engineers – Programmers and software development professionals (2134)
  5. Web and Digital Interface Designers – Web design professionals (2141)
  6. Financial Quantitative Analysts – Finance and investment analysts and advisers (2422)
  7. Tax Preparers – Taxation experts (2423)
  8. Mathematicians – Actuaries, economists and statisticians (2433)
  9. News Analysts, Reporters, and Journalists – Newspaper and periodical journalists and reporters (2492)
  10. Public Relations Specialist – Public relations professionals (2493)
  11. Proofreaders and Copy Markers – Authors, writers and translators (3412)
  12. Accountants and Auditors – Book-keepers, payroll managers and wages clerks (4122)
  13. Correspondence Clerks – Records clerks and assistants (4131)
  14. Clinical Data Managers – Database administrators and web content technicians (3133)
  15. Court Reporters and Simultaneous Captioners – Typists and related keyboard occupations (4217).


The Digital Native Myth: A Story of Evolution

Remember when people started talking about "digital natives" back in 2001? It was a catchy term for kids growing up surrounded by tech and the internet. The specific terms "digital native" and "digital immigrant" were popularized by education consultant Marc Prensky in his 2001 article entitled Digital Natives, Digital Immigrants, in which he relates the contemporary decline in American education to educators' failure to understand the needs of modern students. His article posited that "the arrival and rapid dissemination of digital technology in the last decade of the 20th century" had changed the way students think and process information, making it difficult for them to excel academically using the outdated teaching methods of the day. Prensky's article was not scientific and there was no research or evidence to back up his idea. But despite this, the idea caught on fast, influencing how we approached education and technology.

Researchers dug deeper and found no real evidence that an entire generation was thinking differently. You'd think that would be the end of it, right? Yet the digital native narrative persists in popular media and in education discourse. A new study set out to investigate the reasons for the persistence of the digital native myth. It analysed the metadata of 1,886 articles related to the term, published between 2001 and 2022, using bibliometric methods and structural topic modelling. The results show that the concept of "digital native" is still both warmly embraced and fiercely criticised by scholars, mostly from western and high-income countries, and the volume of research on the topic is growing. Interestingly, however, the results suggest that what appears to be persistence is actually evolution and complete reinvention: the way the "digital native" concept is operationalised has shifted over time through a series of (metaphorical) mutations. The concept of the digital native is one (albeit a highly successful) mutation of the generational gap discourse dating back to the early 1900s. While the initial digital native literature relied on Prensky's unvalidated claims and waned upon facing empirical challenges, subsequent versions have sought more nuanced interpretations.

The authors found some interesting patterns: what we mean by "digital native" has shifted over time, and the idea is just one chapter in a long history of talking about generational gaps. It's not going to be long before the idea mutates again for those growing up in the age of AI!
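The study itself used bibliometric analysis and structural topic modelling. As a rough illustration of the general idea, the sketch below fits a plain LDA topic model with scikit-learn to a handful of placeholder abstracts; the toy corpus and the choice of LDA as a simpler stand-in for structural topic modelling are my assumptions, not the authors' method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus: in the study this would be metadata from the
# 1,886 "digital native" articles published between 2001 and 2022.
abstracts = [
    "digital natives and digital immigrants in higher education",
    "do students really think differently? evidence against the myth",
    "generational differences in technology use among teachers",
    # ... remaining article abstracts ...
]

# Bag-of-words representation of the corpus.
vectoriser = CountVectorizer(stop_words="english", min_df=1)
doc_term = vectoriser.fit_transform(abstracts)

# Fit a small topic model and inspect what each topic is "about".
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectoriser.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top_words)}")
```

Running a model like this over two decades of publications is what lets the authors see how the vocabulary around "digital natives" shifts and mutates rather than simply repeating itself.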

Want to find out more? Listen to the podcast above or, if you prefer your learning in written form, download the paper below.

Mertala, P., López-Pernas, S., Vartiainen, H., Saqr, M., & Tedre, M. (2024). Digital natives in the scientific literature: A topic modelling approach. Computers in Human Behavior, 152, 108076.

AI in Education is not new!

AI in education is hardly new. I have been working with AI in Vocational Education and Training and careers guidance for something like eight years now, but it has been around for much longer than that. The issue seems to be that not many people really noticed, or were that interested, until ChatGPT came out in November 2022 with its seemingly magic online typewriter churning out text in response to written prompts. But especially now, with increasing realisation of the limits and issues confronting Generative AI and Large Language Models, perhaps it is time to look in a bit more detail at the different uses of AI in education.

As of January 2024, Duolingo was the world's most popular language learning app based on monthly downloads, with around 16.2 million downloads that month.

Duolingo is an app and website that uses a gamified approach to language learning, with lessons that incorporate translating, interactive exercises, quizzes, and stories. It also uses an algorithm that adapts to each learner and can provide personalized feedback and recommendations.

Duolingo has been through many design phases. Formerly, it provided users with different "skills" placed along a "tree", where they could progress by completing every skill above them. The user could upgrade a skill at any time, with the final goal of turning it "golden" or "legendary". In November 2022, Duolingo switched to an AI-assisted path, where the user's learning level is put on a single "path", including the stories. Duolingo also provides a competitive space, such as Leagues, where people can compete against their friends or see how they compare with randomly selected worldwide player groupings of up to 30 users. Rankings in Leagues are determined by the amount of "XP" (experience points) earned in a week. Badges in Duolingo represent achievements earned from completing specific objectives or challenges.

Given the scale of the app, it is interesting to see how Duolingo uses AI.

In a post on LinkedIn, Severin Hacker, co-founder of Duolingo, said:

People know Duolingo for its personalized lessons, but we use AI in many other places across our products.

Our in-house experts spend a lot of time thinking about how AI can support and scale their work so that they can get new content to learners faster than ever before. Here is a non-exhaustive list of where AI enhances the Duolingo experience:

  • Assembling personalized lessons
  • Determining when learners should review old content
  • Generating interactive exercises from expert-created raw content
  • Auto-suggested text in freeform exercises
  • Generating a range of possible accepted translations
  • Grading exercises
  • Creating character voices
  • Generating DuoRadio scripts
  • Generating real-time responses in Role Play
  • Providing context on mistakes with Explain My Answer
  • Triggering character animations with Rive
  • The Duolingo English Test question generation and scoring
  • Deciding when to send push notifications

Fairly obviously, some of these applications - such as generating translations and generating scripts - are based on Generative AI. And according to Wikipedia, in January 2024, after having laid off around ten percent of its contractors, Duolingo began using artificial intelligence to replace tasks usually done by its contractors. But I guess other uses of AI are not based on Gen AI. For example, Duolingo is big on motivation (I should know, after using it for three years), and I guess it is using AI to analyse its huge trove of learner data to decide when and what messages to send to motivate learners.
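One of the non-generative uses in the list above, deciding when learners should review old content, is the kind of thing typically handled with a spaced-repetition model of forgetting. The sketch below is a minimal illustration of that general idea (an exponential forgetting curve with a per-item "half-life"), not Duolingo's actual review system; the function names, thresholds, and numbers are my own assumptions.

```python
def recall_probability(days_since_practice: float, half_life_days: float) -> float:
    """Exponential forgetting curve: probability the learner still recalls
    an item, given time elapsed and the item's estimated half-life."""
    return 2 ** (-days_since_practice / half_life_days)

def should_review(days_since_practice: float, half_life_days: float,
                  threshold: float = 0.5) -> bool:
    """Schedule a review once predicted recall drops below the threshold."""
    return recall_probability(days_since_practice, half_life_days) < threshold

# Illustrative items: (word, days since last practice, estimated half-life in days).
items = [("bonjour", 1.0, 8.0), ("fenêtre", 6.0, 3.0), ("chien", 2.0, 2.0)]

for word, elapsed, half_life in items:
    p = recall_probability(elapsed, half_life)
    print(f"{word}: predicted recall {p:.2f}, review now: {should_review(elapsed, half_life)}")
```

In a real system the half-life for each item would itself be estimated from the learner's practice history, which is where the machine learning (rather than generative AI) comes in.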

So - when we talk about AI in education we need to think beyond the current obsession with Generative AI.