AI: How do we make explainability open to practitioners and practice?

Over the last six or so weeks I have been doing a series of interviews about AI in education for the AI Pioneers project. One issue was who to ask. Just about everyone interested in the future use of technology for education has something to say about AI right now. I stumbled towards two main criteria. First, to ask people I had worked with before and whose opinions I valued. And second, to include people who had worked on continuing professional development for teachers and trainers. At the end of the day, the interviews, together with a survey, form the main part of a Work Package in the AI Pioneers project looking at the competences required by teachers and trainers for working with AI, with an objective of extending the EU DigCompEdu framework.

This week I am going to publish the (edited) transcripts of four of the interviews. I will follow this up next week with comments on some of what I think are the major issues, together with a podcast where you can listen to ideas from those interviewed in their own voice.

The first interview is with Helen Beetham.

Helen Beetham is an experienced consultant, researcher and educator, based in the UK and working mainly in the field of digital education in the university sector. Publications include Rethinking Pedagogy for a Digital Age (Routledge 2006, 2010 and 2019, with Rhona Sharpe), Rethinking Learning for a Digital Age, numerous book chapters and peer-reviewed academic papers, including a recent edited special issue of Learning, Media and Technology (2022). Current research centres on critical pedagogies of technology, and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

She has advised global universities and international bodies on their digital education strategies, and led invited workshops at over 40 universities around the world as well as working on the development of DigCompEdu. Her Digital Capabilities framework is widely used in UK Education, in Health Education, and in other national education systems.

Helen went to university to study AI! She is currently writing up her research into digital criticality in education and writing a substack, 'Imperfect Offerings', focused on the challenges of Generative AI.

Digital Literacy, Capabilities and Ethics

Helen explained that her work on digital literacy was based on a Capabilities Framework intended as a "steady state" that could nevertheless be updated to address changing technologies. Even more than most, she said, teachers and trainers lack digital capabilities, and technologies have to be available to them without being subject to the big technology companies. They need online materials and learning routes and pathways, not just ones imposed from above. Who can build Foundation Models, she asked, given their complexity and cost? There have to be choices for colleges and collectives. Education needs its own language models, foundation models and data. AI cannot be ethical if there is no choice.

This means we need the capability within the education sector to build ethical versions of foundation models. Although she has always considered pedagogy to come first in the use of technology for teaching and learning, today we also need technical models within the education sector itself. We need to stop the brain drain from the sector and mobilise the community for development. We can reduce the risks of AI by using open development models.

Helen asked what it means to be ethical. She pointed to other developments in education, such as the campaign to decolonize education.

Conversations and Practice

The contribution of frameworks like the Capabilities Framework or DigCompEdu is that they lead to conversations, and those conversations lead to changing practice. Bodies like Jisc and the European Union create the initiatives and spaces for those conversations.

Explainability

Can we open the black box around Generative AI, she asked. Explainability is a good goal, but how do we make explainability open to practitioners and practice? She has been trying to develop explainability by looking at the labour situation and the restructuring of employment through AI.

A further need for explainability relates to the meaning of words and concepts like learning, intelligence and training. All the models claim to use these processes, but there is a need to explain just what they mean in the development of foundation models and how such processes are applied in AI.

Wikipedia, weightings and AI

In a wide-ranging interview, another issue was the use of Wikipedia in large language models. How were weightings derived for data from Wikipedia, and was Wikipedia in fact over-influential in the language models? And what should Wikipedia do now – should it develop its own AI-powered front end?

Future of employment

Looking at the future of employment, it seems likely that fewer skills may be needed to do the same things that were undertaken by skilled workers prior to AI. Universities are assuming that they can train students to take up high-level roles that GPT cannot touch, yet it is precisely these roles that automation is reducing, with increasing impact on the professional middle classes. It seems more likely that GPT automation will have less effect on technically skilled work, especially those tasks which require unpredictable activities in less certain contexts – jobs that are based on vocational education and training.

You can subscribe to Helen Beetham's substack, 'Imperfect Offerings' at https://helenbeetham.substack.com/.

AI and the digital divide

How do we make AI accessible? If you're already excluded from the system, how can you possibly catch up when technology is changing exponentially?
I will be moderating a breakout session on making AI familiar and fun to foster learner motivation at this online workshop for trainers - Creating An AI Literacy Training Model.
These free sessions are run by Digital Collective, a non-profit organisation that aims to tackle the increasing digital divide, which is not only creating an alarming distance to the labour market for individuals in vulnerable situations but also causing social exclusion as society moves into the digital age.
10:30 CET, Thursday 26th October 2023

Pioneers in Practice – Word Clouds

In this series, I'm re-visiting our best-loved teaching and training resources from the past 15 years of projects and updating them to reflect the changing world of ed-tech. Here are some ideas for using Word Clouds, originally contributed by Nic Daniels, one of the old Taccle2 team in Wales, and updated here by me, Angela Karadog, to incorporate AI tools. Whilst this particular piece has been an interesting research journey, it's proof that adding AI doesn't necessarily improve an already useful tool.

We love word cloud software! It’s so simple to use and the possibilities are endless, for any topic at any level. We’ve outlined how to use it as a fun and quick lesson warm-up activity. It's a great tool for learners to use too.

Using https://edwordle.net, either type or cut and paste the focus text into the large white box. This traditionally includes topic vocabulary, spelling lists, poems or text extracts from books, to name but a few sources.

By now you’re probably used to making word clouds for spelling and vocabulary games, e.g. put adjectives you’d like to revise in the box and press go to create a word cloud using the words you’ve provided. There's now also a cool updated version which allows you to fit the words into a shape, like I've done with this article in the illustration above.
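
If you'd rather generate a cloud programmatically, for example to batch-produce clouds for several spelling lists, the sketch below uses the open-source Python wordcloud package. This is my own illustrative alternative, not something edwordle provides; the sample text and image sizes are just placeholders.

```python
# A minimal sketch of generating a word cloud in code, using the
# open-source Python "wordcloud" package (pip install wordcloud).
# Illustrative only - not the edwordle.net tool itself.
from wordcloud import WordCloud

focus_text = """
adjectives brave curious gentle brilliant fierce quiet
brave curious brilliant gentle gentle
"""  # paste your spelling list, poem or text extract here

cloud = WordCloud(
    width=800,
    height=400,
    background_color="white",  # swap for a softer colour if white is too harsh
).generate(focus_text)

cloud.to_file("word_cloud.png")  # ready to display on the whiteboard
```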

So what about AI? Check out this Word Cloud generator from speakAI, which uses natural language processing so you can also add a transcript, audio or video data as well as unstructured text, poetry or prose, and the AI will then ‘Analyze’ the text to produce a visually appealing cloud with interactive words and, in the paid version, multiple ways to visualise your data. This tool from Shulex has done away with the need for a source text and will have ChatGPT generate the words for your cloud from a single key word - sort of an AI thesaurus in pastel.
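
Out of curiosity, here is a rough sketch of that 'AI thesaurus' idea: asking a language model for words related to a single key word, then passing them on to whichever cloud generator you prefer. It is my own illustration, not how Shulex works under the hood; the openai client, model name and prompt are all assumptions on my part.

```python
# A rough sketch of the "AI thesaurus" idea: ask a language model for
# words related to one key word, then feed them to a word cloud tool.
# Not Shulex's implementation - model name and prompt are assumptions,
# and an OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()

key_word = "photosynthesis"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[{
        "role": "user",
        "content": f"List 20 single words closely related to '{key_word}', "
                   "separated by commas, with no other text.",
    }],
)

related_words = [w.strip() for w in response.choices[0].message.content.split(",")]
print(related_words)  # paste these into your favourite word cloud generator
```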

 

Teaching WITH AI

Here’s Nic's suggested activity with some minor tweaks.

Display the Wordle on the interactive whiteboard for a set time (30 seconds is usually enough) and the learners must make a record of as many words as they can in the allotted time.

If using edWordle (http://www.edwordle.net/), be sure to select easy-to-read orientations and fonts.

Lastly, if you have pupils who prefer coloured screens as opposed to the harsh white, you can change the background in the ‘bg color’ box by clicking the FFFFFF hex code.

What do I need?

For a whole class activity, an interactive whiteboard or projector is pretty essential. You can, however, create Wordles for pupils to use individually or in pairs, on a device or printed out.

Timing device – I used an egg timer! But a clock, watch or stop-clock would do the same job.

This is one online resource that takes less time to create than doing similar activities non-electronically. Not only is it quick, but it’s endlessly adaptive! To create something similar on a poster or a conventional writing whiteboard/blackboard would take at least an hour. This is done and ready to use in under 5 minutes!

Tips
Beforehand, you may like to create a Word Cloud and ask learners which colour scheme, font and layout makes it easier for them to recognise the words. This is especially important if you have learners who read books using a coloured overlay.

The more words you use, the more complex the Word Cloud, so for younger or less able learners you may choose only 10 words. You can use the same word as many times as you want; this can also simplify the activity.

For us, the biggest attractions of using a Word Cloud are that:

  • it’s fun!
  • it really kick-starts the lesson, ensuring learner engagement from the beginning.
  • you can save your Word Cloud and use it again and again.
  • it’s endlessly adaptive

Teaching ABOUT AI

The AI-backed Word Cloud software picks out words and sentiments in the text that it has learned are useful to us, whereas the non-AI software only looks at word counts and returns the most frequent words. The non-AI clouds will automatically contain numbers and mis-spellings unless you specifically tell the tool to exclude them.
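
To make those 'rules' a little more concrete, here is a minimal sketch of the word-frequency counting a non-AI generator performs. The Python code is mine, purely for illustration, and isn't taken from any of the tools above; note how numbers and typos survive unless you filter them out explicitly.

```python
# A rough sketch of what a non-AI word cloud generator does:
# split the text into words, count them, and size words by frequency.
import re
from collections import Counter

text = "The cat sat on the mat. The cat saw 3 mice. Teh cat purred."

# Lower-case and pull out word-like tokens (numbers and typos included).
tokens = re.findall(r"[A-Za-z0-9']+", text.lower())
counts = Counter(tokens)

# Optional filtering you would have to ask for explicitly,
# e.g. dropping pure numbers - a simple tool won't do this for you.
counts = Counter({word: n for word, n in counts.items() if not word.isdigit()})

# The most frequent words get the biggest font in the cloud.
for word, n in counts.most_common(10):
    print(word, n)
```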

With that in mind, let's compare and contrast the same text fed into both types of cloud generator.

Use the activity as a basis for discussion:

  1. What rules might each word cloud tool be following?
  2. As humans, what are we hoping to see when we create a word cloud?
  3. Does AI have a role to play in analysing texts?

As an example I pasted exactly the same text, the first 20 pieces of 'Advice to a Wife' from Project Gutenberg, into some word cloud generators. The results did surprise me: I was genuinely expecting the AI to do something clever. Personally I prefer the non-AI result, but if I were analysing my web content to improve its SEO, or adapting a campaign to give it a more positive spin, the sentiment analysis tools would come into play.
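
For anyone curious what that sentiment layer adds beyond raw counts, here is a small sketch using NLTK's VADER sentiment analyser. It's simply one well-known open-source option chosen for illustration; I don't know which models the commercial tools actually use.

```python
# A small illustration of sentiment scoring, the kind of signal the
# AI-backed tools layer on top of word counts. Uses NLTK's VADER
# analyser purely as an example - the commercial tools may differ.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the lexicon

analyser = SentimentIntensityAnalyzer()

sentences = [
    "This word cloud really kick-starts the lesson!",
    "The results were disappointing and dull.",
]

for sentence in sentences:
    scores = analyser.polarity_scores(sentence)
    # 'compound' runs from -1 (very negative) to +1 (very positive)
    print(f"{scores['compound']:+.2f}  {sentence}")
```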

 

Pioneers in Practice – teaching with and about AI

Over the past 15 years or so, Pontydysgu has created hundreds of free digital Open Educational Resources for teachers, trainers and educators to use, re-use and adapt to their own needs. Given the recent advances in AI tools, generative AI, natural language processing and so on, I thought it would be pertinent to revisit our old, well-loved resources and give them an AI-inclusive update; I'll likely be including a few new ones too. Over the coming weeks, expect a return to the chalk-face with scenarios, practical ideas, hints and tips in the Ange's Scribbles corner of the Pontydysgu blog. I'll also be exploring new pedagogies and old learning theories with AI in mind.

As always if you have a great idea for a contribution, text, video or podcast, get in touch.

Featured image generated by pixlr.com

Prompt: water colour painting of a middle aged white female teacher with dark-pink curly shoulder-length hair wearing a green v-neck dress and red rimmed glasses working at a mac computer