Is AI just another tool, or does it redefine the essence of competence itself?

This is the second of our interviews with experts on AI in education for the AI Pioneers project. The interview is with Ilkka Tuomi. Ilkka Tuomi is the Founder and Chief Scientist at Meaning Processing Ltd, an independent public research organization located in Helsinki, Finland. He previously worked at the European Commission's Joint Research Centre (JRC), Institute for Prospective Technological Studies, Seville, Spain. In 2020 he produced a background report for the European Parliament on 'The use of Artificial Intelligence (AI) in education' and has recently produced a study, 'On the Futures of Technology in Education: Emerging Trends and Policy Implications', published as a JRC Science for Policy Report. He writes and comments regularly on AI on LinkedIn.

[Q1] Can you tell us about the motivation behind your recent publication for the EC Joint Research Centre and the future of technologies in learning?

[A1] My recent publication for the JRC was motivated by my curiosity about the future of learning and the rapidly changing technology landscape. I began by asking which technologies would be essential for policy considerations over the next decade. From this, I compiled a list of technologies that seemed promising for initial discussions. In the process, it became clear that a fundamentally new infrastructure for knowing and learning is emerging. We call this “the Next Internet” in the report. My goal was to both initiate a conversation and delve into the connections between these emerging technologies and new educational models. More broadly, I was interested in how these advancements might transform the education system itself. An essential part of my research also revolved around the evolving dynamics of knowledge production, the importance of innovation in the knowledge society, and the implications these have for education. For instance, the emerging sixth-generation networks offer intriguing sociological and cognitive perspectives, including on the impact of AI on learning.

[Q2] How do new cognitive tools influence our understanding of learning?

[A2] These cognitive tools aren't just emerging as solutions to automate current practices. They delve much deeper, challenging our very understanding of what learning means and how it occurs. My perspective on this is shaped by my background in both AI and learning theory. I approach this topic from both a sociological viewpoint and in terms of how digital transformations impact society as a whole.

[Q3] Could you share some of your background and experiences in the field of AI?

[A3] When I was younger, I was deeply involved in neural networks research and even co-authored a book on the philosophy of AI back in 1989. Around this time, I joined the Nokia Research Center. Initially, I worked with knowledge-based systems and expert systems, in other words the good-old-fashioned AI. Over time, I transitioned towards human-computer mediated interaction and knowledge management. The latter is, of course, very much about learning and knowledge creation. While the buzz around AI is louder than ever today, I find a dearth of profound discussions on the topic. There's a pressing need for a deeper, more thoughtful debate.

[Q4] What impact do you foresee AI having on vocational education?

[A4] AI's impact on vocational education is twofold. Firstly, we're still uncertain about how AI will reshape vocations and the job market. However, it's evident that the essence of vocational training is undergoing change. Technologies, especially generative AI and other machine learning methodologies, will dramatically influence occupational structures and content. This will inevitably change what people learn. Much of what's taught in vocational schools today might become obsolete or require significant modifications. Many educators are concerned that the skills and knowledge they impart today may become irrelevant in just five years. On the other hand, AI will also change how we learn.

[Q5] How can these technologies be integrated into the educational process?

[A5] These technologies offer immense potential for educational applications. Already, there are tools that enable a generative AI system to process, for instance, technical handbooks and repair manuals. With this knowledge, the AI can then answer domain-specific queries, providing up-to-date information about tools and technologies on demand. Consider a trainee in the construction industry; they could access building schematics through AI without having to study them exhaustively. Multimodal AI interfaces could allow them to photograph an unfamiliar object and get guidance on its use. Such an application can be used in fields like automotive repair, where a mechanic can photograph a fault and receive advice on necessary parts and repair procedures. These tools not only aid in teaching but can also be directly implemented in professional settings. Such applications particularly resonate with vocational education, transforming the very core of professional knowledge and identity.

In today's rapidly evolving digital age, vocational education stands at a unique crossroads. At its core, vocational education is profoundly hands-on and concrete, focusing not on abstract knowledge but on tangible skills and real-world applications. It's about doing, making, and creating. And this is where multimodal Generative AI now comes into play.

Generative AI has the potential to integrate the concrete world with the abstract realm of digital information. Real-world objects and practical training exercises can be complemented by augmented and virtual reality environments powered by AI. We're on the brink of a transformative shift where AI will not just assist but redefine vocational training.

Furthermore, the economic implications of AI in this sphere are revolutionary. In the past, creating detailed digital representations of complex machinery, like airplanes, was a costly and time-consuming endeavor. Now, with Generative AI, these models can be produced with increased efficiency and reduced costs. Whether it's for pilot training or for a mechanic understanding an engine's intricate details, AI radically simplifies and economizes the process.

[Q6] Do we need to redefine what we mean by competence?

[A6] Traditionally, competence has been perceived as an individual's capability to perform tasks and achieve goals. It's often broken down into knowledge, skills, and attitudes. Education has historically focused on what I have called the epistemic competence components. The move towards “21st century skills and competences” is fundamentally about a shift towards behavioral competence components that include aptitudes, motives, and personality traits ranging from creativity to social capabilities.

However, an essential nuance often overlooked in our understanding of competence is the external environment. For instance, a highly skilled brain surgeon is only as competent as the tools and infrastructure available to them. It's not just about what resides in the individual's mind but also about the societal structures, technological tools, and the overarching environment in which they operate.

Reflecting on education and technology, the narrative becomes even more intricate. An educator's competence cannot be solely gauged by their ability to use digital tools. The broader context—whether a school has the required digital infrastructure or the societal norms and regulations around technology use—plays a pivotal role. Emphasizing technology for technology's sake can sometimes be counterproductive. The question arises: is AI just another tool, or does it redefine the essence of competence itself?

[Q7] What are the major challenges of AI?

[A7] Looking back, one can find parallels in the challenges faced by earlier technological innovations. My experience in the 1990s at Nokia serves as a poignant example. While AI was once viewed as a magic bullet solution, it soon became evident that the challenges in organizations were as much social as they were technological.

Communication is the heart of learning and innovation. It's not merely about making the right decisions or processing vast amounts of data. Instead, it's about the rich tapestry of human interactions that shape ideas, beliefs, and knowledge. The introduction of new technologies often disrupts existing knowledge structures and requires substantial social adaptation. The process, thus, becomes more about managing change and facilitating communication.

[Q8] What are the implications of AI for agency?

[A8] Humans have always externalized specific cognitive tasks to tools and technologies around them. In this light, AI doesn't stand as a looming threat but a natural progression, a tool that could enhance human cognition beyond our current boundaries. But AI is also different. Its increasing human-like interactivity and capabilities challenge our traditional, anthropocentric views on agency. In fact, one key message in our JRC report was that we need to understand better how agency is distributed in learning processes when AI is used.

Innovations like AI don't just supplement our existing reality—they redefine it. Grasping this intricate dance between societal evolution and our shifting reality is essential to fathom AI's transformative potential.

[Q9] How will AI shape the future of education?

[A9] AI's purpose in education should be to enhance human capabilities. This enhancement isn't limited to just an individual's cognitive functions; it spans the social and behavioral realms too. In contrast to the post-industrial era, when computers were increasingly used to automate manual and knowledge work, AI and the emerging next Internet are now fusing the material world and its digital representations into an actionable reality. This is something we have not seen before. The material basis of social and cultural production is changing. As a result, the nature of knowing is changing as well. My claim has been that, in such a world, education must reconceptualize its social objectives and functions. The development of human agency might well be the fundamental objective of education in this emerging world. We need to learn, not only how to do things, but also what to do and why. This may, of course, also require rethinking the futures of vocational education and training.


AI: How do we make explainability open to practitioners and practice?

Over the last six or so weeks I have been doing a series of interviews about AI in education for the AI Pioneers project. One issue was who to ask. Just about everyone interested in the future use of technology for education has something to say about AI right now. I stumbled towards two main criteria. Firstly, to ask people I had worked with before and whose opinions I valued. And secondly, to include people who had worked on continuing professional development for teachers and trainers. At the end of the day, the interviews, together with a survey, form the main part of a Work Package in the AI Pioneers project looking at the competences required by teachers and trainers for working with AI, with an objective of extending the EU DigCompEdu framework.

This week I am going to publish the (edited) transcripts of four of the interviews. I will follow this up next week with comments on some of what I think are the major issues, together with a podcast where you can listen to ideas from those interviewed in their own voice.

The first interview is with Helen Beetham.

Helen Beetham is an experienced consultant, researcher and educator, based in the UK, and working mainly in the field of digital education in the university sector. Publications include: Rethinking pedagogy for a digital age (Routledge 2006, 2010 and 2019, with Rhona Sharpe), Rethinking learning for a digital age, numerous book chapters and peer reviewed academic papers, including recently an edited Special issue of Learning, Media and Technology (2022). Current research centres on critical pedagogies of technology, and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

She has advised global universities and international bodies on their digital education strategies, and led invited workshops at over 40 universities around the world as well as working on the development of DigCompEdu. Her Digital Capabilities framework is widely used in UK Education, in Health Education, and in other national education systems.

Helen went to university to study AI! She is currently writing up her research into digital criticality in education and writing a substack, 'Imperfect Offerings', focused on the challenges of Generative AI.

Digital Literacy, Capabilities and Ethics

Helen explained that her work on digital literacy was based on a Capabilities Framework intended as a “steady state” that could nevertheless be updated to address changing technologies. She said that, even more than most, teachers and trainers lack digital capabilities, and that technologies have to be available without being subjected to big technology companies. Teachers need online materials and learning routes and pathways, not just ones imposed from above. Who, she asked, can build Foundation Models, given their complexity and cost? There have to be choices for colleges and collectives. Education needs its own language models, Foundation Models and data. AI cannot be ethical if there is no choice.

This means we need the capability in the education sector to build ethical versions of Foundation Models. Although she has always considered pedagogy to come first in the use of technology for teaching and learning, today we also need technical models within the education sector. We need to stop the brain drain from the sector and mobilise the community for development. We can reduce the risks of AI by using open development models.

Helen asked what it means to be ethical. She pointed to other developments in education, such as the campaign to decolonise education.

Conversations and Practice

The contribution of frameworks like the Capability Framework or DigCompEdu is that they lead to conversations, and those conversations lead to changing practice. Bodies like Jisc and the European Union create initiatives and spaces for those conversations.

Explainability

Can we open the black box around Generative AI, she asked. Explainability is a good goal, but how do we make it open to practitioners and practice? She has been approaching explainability through the labour situation and the restructuring of employment through AI.

A further need for explainability relates to the meaning of words and concepts like learning, intelligence and training. All the models claim to use these processes, but there is a need to explain just what they mean in developing Foundation Models and how such processes are applied in AI.

Wikipedia, weightings and AI

In a wide-ranging interview, another issue was the use of Wikipedia in large language models. How were weightings derived for data from Wikipedia, and is Wikipedia in fact over-influential in the language models? What should Wikipedia do now – should it develop its own AI-powered front end?

Future of employment

Looking at the future of employment, it seems likely that fewer skills may be needed to do the same things that were undertaken by skilled workers prior to AI. Universities are assuming that they can train students to take up high-level roles that GPT cannot touch, yet it is precisely these roles that automation is reducing, with increasing impact on the professional middle classes. It seems more likely that GPT automation will have less effect on technically skilled work, especially those tasks which require unpredictable activities in less certain contexts – jobs that are based on vocational education and training.

You can subscribe to Helen Beetham's substack, 'Imperfect Offerings' at https://helenbeetham.substack.com/.


AI and the digital divide

How do we make AI accessible? If you're already excluded from the system how can you possibly catch up when technology is changing exponentially?
I will be moderating a breakout session on making AI familiar and fun to foster learner motivation at this online workshop for trainers - Creating An AI Literacy Training Model.
These free sessions are run by Digital Collective, a non-profit organisation that aims to tackle the increasing digital divide, which is not only creating an alarming distance to the labour market for individuals in vulnerable situations, but also causing social exclusion as society moves into the digital age.
10:30 CET, Thursday 26th October 2023

Pioneers in Practice – Word Clouds

In this series, I'm re-visiting our best loved teaching and training resources from the past 15 years of projects and updating them to reflect the changing world of ed-tech. Here are some ideas for using Word Clouds, originally contributed by Nic Daniels, one of the old Taccle2 team in Wales, and updated here by me, Angela Karadog to incorporate AI tools. Whilst this particular piece has been an interesting research journey, it's proof that adding AI doesn't necessarily improve an already useful tool.

We love word cloud software! It’s so simple to use and the possibilities are endless, for any topic at any level. We’ve outlined how to use it as a fun and quick lesson warm-up activity. It's also a great tool for learners to use too.

Using https://edwordle.net, either type or cut and paste the focus text into the large white box. This traditionally includes topic vocabulary, spelling lists, poems or text extracts from books, to name but a few sources.

By now you’re probably used to making word clouds for spelling and vocabulary games, e.g. put adjectives you’d like to revise in the box and press go to create a word cloud using the words you’ve provided. There's now also a cool updated version which allows you to fit the words into a shape, like I've done with this article in the illustration above.

So what about AI? Check out this Word Cloud generator from speakAI, which uses natural language processing, so you can also add a transcript, audio or video data as well as unstructured text, poetry or prose; the AI will then ‘Analyze’ the text to produce a visually appealing cloud with interactive words and, in the paid version, multiple ways to visualise your data. This tool from Shulex has done away with the need for a source text and will have ChatGPT generate the words for your cloud from a single key word – sort of an AI thesaurus in pastel.


Teaching WITH AI

Here’s Nic's suggested activity with some minor tweaks.

Display the Wordle on the interactive whiteboard for a set time (30 seconds is usually enough) and the learners must make a record of as many words as they can in the allotted time.

If using edWordle (http://www.edwordle.net/), be sure to select easy-to-read orientations and fonts.

Lastly, if you have pupils who prefer coloured screens as opposed to the harsh white, you can change the background in the ‘bg color’ box by clicking the FFFFFF value.

What do I need?

For a whole class activity, an interactive whiteboard or projector is pretty essential. You can however create Wordles for pupils to use individually or in pairs on a device or printed.

Timing device – I used an egg timer! But a clock, watch or stop-clock would do the same job.

This is one online resource that takes less time to create than doing similar activities non-electronically. Not only is it quick, but it’s endlessly adaptive! To create something similar on a poster or a conventional writing whiteboard/blackboard would take at least an hour. This is done and ready to use in under 5 minutes!

Tips
Beforehand, you may like to create a Word Cloud and ask learners which colour scheme, font and layout makes it easier for them to recognise the words. This is especially important if you have learners who read books using a coloured overlay.

The more words you use, the more complex the Word Cloud, so for younger or less able learners you may choose only 10 words. You can use the same word as many times as you want, this can also simplify the activity.

For us, the biggest attractions of using a Word Cloud are that:

  • it’s fun!
  • it really kick-starts the lesson, ensuring learner engagement from the beginning.
  • you can save your Word Cloud and use it again and again.
  • it’s endlessly adaptive

Teaching ABOUT AI

The AI-backed Word Cloud software picks out words and sentiments in the text that it has learned are useful to us, whereas the non-AI software is only looking at word counts and returning the most frequent words. The non-AI clouds will automatically contain numbers and mis-spellings unless you specifically tell the tool to exclude them.
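To make the contrast concrete, here is a minimal sketch in Python of what a simple, non-AI cloud generator does under the hood: tokenise the text, tally word counts, and optionally filter out numbers and stopwords. (The `word_frequencies` helper and the sample text are my own illustration, not any particular tool's code.)

```python
import re
from collections import Counter

def word_frequencies(text, exclude_numbers=True, stopwords=None):
    """Count words the way a simple, non-AI cloud generator does:
    no understanding of meaning, just tokenising and tallying."""
    stopwords = set(stopwords or [])
    # Lowercase, then pull out runs of letters, digits and apostrophes.
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return Counter(
        t for t in tokens
        if t not in stopwords and not (exclude_numbers and t.isdigit())
    )

sample = "The cat sat on the mat. The cat saw 3 mice."
freqs = word_frequencies(sample, stopwords={"the", "on"})
print(freqs.most_common(3))  # 'cat' leads with a count of 2; '3' is filtered out
```

An AI-backed tool starts from the same kind of counts but layers meaning on top – merging word forms, dropping mis-spellings, scoring sentiment – which is exactly the difference the comparison activity is designed to surface.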

With that in mind, let's compare and contrast the same text fed into both types of cloud generator.

Use the activity as a basis for discussion:

  1. What rules might each word cloud tool be following?
  2. As humans, what are we hoping to see when we create a word cloud?
  3. Does AI have a role to play in analysing texts?

As an example, I pasted exactly the same text – the first 20 pieces of 'Advice to a Wife' from Project Gutenberg – into some word cloud generators. The results did surprise me: I was genuinely expecting the AI to do something clever. Personally I prefer the non-AI result, but if I were analysing my web content to improve the SEO, or adapting my campaign to give it a more positive spin, the sentiment analysis tools would come into play.