Context is key to how we implement AI in teaching and learning

Here is the latest in our series of interviews with educators about Artificial Intelligence.

About

Arunangsu Chatterjee is Professor of Digital Health and Education in the School of Medicine, Faculty of Medicine and Health at the University of Leeds. 

He is the Dean of Digital Transformation for the University, responsible for driving forward the delivery of the University’s Digital Transformation strategy, with a particular focus on leading change programmes and projects in digital education, digital research and digital operations. He has academic responsibility for developing relevant digital transformation programmes, securing academic buy-in to change initiatives, and leading those initiatives from project activity into business as usual. He works closely with project teams, professional services, and academic Faculties and Schools to lead and support digital transformation initiatives. As Professor of Digital Health and Education he works with the UK National Health Service on developing a health competency framework.

Digital Transformation and Infrastructure

Educational institutions need to upgrade their infrastructure for researching and implementing AI, including the provision of high-performance CPUs and GPUs giving access to high-performance computing. Institutions also need to recruit software engineers. This is problematic due to high labour market demand for such engineers and the limited pay available through public institutions.

“It is critical that we improve the research infrastructure and use AI to join the dots.” Arunangsu is aware that the cost of developing AI in areas with very high data needs, such as healthcare, may be too much for universities and certainly for vocational and adult education. But he believes AI can be used to develop the infrastructure, for instance through developing business and research platforms and through analysing grant applications.

Implementation and Adoption

Arunangsu says that AI has reinforced the need for interdisciplinary networks.

Institutions should develop an AI roadmap with a bottom up and challenge-based approach. Partnerships are important especially at a regional level. The roadmap should be a collective plan with opportunities for everyone to buy in – including from different economic sectors.

Teacher and student roles

Banning AI from educational institutions is not helpful. We cannot stop students using it. We need to educate graduates in using AI. There are three key competences:

  • Tool awareness and selection
  • Prompt engineering
  • Tool chaining

We need training for staff as well as students in these competences.
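The third competence, tool chaining, is the least self-explanatory of the three. A minimal, hypothetical sketch may help: the output of one tool becomes the input of the next. The function names below are illustrative stand-ins, not real AI services.

```python
# A minimal, hypothetical sketch of "tool chaining": the output of one
# tool feeds the input of the next. Real chains would call AI services;
# here plain functions stand in for the tools.

def summarise(text: str) -> str:
    """Stand-in for a summarisation tool: keep only the first sentence."""
    return text.split(".")[0].strip() + "."

def translate(text: str) -> str:
    """Stand-in for a translation tool: tag the text instead of translating."""
    return f"[translated] {text}"

def chain(text, tools):
    """Apply each tool in order, feeding each output to the next tool."""
    for tool in tools:
        text = tool(text)
    return text

result = chain("AI reshapes education. It changes assessment too.",
               [summarise, translate])
print(result)  # [translated] AI reshapes education.
```

The competence lies less in the code than in knowing which tools to select and in what order to combine them, which is why tool awareness and prompt engineering come first in the list.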

Context is key to how we implement AI in teaching and learning. Course design needs to incorporate Explainable AI. We can use AI to mine curricula and find the gaps.

We can look at the context of course and curricula provision in a region and its social and economic role.

Ethical and Social Implications

Arunangsu is less optimistic about the impact of AI on jobs. While he is opposed to the proposed three month moratorium on AI development, he sees a need for a slowdown and a moratorium on job losses from AI. In an educational context he sees a high risk that AI will replace learning and content designers. He believes employers should not be using AI to cut costs but rather to improve productivity and quality. “Intelligent automation needs care. We need a new welfare system and pay if we do not want to end up with civil unrest. AI-led job cuts also pose a big health challenge.”

Arunangsu drew attention to the newly released Leeds University Guidance to Staff on the use of Artificial Intelligence and in particular to the instruction not to use online AI detection tools. Instead, he said the University is looking at new forms of assessment.

What does it mean to live in a world with AI?

This is the third in our series of expert interviews about AI and education.

Linda Castaneda is Associate Professor of Educational Technology at the Universidad de Murcia in Spain, where she teaches preservice teachers and other professionals working in learning, and undertakes continuing professional development activities for teachers and university professors.

She has recently led a research project for the European Joint Research Centre on the European frameworks and tools for technology use by teachers and trainers:

  • DigCompOrg – supporting the development of digital capacities within educational organisations;
  • DigCompEdu – the European framework for supporting teachers and trainers in the use of technology for teaching and learning;
  • SELFIE and SELFIE Work Based Learning – tools for the self-assessment of the readiness of both organisations and individuals for using technology for teaching and learning.

The study analysed the use of the Frameworks to support educational digital transformation in Spain, seen as one of the European Member States with the deepest and most extensive use of the frameworks and tools, in order to learn from its practical experience. The aim was to extract lessons on how to adapt, apply and use the frameworks and tools to digitally transform the Spanish educational system and to increase educators' digital competences together with their educational organisations' digital capacities.

This report highlights the importance of DigCompEdu as a framework that goes beyond the instrumental view of the digital transformation of education, helping institutions to anticipate, design and structure it. SELFIE is seen as a fundamental tool for school awareness and digital planning. Furthermore, the results consolidate the evidence of diverse approaches to digital transformation, especially considering the context of Spain, where the competence of education is at the regional level.

Linda is also involved in two projects devoted to fostering DigCompEdu among university teachers, nationally and internationally.

Graham Attwell interviewed Linda Castaneda in September 2023.

Competence Frameworks

“The major problem is how to engage participants in the process of educational digital transformation. Teachers' training is not meaningful. Students are not motivated. Teachers and trainers complain about how useful the programme is. One reason may be that the Framework of Competences – DigCompEdu – is being taken as a curriculum. But before it is useful and can be applied, its contents must be localised; the jargon needs to be translated into something close to the day-to-day experience of teachers – it needs to be based on their practice and to be important for them. At the moment many teachers do not appreciate how useful the Framework could be for their area of education.”

We need to translate the global framework into something which they can take ownership of. We must realise that different territories, as well as different areas of knowledge – e.g. engineers and lawyers – have little in common. Translation is needed to make it meaningful to them.

Institutions and governments are backing the use of the Frameworks because they consider them a way to be connected with a global – European – vision of the future of education, and also because they are supported by European Union money. It is all about politics and impact.

Even though the frameworks offer a clear approach to what to do and a perspective on where to go, they come from the Anglo-Saxon epistemic tradition about learning and education, which focuses on course- and time-based learning. Education – taken in a wider sense – should include informal learning from outside the institution.

Additionally, DigCompEdu is mainly based on Bloom's taxonomy, but it largely ignores the issues of metacognition and agency – the power to enact self-directed learning. These are not in the framework itself but do appear in discussion of the framework.

Digital transitions

Digital transition in education is too often focused on budgets and governance, not on approaches to teaching and learning. We try to implement as many devices as possible, but without challenging poor pedagogical approaches. We saw the problems with that approach in the Covid emergency. Everything about digital transformation becomes about teaching using a digital device or online. The most advanced technology most teachers – especially at university – know is Turnitin. This needs a professional approach: teachers are professionals of education.

The Challenge of AI in education

The challenge of AI is what it means for day-to-day work, for human everyday life. What does it mean to live in a world with AI?

Now we have a very restrictive curriculum. There is a growing debate over how to reshape courses. We need to rethink the purpose of education in each case, and the role of teachers in that new definition of education – e.g. what is the point of teachers, especially in technical education? It is crucial to redefine human roles. We need to reconfigure the role of people and think in broader terms. For instance, why do we have a shortage of teachers, and can we really replace them with AI?

This, like everything regarding the transformation of education and learning, requires a strategic approach. It is not the responsibility of a few, but a systemic issue that asks everyone to take a critical perspective and to redefine the elements of institutional structures.

Is AI just another tool, or does it redefine the essence of competence itself?

This is the second of our interviews with experts on AI in education for the AI Pioneers project. The interview is with Ilkka Tuomi. Ilkka Tuomi is the Founder and Chief Scientist at Meaning Processing Ltd, an independent public research organization located in Helsinki, Finland. He previously worked at the European Commission's Joint Research Centre (JRC), Institute for Prospective Technological Studies, Seville, Spain. In 2020 he produced a background report for the European Parliament on 'The use of Artificial Intelligence (AI) in education' and has recently produced a study, 'On the Futures of Technology in Education: Emerging Trends and Policy Implications', published as a JRC Science for Policy Report. He writes and comments regularly on AI on LinkedIn.

[Q1] Can you tell us about the motivation behind your recent publication for the EC Joint Research Centre and the future of technologies in learning?

[A1] My recent publication for the JRC was motivated by my curiosity about the future of learning and the rapidly changing technology landscape. I began by asking which technologies would be essential for policy considerations over the next decade. From this, I compiled a list of technologies that seemed promising for initial discussions. In the process, it became clear that a fundamentally new infrastructure for knowing and learning is emerging. We call this “the Next Internet” in the report. My goal was to both initiate a conversation and delve into the connections between these emerging technologies and new educational models. More broadly, I was interested in how these advancements might transform the education system itself. An essential part of my research also revolved around the evolving dynamics of knowledge production and the importance of innovation in the knowledge society, and the implications this has for education. For instance, the emerging sixth-generation networks offer intriguing sociological and cognitive perspectives, and even bear on the impact of AI on learning.

[Q2] How do new cognitive tools influence our understanding of learning?

[A2] These cognitive tools aren't just emerging as solutions to automate current practices. They delve much deeper, challenging our very understanding of what learning means and how it occurs. My perspective on this is shaped by my background in both AI and learning theory. I approach this topic from both a sociological viewpoint and in terms of how digital transformations impact society as a whole.

[Q3] Could you share some of your background and experiences in the field of AI?

[A3] When I was younger, I was deeply involved in neural networks research and even co-authored a book on the philosophy of AI back in 1989. Around this time, I joined the Nokia Research Center. Initially, I worked with knowledge-based systems and expert systems, in other words the good-old-fashioned AI. Over time, I transitioned towards human-computer mediated interaction and knowledge management. The latter is, of course, very much about learning and knowledge creation. While the buzz around AI is louder than ever today, I find a dearth of profound discussions on the topic. There's a pressing need for a deeper, more thoughtful debate.

[Q4] What impact do you foresee AI having on vocational education?

[A4] AI's impact on vocational education is twofold. Firstly, we're still uncertain about how AI will reshape vocations and the job market. However, it's evident that the essence of vocational training is undergoing change. Technologies, especially generative AI and other machine learning methodologies, will dramatically influence occupational structures and content. This will inevitably change what people learn. Much of what's taught in vocational schools today might become obsolete or require significant modifications. Many educators are concerned that the skills and knowledge they impart today may become irrelevant in just five years. On the other hand, AI will also change how we learn.

[Q5] How can these technologies be integrated into the educational process?

[A5] These technologies offer immense potential for educational applications. Already, there are tools that enable a generative AI system to process, for instance, technical handbooks and repair manuals. With this knowledge, the AI can then answer domain-specific queries, providing up-to-date information about tools and technologies on demand. Consider a trainee in the construction industry; they could access building schematics through AI without having to study them exhaustively. Multimodal AI interfaces could allow them to photograph an unfamiliar object and get guidance on its use. Such an application can be used in fields like automotive repair, where a mechanic can photograph a fault and receive advice on necessary parts and repair procedures. These tools not only aid in teaching but can also be directly implemented in professional settings. Such applications particularly resonate with vocational education, transforming the very core of professional knowledge and identity.

In today's rapidly evolving digital age, vocational education stands at a unique crossroads. At its core, vocational education is profoundly hands-on and concrete, focusing not on abstract knowledge but on tangible skills and real-world applications. It's about doing, making, and creating. And this is where multimodal Generative AI now comes into play.

Generative AI has the potential to integrate the concrete world with the abstract realm of digital information. Real-world objects and practical training exercises can be complemented by augmented and virtual reality environments powered by AI. We're on the brink of a transformative shift where AI will not just assist but redefine vocational training.

Furthermore, the economic implications of AI in this sphere are revolutionary. In the past, creating detailed digital representations of complex machinery, like airplanes, was a costly and time-consuming endeavor. Now, with Generative AI, these models can be produced with increased efficiency and reduced costs. Whether it's for pilot training or for a mechanic understanding an engine's intricate details, AI radically simplifies and economizes the process.

[Q6] Do we need to redefine what we mean by competence?

[A6] Traditionally, competence has been perceived as an individual's capability to perform tasks and achieve goals. It's often broken down into knowledge, skills, and attitudes. Education has historically focused on what I have called the epistemic competence components. The move towards “21st century skills and competences” is fundamentally about a shift towards behavioral competence components that include aptitudes, motives, and personality traits ranging from creativity to social capabilities.

However, an essential nuance often overlooked in our understanding of competence is the external environment. For instance, a highly skilled brain surgeon is only as competent as the tools and infrastructure available to him. It's not just about what resides in the individual's mind but also about the societal structures, technological tools, and the overarching environment in which they operate.

Reflecting on education and technology, the narrative becomes even more intricate. An educator's competence cannot be solely gauged by their ability to use digital tools. The broader context—whether a school has the required digital infrastructure or the societal norms and regulations around technology use—plays a pivotal role. Emphasizing technology for technology's sake can sometimes be counterproductive. The question arises: is AI just another tool, or does it redefine the essence of competence itself?

[Q7] What are the major challenges of AI?

[A7] Looking back, one can find parallels in the challenges faced by earlier technological innovations. My experience in the 1990s at Nokia serves as a poignant example. While AI was once viewed as a magic bullet solution, it soon became evident that the challenges in organizations were as much social as they were technological.

Communication is the heart of learning and innovation. It's not merely about making the right decisions or processing vast amounts of data. Instead, it's about the rich tapestry of human interactions that shape ideas, beliefs, and knowledge. The introduction of new technologies often disrupts existing knowledge structures and requires substantial social adaptation. The process, thus, becomes more about managing change and facilitating communication.

[Q8] What are the implications of AI for agency?

[A8] Humans have always externalized specific cognitive tasks to tools and technologies around them. In this light, AI doesn't stand as a looming threat but a natural progression, a tool that could enhance human cognition beyond our current boundaries. But AI is also different. Its increasing human-like interactivity and capabilities challenge our traditional, anthropocentric views on agency. In fact, one key message in our JRC report was that we need to understand better how agency is distributed in learning processes when AI is used.

Innovations like AI don't just supplement our existing reality—they redefine it. Grasping this intricate dance between societal evolution and our shifting reality is essential to fathom AI's transformative potential.

[Q9] How will AI shape the future of education?

[A9] AI's purpose in education should be to enhance human capabilities. This enhancement isn't limited to just an individual's cognitive functions; it spans the social and behavioral realms too. In contrast to the post-industrial era, when computers were increasingly used to automate manual and knowledge work, AI and the emerging next Internet are now fusing the material world and its digital representations into an actionable reality. This is something we have not seen before. The material basis of social and cultural production is changing. As a result, the nature of knowing is changing as well. My claim has been that, in such a world, education must reconceptualize its social objectives and functions. The development of human agency might well be the fundamental objective of education in this emerging world. We need to learn, not only how to do things, but also what to do and why. This may, of course, also require rethinking the futures of vocational education and training.


AI: How do we make explainability open to practitioners and practice?

Over the last six or so weeks I have been doing a series of interviews about AI in education for the AI Pioneers project. One issue was who to ask. Just about everyone interested in the future use of technology for education has something to say about AI right now. I stumbled towards two main criteria. Firstly, to ask people I had worked with before and whose opinions I valued. And second, I wanted to include people who had worked on continuing professional development for teachers and trainers. At the end of the day, the interviews, together with a survey, form the main part of a Work Package in the AI Pioneers project looking at the competences required by teachers and trainers for working with AI, with an objective of extending the EU DigCompEdu framework.

This week I am going to publish the (edited) transcripts of four of the interviews. I will follow this up next week with comments on some of what I think are the major issues, together with a podcast where you can listen to ideas from those interviewed in their own voice.

The first interview is with Helen Beetham.

Helen Beetham is an experienced consultant, researcher and educator, based in the UK, and working mainly in the field of digital education in the university sector. Publications include: Rethinking pedagogy for a digital age (Routledge 2006, 2010 and 2019, with Rhona Sharpe), Rethinking learning for a digital age, numerous book chapters and peer reviewed academic papers, including recently an edited Special issue of Learning, Media and Technology (2022). Current research centres on critical pedagogies of technology, and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

She has advised global universities and international bodies on their digital education strategies, and led invited workshops at over 40 universities around the world as well as working on the development of DigCompEdu. Her Digital Capabilities framework is widely used in UK Education, in Health Education, and in other national education systems.

Helen went to university to study AI! She is currently writing up her research into digital criticality in education and writing a substack, 'Imperfect Offerings', focused on the challenges of Generative AI.

Digital Literacy, Capabilities and Ethics

Helen explained that her work on digital literacy was based on a Capabilities Framework intended as a “steady state” that could be updated to address changing technologies. She said that, “even more than most, teachers and trainers do not have digital capabilities”, and that technologies have to be available without being subjected to big technology companies. They need online materials and learning routes and pathways, not just imposed from above. Who can build Foundation Models, she asked, given their complexity and cost? There have to be choices for colleges and collectives. Education needs its own Language Models, Foundation Models and data. AI cannot be ethical if there is no choice.

This means we need the capability in the education sector to build ethical versions of Foundation Models. Although she has always considered pedagogy to come first in the use of technology for teaching and learning, today we need technical models within the education sector. We need to stop the brain drain from the sector and mobilise the community for development. We can reduce the risk of AI through using open development models.

Helen asked what it means to be ethical. She pointed to other developments in education, such as the campaign to decolonize education.

Conversations and Practice

The contribution of Frameworks like the Capability Framework or DigCompEdu is that they lead to conversations, and those conversations lead to changing practice. Bodies like Jisc and the European Union create initiatives and spaces for those conversations.

Explainability

Can we open the black box around Generative AI, she asked. Explainability is a good goal, but how do we make explainability open to practitioners and practice? She has been trying to develop explainability through the labour situation and the restructuring of employment through AI.

A further need for explainability relates to the meaning of words and concepts like learning, intelligence and training. All the models claim to use these processes, but there is a need to explain just what they mean in developing Foundation Models and how such processes are applied in AI.

Wikipedia, weightings and AI

In a wide-ranging interview, another issue was the use of Wikipedia in large language models. How were weightings derived for data from Wikipedia, and was Wikipedia in fact over-influential in the language models? What should Wikipedia do now – should it develop its own AI-powered front end?

Future of employment

Looking at the future of employment, it seems likely that fewer skills may be needed to do the same things that were undertaken by skilled workers prior to AI. Yet universities are assuming that they can train students to take up high-level roles that GPT cannot touch. In fact it is these roles that automation is reducing, with increasing impact on the professional middle classes. It seems more likely that GPT automation will have less effect on technically skilled work, especially those tasks which require unpredictable activities in less certain contexts – jobs that are based on vocational education and training.

You can subscribe to Helen Beetham's substack, 'Imperfect Offerings' at https://helenbeetham.substack.com/.
