Unesco AI Competency Framework

Unesco have released the latest version of their draft competency framework for teachers in the use of AI. Fengchun Miao says: "We expect that all teachers should be able to develop and apply the competencies under the first level of progression (acquisition) across all 5 aspects through pre-service teacher preparation programmes or in-service training." Unesco are continuing their consultation around the Framework. #AI #AIED

Searching for the AI bridge builders

We need to democratise access to AI but the language we use to talk about it is a barrier.

There is a lot of fear around AI advances, and this is perpetuated when only 'big tech' and academics have access to the tools, the theory and the conversations. For me this is a major theme in AI ethics right now. We can't talk about 'the black box around generative AI' as if everyone understands the concept. The same goes for 'language models', 'the dynamics of knowledge production' and 'neural networks'. I suspect I have already lost a large chunk of my friends and family, and we are only on the first paragraph.

We talk a lot about bias in the data; there's a great advert doing the rounds on social media at the moment in which an AI was prompted to draw Barbie dolls from around the world. Some of the results are a shocking reflection of our own stereotypes and cultural tropes, with German Barbie depicted in a Nazi uniform and African Barbie carrying a gun. AI may have created the images, but we supplied the data. It is an accessible depiction of bias, and we need more accessible depictions of AI concepts like it.

As academics, researchers and professionals, what we don't see so easily is the bias innate in our own use of language around AI. It is the same in all industries and academic circles, across all disciplines: we are so used to talking to each other that we become stuck in our bubble of understanding, of acronyms and concepts. What we need is a giant pin, and we need more AI Pioneers to bridge the gap between theory and practice. More people willing to stop and ask questions. More translators of AI speak. More people who are comfortable in both worlds, who do not feel alienated by the academic circles and equally do not alienate practitioners, who, let's face it, are the real experts here. It is the practitioners who will find innovative ways to teach with and about the tools, and, as with all previous ed-tech advances, it is the practitioners who will work out how to 'hack' the systems to fit their contexts. It is also the trainers who will be on the ground working with learners with poor digital literacy, trying to engage and enthuse them so they are not automated out of a job.

I'd like to think that my work, and the projects Pontydysgu are involved with, fill the gap nicely, providing introductory materials and creative ways to use AI tools. But a group of trainers I ran a workshop with recently reminded me of the need to slow down and take things back to basics.

When I first started out in edtech, I was the trainer-in-training. In one session billed as a 'hands-on practical introduction to e-learning', the instructor showed us how learners' work could be exhibited on a website. It was new and exciting, the dawn of Web 2.0, and everyone in the room was eager to learn how. But we were then left with the bamboozling task of "now build a website."

In my workshop, I heard the words "now use that to build a bot" escape my mouth and realised that the student had truly become the master.

We need to remember to put the scaffolding into place so as not to lose people over the edge, and that includes explaining ourselves clearly, or at least signposting people who can. To quote Einstein, "If you can't explain it simply, you don't understand it well enough." If you are one of those people, a gap-bridger, a mediator, an educator and also an AI enthusiast, I warmly invite you to join the AI Pioneers network. Use the contact form on our website to get in touch, join in the conversation on Mastodon (like Twitter but without the megalomania) or find us via LinkedIn.


Designing new social AI systems for education

UNESCO-UNEVOC/Ludi Yana under CC BY-NC-SA 4.0 IGO

I very much like the conclusion to Mike Sharples' paper, 'Towards social generative AI for education: theory, practices and ethics':

Designing new social AI systems for education requires more than fine tuning existing language models for educational purposes. It requires building GAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and experience with networked machine learning.

Context is key to how we implement AI in teaching and learning

Here is the latest in our series of interviews with educators about Artificial Intelligence.

About

Arunangsu Chatterjee is Professor of Digital Health and Education in the School of Medicine, Faculty of Medicine and Health at the University of Leeds. 

He is the Dean of Digital Transformation for the University, responsible for driving forward the delivery of the University's Digital Transformation strategy, with a particular focus on leading change programmes and projects in digital education, digital research and digital operations. He has academic responsibility for developing the relevant digital transformation programmes, securing academic buy-in to change initiatives, and leading the delivery of initiatives through project activity into business as usual. He works closely with project teams, professional services and academic Faculties and Schools to lead and support digital transformation initiatives. As Professor of Digital Health and Education he works with the UK National Health Service on developing a health competency framework.

Digital Transformation and Infrastructure

Educational institutions need to upgrade their infrastructure for researching and implementing AI, including provision of high-performance CPUs and GPUs for high-performance computing. Institutions also need to recruit software engineers. This is problematic given the high labour market demand for such engineers and the limited pay available in public institutions.

"It is critical that we improve the research infrastructure and use AI to join the dots." Arunangsu is aware that the cost of developing AI in areas with very high data needs, such as healthcare, may be too much for universities, and certainly for vocational and adult education. But he believes AI can be used to develop the infrastructure itself, for instance through developing business and research platforms and through analysing grant applications.

Implementation and Adoption

Arunangsu says that AI has reinforced the need for interdisciplinary networks.

Institutions should develop an AI roadmap with a bottom up and challenge-based approach. Partnerships are important especially at a regional level. The roadmap should be a collective plan with opportunities for everyone to buy in – including from different economic sectors.

Teacher and student roles

Banning AI in educational institutions is not helpful. We cannot stop students using it. We need to educate graduates in using AI. There are three key competences:

  • Tool awareness and selection
  • Prompt engineering
  • Tool chaining

We need training for staff as well as students in these competences.
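Of the three competences, "tool chaining" may be the least familiar: it simply means passing the output of one AI tool into the next to build a larger workflow. The sketch below is an illustration only, using stand-in Python functions rather than real AI services (the function names `summarise`, `translate` and `chain` are hypothetical, not from any particular library):

```python
# A minimal sketch of "tool chaining": each tool's output becomes the
# next tool's input. The tools here are stand-ins, not real AI services.

def summarise(text: str) -> str:
    """Stand-in for a summarisation tool: keep only the first sentence."""
    return text.split(".")[0] + "."

def translate(text: str, lang: str = "cy") -> str:
    """Stand-in for a translation tool: tag the text with a language code."""
    return f"[{lang}] {text}"

def chain(text: str, *tools):
    """Feed the text through each tool in turn."""
    for tool in tools:
        text = tool(text)
    return text

result = chain("AI literacy matters. So does scaffolding.", summarise, translate)
print(result)  # [cy] AI literacy matters.
```

In practice each step would call a real service (a summariser, a translator, an image generator), but the competence being taught is the same: knowing which tools to pick and how to connect them.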

Context is key to how we implement AI in teaching and learning. Course design needs to incorporate Explainable AI. We can use AI to mine curricula and find the gaps.

We can look at the context of course and curricula provision in a region and its social and economic role.

Ethical and Social Implications

Arunangsu is less optimistic about the impact of AI on jobs. While he is opposed to the proposed three-month moratorium on AI development, he sees a need for a slowdown and a moratorium on job losses from AI. In an educational context he sees a high risk that AI will replace learning and content designers. He believes employers should not be using AI to cut costs but rather to improve productivity and quality. "Intelligent automation needs care. We need a new welfare system and pay if we do not want to end with civil unrest. AI-led job cuts also pose a big health challenge."

Arunangsu drew attention to the newly released Leeds University Guidance to Staff on the use of Artificial Intelligence and in particular to the instruction not to use online AI detection tools. Instead, he said the University is looking at new forms of assessment.