UNESCO AI Competency Framework for Teachers

Last week's UNESCO Digital Learning conference attracted attendees from all over the world, along with significant press and social media interest. Much of the focus was on AI and education, especially around UNESCO's publication of what they say is the first-ever global Guidance on Generative AI in Education and Research, designed to address the disruptions caused by Generative AI technologies. A recent UNESCO global survey of over 450 schools and universities showed that fewer than 10% of them had institutional policies and/or formal guidance concerning the use of generative AI applications, largely due to the absence of national regulations. The UNESCO Guidance sets out "seven key steps governments should take to regulate Generative AI and establish policy frameworks for its ethical use in education and research, including through the adoption of global, regional or national data protection and privacy standards. It also sets an age limit of 13 for the use of AI tools in the classroom and calls for teacher training on this subject."

Perhaps more significant for those of us working on competences for teachers and trainers in using AI for teaching and learning (as in the AI Pioneers European project) was the publication of the UNESCO AI Competency Frameworks for Teachers and School Students. In a draft discussion document they say the "AI CFT responds to the stated gap in knowledge and experience globally and offers initial guidance on how teachers can be prepared for a growing AI-powered education system." They go on to explain:
The AI CFT is targeted at a wide-ranging teacher community, including pre-service and in-service teachers, teacher educators and trainers in formal, non-formal education institutions, policymakers, officials and staff involved in teacher professional learning ecosystems from early childhood development, basic education, to higher and tertiary education.... The purpose of the AI CFT is to provide an inclusive framework that can guide teachers, teaching communities and the teacher education systems worldwide to leverage the educational affordances of AI, and develop the critical agency, knowledge, skills, attitudes and values needed to manage the risks and threats associated with AI. It promotes the responsible, ethical, equitable and inclusive design and use of AI in education.
The draft discussion document provides a diagram of a High-level Structure of the proposed AI Competency Framework for Teachers.
Further diagrams provide progression routes and more detailed contents for the Framework. The main criticism on social media was not so much of the content of the Framework, but that the Framework is based on Bloom's taxonomy, with some asserting that the taxonomy is outdated and others raising doubts as to whether teachers would be able to follow an orderly progression route around AI. UNESCO has asked for feedback on both the Framework for Teachers and the Framework for Students via an online form.

What will happen to jobs with the rise and rise of Generative AI

Photo by Xavier von Erlach on Unsplash

OK, where to start? First, what is Generative AI? It is the posh term for things like ChatGPT from OpenAI or Bard from Google. These Generative AIs, based on Large Language Models, are fast being integrated into all kinds of applications, starting with the chatbot built into Microsoft's Bing search engine and DALL-E, just one of many applications generating images from text or chat descriptions.

Predicting what will happen with jobs is a tricky business. Jobs have been threatened by successive waves of technology, yet in general the overall effect on employment appears to have been less than was predicted. Of course there was a vast shift in employment with the advent of mechanization in agriculture, but that took place around the end of the 19th century, at least in some countries. And it's pretty easy to find jobs that have disappeared in recent times - for instance, employment in video shops. But in general it appears that disruption has been less than predicted in various surveys and reports. Technology has been used to increase productivity - for example, self-checkouts and automated stock management in shops - or to complement working processes and tasks rather than substitute for workers, while also generating new jobs working with the technology.

But what is going to happen this time round? There is all sorts of prediction and speculation - not helped by the fact that no one quite knows what Generative AI is capable of, and it is even harder to say what it will be able to do in the very near future. Bill Gates (the co-founder of Microsoft) has said the development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. There is too much press and media speculation to even sum up the general reaction to the release of these new AI models and applications, although Stephen Downes is making a valiant attempt in his OLDaily newsletter. Personally I enjoyed UK restaurant critic Jay Rayner's account in the Guardian newspaper of when he asked ChatGPT to write a restaurant review in his own inimitable style. Of course, along with concerns over the impact on employment and jobs, there is much concern over the ethical implications of the new AI models, although it is worth noting that Ilkka Tuomi, writing on LinkedIn (his posts are well worth following), has noted that the EU has been an early mover in policy and regulation. Ilkka also, while noting that education (and teaching) is more than just knowledge transmission, says "dialogue and learning by teaching are very powerful pedagogical approaches and generative AI can be used in many different ways in learning and education." He concludes by saying: "This really could have a transformative impact."

Anyway, back to the more general impact on jobs, which is an issue for the new EU AI Pioneers project, which focuses on the impact on Vocational Education and Training and Adult Education. Last weekend saw the release of a report by Goldman Sachs predicting that as many as 300 million jobs could be affected by generative AI and that the labor market could face significant disruption. However, they suggest that "most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI". In the US they estimate that 7% of jobs could be replaced by AI, with 63% being complemented by AI and 30% being unaffected by it. Perhaps one of the reasons for so much concern is that this wave of automation seems most likely to impact skilled work, with office and administrative support positions, says Goldman Sachs, at the greatest risk of task replacement (46%), followed by legal positions (44%) and architecture and engineering jobs (37%).

What I found most interesting from the full report (rather than the press summaries) is the methodology. The report includes a quite detailed description. It says:

Generative AI’s ability to 1) generate new content that is indistinguishable from human-created output and 2) break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects.

The report is based on "data from the O*NET database on the task content of over 900 occupations in the US (and later extend to over 2000 occupations in the European ESCO database) to estimate the share of total work exposed to labor-saving automation by AI by occupation and industry." They assume that AI is capable of completing tasks up to a difficulty of 4 on the 7-point O*NET "level" scale, and "then take an importance- and complexity-weighted average of essential work tasks for each occupation and estimate the share of each occupation's total workload that AI has the potential to replace." They "further assume that occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI."

What are the implications for Vocational Education and Training and Adult Education? It seems clear that a very significant number of workers are going to need some form of training or professional development - at a general level for working with AI, and at a more specific level for undertaking new work tasks with AI. There is little to suggest that present education and training systems in Europe can meet these needs, even if we expect a ramping up of online provision. The EU's position seems to be to push the development of microcredentials, which, according to the EU's Cedefop agency, "are seen to be fit for purposes such as addressing the needs of the labour market, lifelong learning, upskilling and reskilling, recognising prior learning, and widening access to a greater variety of learners." Yet in their recent report, they say that:

"Microcredentials tend to be a flexible, demand-driven response to the need for skills in the labour market, but they can lack the same trust and recognition enjoyed by full qualifications. In terms of whether and how they might be accommodated within qualification systems, they can pose important questions about how to guarantee their value and currency without undermining both their own flexibility and the stability and dependability of established qualifications."

The need for new skills for AI poses questions about how curricula can be adapted and updated faster than has traditionally been done. It also poses major questions for institutions about adapting course provision to new skill needs at local and regional as well as national level. And of course there are major challenges for the skills and competences of teachers and trainers, who, the AI and VET project found, were generally receptive to embracing AI for teaching and learning as well as new curricula content, but felt the need for more support and professional training to update their own skills and knowledge (and this was before the launch of Generative AI models).

All in all, there is a lot to think about here.

Public values are key to efficient education and research

For those of us who have been working on AI in Education, it is a bit of a strange time. On the one hand, it is no longer difficult to interest policy makers, managers or teachers and trainers in AI. But on the other hand, at the moment AI seems to be conflated with the hype around ChatGPT. As one senior policy person said to me yesterday: "I hadn't even heard of Generative AI models until two weeks ago."

And of course there's a lot more happening or about to happen, not just on the AI side but in technology developments and innovation more generally, that is likely to impact on education. So much, in fact, that it is hard to keep up. But I think it is important to keep up and not just leave the developing technology to the tech researchers. And that is why I am ultra impressed with the new publication from the Netherlands SURF network - 'Tech Trends 2023'.

In the introduction they say:

This trend report aims to help us understand the technological developments that are going on around us, to make sense of our observations, and to inspire. We have chosen the technology perspective to provide an overview of signals and trends, and to show some examples of how the technology is evolving.

SURF scanned multiple trend reports and market intelligence services to identify the big technology themes. They continue:

We identified some major themes: Extended Realities, Quantum, Artificial intelligence, Edge, Network, and advanced computing. We believe these themes cover the major technological developments that are relevant to research and education in the coming years.

But what I particularly like is that, for each trend, there is a link to public values and to a readiness level as well. The values are taken from the diagram above. As SURF say, "public values are key to efficient education and research."

ChatGPT and Assessment

Photo by John Schnobrich on Unsplash

In the last few weeks the discussions about technology for education and learning have been dominated by the impact of GPT-3 on the future of education – a discussion which Alexandra Mihai, in a blog post entitled 'Let's get off the fear carousel', characterises as "hysteria".

"The way I see it," she says, "academia's response to ChatGPT is more about academic culture than about the tool itself." As she points out, AI tools are not new and are already in use in a wide range of applications commonly used in education. But probably the most concern, or even panic, about ChatGPT is in relation to assessment.

Alexandra draws attention to 7 things that the current debate reveals about our academic culture. Although she is focused on Higher Education, much the same applies to Vocational Education and Training, though I think that many teachers and trainers in VET may be more open to AI, given how it already plays a considerable role in the jobs vocational students are being trained for.

Her 7 things are:

  • Lots of pressure/ high workloads: regardless of our positions, everyone seems to be under a great amount of pressure to perform
  • Non-transparent procedures: university administration is very often a black box with missing or inefficient communication channels
  • Lack of trust in students: this very harmful narrative is unfortunately a premise for many educators, not entirely (or not always) out of bad will but rather stemming from a teacher-centred paradigm which emphasises the idea of control.
  • Stale quality assurance (QA) policies: quality assurance in education is a complex mix of many factors (including faculty professional development, technology integration, academic integrity policies, to name just the more relevant ones for the current debate)
  • Inertia: the biggest enemy, in her opinion. Responding to change in a timely and efficient manner is not one of the strong points of HE institutions.
  • Technological determinism: the only thing that is, she feels, equally if not more dangerous than banning technology is thinking it can solve all problems.

Alexandra wants us to "take a moment to actually talk to and really listen to our students". She says: "All this will help us understand them better and design learning experiences that make sense to them. Not necessarily assignments where they cannot cheat, but activities and assignments they genuinely want to engage in because they see them as relevant for their present and their future."

In an earlier blog post she invites us to reflect on two questions.

Firstly, how do you balance three assessment purposes: students' expertise development, backward design and constructive alignment, and feasibility for students, teachers and the organisation?

Secondly, how do you take into account the three principles for optimally balancing different assessment purposes, in order to guide students towards professional independence?

There is no shortage of resources on ChatGPT in education: a list which is growing by the day. Here are five that Alexandra suggests:

  • Assessment in the age of artificial intelligence – great article by Zachari Swiecki et al., with a lot of insights into how we can rethink assessment in a meaningful way;
  • Chatting and Cheating. Ensuring academic integrity in the era of ChatGPT – interesting read by Debby Cotton et al., suggesting a range of strategies that universities can adopt to ensure these tools are used ethically and responsibly;
  • Academic Integrity? – insightful reflection by Matthew Cheney on the concept of academic integrity and its ethical implications;
  • Critical AI: Adapting college writing for the age of language models such as ChatGPT: Some next steps for educators, by Anna Mills and Lauren Goodlad – a useful collection of practices and resources on language models, text generators and AI tools;
  • ChatGPT Advice Academics Can Use Now – very useful advice from various academics, compiled by Susan D'Agostino, on how to harness the potential and avert the risks of AI technology.

Data governance, management and infrastructure

Photo by Brooke Cagle on Unsplash

The big ed-tech news this week is the merger of Anthology, an educational management company, with Blackboard, which produces learning technology. But as Stephen Downes said: "It's funny, though - the more these companies grow and the wider their enterprise capabilities become, the less relevant they feel, to me at least, to educational technology and online learning."

And there is a revealing quote in an Inside Higher Ed article about the merger. They quote Bill Ballhaus, Blackboard's chairman, CEO and president, as saying the power of the combined company will flow from its ability to bring data from across the student life cycle to bear on student and institutional performance. "We're on the cusp of breaking down the data silos" that often exist between administrative and academic departments on campuses, Ballhaus said.

So is the new company really about educational technology, or is it in reality a data company? This raises many questions about who owns student data, about data privacy, and about how institutions manage data. A new UK Open Data Institute (ODI) Fellow Report, 'Data governance for online learning' by Janis Wong, explores the data governance considerations when working with online learning data, looking at how educational institutions should rethink the way they manage, protect and govern online learning data and personal data.

In a summary of the report, the ODI say:

The Covid-19 pandemic has increased the adoption of technology in education by higher education institutions in the UK. Although students are expected to return to in-person classes, online learning and the digitisation of the academic experience are here to stay. This includes the increased gathering, use and processing of digital data.

They go on to conclude:

Within online and hybrid learning, university management needs to consider how different forms of online learning data should be governed, from research data to teaching data to administration and the data processed by external platforms.

Online and hybrid learning needs to be inclusive and institutions have to address the benefits to, and concerns of, students and staff as the largest groups of stakeholders in delivering secure and safe academic experiences. This includes deciding what education technology platforms should be used to deliver, record and store online learning content, by comparing the merits of improving user experience against potential risks to vast data collection by third parties.

Online learning data governance needs to be considered holistically, with an understanding of how different stakeholders interact with each other’s data to create innovative, digital means of learning. When innovating for better online learning practices, institutions need to balance education innovation with the protection of student and staff personal data through data governance, management and infrastructure strategies.

The full report is available from the ODI web site.