What is the purpose of Vocational Education and Training?

Photo by Jeswin Thomas on Unsplash

Is Artificial Intelligence challenging us to rethink the purpose of Vocational Education and Training? Perhaps that is going too far, but there are signs of questions being asked. For the last twenty-five years or so there has been a tendency in most European countries to narrow the aims of VET, driven by an agenda of employability. Workers have become responsible for their own employability under the slogan of Lifelong Learning. Learning to learn has become a core skill for students and apprentices, not to broaden their education but rather to prepare them to update their skills and knowledge to safeguard their employability.

It wasn’t always so. The American philosopher, psychologist, and educational reformer John Dewey believed “the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good.” The overriding theme of Dewey's work was his profound belief in democracy, be it in politics, education, or communication and journalism and he considered participation, not representation, the essence of democracy.

Faced with the challenge of generative AI, not only to the agency and motivation of learners but to how knowledge is developed and shared within society, there is a growing understanding that a broader approach to curriculum and learning in Vocational Education and Training is necessary. This includes a more advanced definition of digital literacy to develop a critique of the outputs from Large Language Models. AI literacy is defined as the knowledge and skills necessary to understand, critically evaluate, and effectively use AI technologies (Long & Magerko, 2020), including understanding the capabilities and limitations of AI systems, recognising potential biases and ethical implications of AI-generated content, and developing critical thinking skills to evaluate AI-produced information.

UNESCO says its citizenship education, including the competence frameworks for teachers and for students, builds on peace and human rights principles, cultivating essential skills and values for responsible global citizens. It fosters criticality, creativity, and innovation, promoting a shared sense of humanity and commitment to peace, human rights, and sustainable development. Fengchun Miao of UNESCO has said that the AI competency framework for students proposes the term "AI society citizenship" and interprets it in several sections. Section 1.3 of the framework, AI Society Citizenship, says:

Students are expected to be able to build critical views on the impact of AI on human societies and expand their human-centred values to promoting the design and use of AI for inclusive and sustainable development. They should be able to solidify their civic values and the sense of social responsibility as a citizen in an AI society. Students are also expected to be able to reinforce their open-minded attitude and lifelong curiosity about learning and using AI to support self-actualisation in the AI era.

The Council of Europe says Vocational Education and Training is an integral part of the entire educational system and shares its broader aim of preparing learners not only for employment, but also for life as active citizens in democratic societies. Social dialogue and corporate social responsibility are seen as tools for democratising AI in work.

Renewing the democratic and civic mission of education underlines the importance of integrating Competences for Democratic Culture (CDC) in VET to promote quality citizenship education. This initiative aims to support VET systems in preparing learners not only for employment but also for active participation as citizens in culturally diverse democratic societies. By embedding CDC in learning processes in VET, the Council of Europe aims to ensure that VET learners acquire the necessary knowledge, skills, values and attitudes to participate fully in democratic life.

The Council of Europe Reference Framework of Competences for Democratic Culture and the UNESCO AI Competency Framework can provide a focus for a wider understanding of AI competences in VET, and a challenge as to how they can be implemented in practice.

Such an understanding can shape an educational landscape that leverages AI while safeguarding human agency, motivation, and ethics. As generative AI advances, continuous dialogue and investigation among all educational stakeholders are essential to ensure these technologies enhance learning outcomes and equip students for an AI-driven future.

References

Dewey, J. (1916). Democracy and Education: An Introduction to the Philosophy of Education. New York: Macmillan. https://archive.org/stream/democracyandedu00dewegoog#page/n6/mode/2up. Retrieved 4 May 2024.

Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.

UNESCO (2024) AI Competency Framework for Students, https://unesdoc.unesco.org/ark:/48223/pf0000391105

What are Learning Tools?

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY 4.0

There's an interesting post from Philippa Hardman in her newsletter today. Entitled Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education?, it asks how much, and how well, popular AI tools really support human learning and, in the process, disrupt education.
She created a simple evaluation rubric to explore five key research questions: 

1. Inclusion of Information

2. Exclusion of Information

3. [De]Emphasis of Information

4. Structure & Flow

5. Tone & Style

Philippa Hardman used her own research articles as the input material, which she fed into what she describes as the three big AI tools for learning:

  1. ChatGPT 4o
  2. Claude 3.5
  3. NotebookLM

She prompted each tool in turn to read the article carefully and summarise it, ensuring that it covered all key concepts and ideas so that she would get a thorough understanding of the article and research.

She provides a detailed table of the results for each of the three applications, and additionally for the NotebookLM podcast application, assessing the strengths and weaknesses of each. She says that "while generative AI tools undoubtedly enhance access to information, they also actively “intervene” in the information-sharing process, actively shaping the type and depth of information that we receive, as well as (thanks to changes in format and tone) its meaning."

She goes on to say:

While popular AI tools are helpful for summarising and simplifying information, when we start to dig into the detail of AI’s outputs we’re reminded that these tools are not objective; they actively “intervene” and shape the information that we consume in ways which could be argued to have a problematic impact on “learning”.

Another thing is also clear: tools like ChatGPT4o, Claude & Notebook are not yet comprehensive “learning tools” or “education apps”. To truly support human learning and deliver effective education, AI tools need to do more than provide access to information—they need to support learners intentionally through carefully selected and sequenced pedagogical stages.  

Her closing thoughts are about Redefining the “Learning” Process. She says:

It’s clear that AI tools like ChatGPT, Claude, and NotebookLM are incredibly valuable for making complex ideas more accessible; they excel in summarisation and simplification, which opens up access to knowledge and helps learners take the first step in their learning journey. However, these tools are not learning tools in the full sense of the term—at least not yet.

By labelling tools like ChatGPT 4o, Claude 3.5 & NotebookLM as “learning tools” we perpetuate the common misconception that “learning” is a process of disseminating and absorbing information. In reality, the process of learning is a deeply complex cognitive, social, emotional and psychological one, which exists over time and space and which must be designed and delivered with intention.

AI and Ed: pitfalls but encouraging signs

Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0

In August I became hopeful that the hype around Generative AI was beginning to die down, and thought we might get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."

He summarised five major AI pitfalls for education:

  1. Fictional hype on AI’s potentials in addressing real-world challenges
  2. Machine-centrism prevailing over human-centrism and machine agency undermining human agency
  3. Sidelining AI’s harmful impact on environment and ecosystems
  4. Covering up on the AI-driven wealth concentration and widened social inequality
  5. Downgrading AI competencies to operational skills bound to commercial AI platforms

UNESCO has published five guiding principles in their AI competency framework for students:
2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
2.2 Prioritizing competencies for human-centred interaction with AI
2.3 Steering the design and use of more climate-friendly AI
2.4 Promoting inclusivity in AI competency development
2.5 Facilitating transferable AI foundations for lifelong learning

And the Council of Europe is looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.

For the last year, I have been saying that the education sector itself needs to lead developments in AI applications for learning, in a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.

That said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in their training, trained on high-quality data for a specific application. Many of these models are being released as Open Source Software, and Open Source datasets are being released to train new language models. And there are some signs that the education community is itself beginning to develop applications.

AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:

  • Learn anything, anytime, anywhere on mobile devices or computers
  • Do so in almost any language of their choice
  • Engage in dynamic, open-ended conversations through interactive dialogue
  • Check their knowledge and skills on any topic
  • Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.

And the English Department for Education has invited tenders to develop an App for Assessment, based on data that they will supply.

I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.

AI in ED: Equity, XAI and learner agency

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The AI in education theme continues to gather momentum, resulting in a non-stop stream of journal articles, reports, newsletters, blogs and videos. However, while the stream is not diminishing, there seem to be some subtle changes of direction in the messages.

Firstly, despite many schools being wary of Generative AI, there is a growing realisation that students are going to use it anyway, and that the various apps claiming to check student work for AI simply don't work.

At the same time, there is an increasing focus on AI and pedagogy, perhaps linked to the increasing sophistication of Frontier Models for Gen AI, but also to the realisation that gimmicks like talking to an AI pretending to be someone famous from the past are just lame! This increased focus on pedagogy is also leading to pressure to involve students in the application of Gen AI for teaching and learning. And recently two ethical questions have emerged. The first is unequal access to AI applications and tools. Inside Higher Ed reports recent research from the Public Policy Institute of California on disparate access to digital devices and the internet for K-12 students in the nation's largest state public-school system. Put simply, they say, students who are already at an educational and digital disadvantage because of family income and first-generation constraints are becoming even more so every day as their peers embrace AI at high rates as a productivity tool, and they do not.

And while some tools will remain free, it appears that the most powerful and modern tools will increasingly come at a cost. The U.K. Jisc recently reported that access to a full suite of the most popular generative AI tools and education plug-ins currently available could cost about £1,000 (about $1,275) per year. For many students already accumulating student debt and managing the rising cost of living, paying more than $100 per month for competitive AI tools is simply not viable.

A second issue is motivation and agency for students using AI tools. It may be that the rush to gamification, inspired by apps like Duolingo, is wearing thin. Perhaps a more subtle and sustained approach is needed to motivate learners. That may increase the focus on learner agency, which in turn is being seen as linked to Explainable AI (or XAI for short). Research around Learning Analytics has pointed to the importance of students understanding the use and purpose of LA, but also being able to understand why the analytics turn out as they do. And research into Personal Learning Environments has long shown the importance of learner agency in developing meta-cognition in learning. With the development of many applications for personalised learning programmes, it becomes important that learners are able to understand the reasons for their own individual learning pathways and, if necessary, challenge them.

While earlier debates about the ethics of AI in education largely focused on technologies, the newer debates focus more on practices in teaching and learning using AI.

Homework Apocalypse?

Catherine Breslin & Tania Duarte / Better Images of AI / AI silicon clouds collage / CC-BY 4.0

November marks two years since the release of OpenAI's ChatGPT large language model chatbot. Since then AI, or more specifically Generative AI, has dominated the discourse over the future of education. And of course it has spawned hundreds of projects, resulting in an increasing torrent of research results. Yet on one critical issue - does the use of AI improve learning? - there appears to be little consensus. This is probably because we have no good ways of measuring learning. Instead we use performance in tests and exams as a proxy for learning. And it's probably true to say that the debates over AI are turning up the heat on the use of such a proxy, just as they are on the essay as the dominant form of assessment in schools and universities.

Last week in his newsletter, One Useful Thing, Ethan Mollick talked about the use of AI, cheating and learning in an article entitled 'What comes after the Homework Apocalypse'. It is probably fair to say Ethan is a big fan of AI in education.

To be clear, AI is not the root cause of cheating. Cheating happens because schoolwork is hard and high stakes. And schoolwork is hard and high stakes because learning is not always fun and forms of extrinsic motivation, like grades, are often required to get people to learn. People are exquisitely good at figuring out ways to avoid things they don’t like to do, and, as a major new analysis shows, most people don’t like mental effort. So, they delegate some of that effort to the AI. In general, I am in favor of delegating tasks to AI (the subject of my new class on MasterClass), but education is different - the effort is the point.

He postulated that a fall in grades achieved by students in the USA between 2008 and 2017 had been caused by the increasing use of the Internet for homework: students were simply copying homework answers. And in an experiment in a high school in Turkey, students using GPT-4 got higher grades for homework but lower final exam grades. But giving students ChatGPT with a basic tutor prompt, instead of having them use ChatGPT on their own, boosted homework scores without lowering final exam grades.

Ethan says this shows "we need to center teachers in the process of using AI, rather than just leaving AI to students (or to those who dream of replacing teachers entirely). We know that almost three-quarters of teachers are already using AI for work, but we have just started to learn the most effective ways for teachers to use AI."

He remains convinced of the value of Generative AI in education. The question now, he says, "is not whether AI will change education, but how we will shape that change to create a more effective, equitable, and engaging learning environment for all."