What is the purpose of Vocational Education and Training?

Photo by Jeswin Thomas on Unsplash

Is Artificial Intelligence challenging us to rethink the purpose of Vocational Education and Training? Perhaps that is going too far, but there are signs of questions being asked. For the last twenty-five years or so there has been a tendency in most European countries towards a narrowing of the aims of VET, driven by an agenda of employability. Workers have become responsible for their own employability under the slogan of Lifelong Learning. Learning to learn has become a core skill for students and apprentices, not to broaden their education but rather to prepare them to update their skills and knowledge to safeguard their employability.

It wasn’t always so. The American philosopher, psychologist, and educational reformer John Dewey believed “the purpose of education should not revolve around the acquisition of a pre-determined set of skills, but rather the realization of one's full potential and the ability to use those skills for the greater good.” The overriding theme of Dewey's work was his profound belief in democracy, be it in politics, education, or communication and journalism, and he considered participation, not representation, the essence of democracy.

Faced with the challenge of generative AI, not only to the agency and motivation of learners but also to how knowledge is developed and shared within society, there is a growing understanding that a broader approach to curriculum and learning in Vocational Education and Training is necessary. This includes a more advanced definition of digital literacy that supports a critique of the outputs of Large Language Models. AI literacy is defined as the knowledge and skills necessary to understand, critically evaluate, and effectively use AI technologies (Long & Magerko, 2020), including understanding the capabilities and limitations of AI systems, recognising potential biases and ethical implications of AI-generated content, and developing critical thinking skills to evaluate AI-produced information.

UNESCO says its citizenship education, including the competence frameworks for teachers and for students, builds on peace and human rights principles, cultivating essential skills and values for responsible global citizens. It fosters criticality, creativity, and innovation, promoting a shared sense of humanity and commitment to peace, human rights, and sustainable development. Fengchun Miao from UNESCO has said that the AI competency framework for students proposes the term "AI society citizenship" and interprets it in multiple sections. Section 1.3 of the Framework, AI Society Citizenship, says:

Students are expected to be able to build critical views on the impact of AI on human societies and expand their human centred values to promoting the design and use of AI for inclusive and sustainable development. They should be able to solidify their civic values and the sense of social responsibility as a citizen in an AI society. Students are also expected to be able to reinforce their open minded attitude and lifelong curiosity about learning and using AI to support self actualisation in the AI era.

The Council of Europe says Vocational Education and Training is an integral part of the entire educational system and shares its broader aim of preparing learners not only for employment, but also for life as active citizens in democratic societies. Social dialogue and corporate social responsibility are seen as tools for democratising AI in work.

Renewing the democratic and civic mission of education underlines the importance of integrating Competences for Democratic Culture (CDC) in VET to promote quality citizenship education. This initiative aims to support VET systems in preparing learners not only for employment but also for active participation as citizens in culturally diverse democratic societies. By embedding CDC in learning processes in VET, the Council of Europe aims to ensure that VET learners acquire the necessary knowledge, skills, values and attitudes to participate fully in democratic life.

The Council of Europe Reference Framework for Democratic Culture and the UNESCO AI Competency Framework can provide a focus for a wider understanding of AI competences in VET and a challenge for how they can be implemented in practice.

Such an understanding can shape an educational landscape that leverages AI while safeguarding human agency, motivation, and ethics. As generative AI advances, continuous dialogue and investigation among all educational stakeholders are essential to ensure these technologies enhance learning outcomes and equip students for an AI-driven future.

References

Dewey, J. (1916) Democracy and Education: an introduction to the philosophy of education, New York: Macmillan. https://archive.org/stream/democracyandedu00dewegoog#page/n6/mode/2up. Retrieved 4 May 2024 

Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

UNESCO (2024) AI Competency Framework for Students, https://unesdoc.unesco.org/ark:/48223/pf0000391105

AI and Critical Hype Studies

Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

Whatever you think about Artificial Intelligence, it cannot be denied that it has generated, and continues to generate, a lot of hype. And in the AI and education field it feels as if the hype is still advancing, perhaps because education is seen as a massive market for shiny new (and expensive) toys. So I liked this recent post on LinkedIn by Andreu Belsunces Gonçalves about Critical Hype Studies.

A panel at the EASST/4S conference aimed, he says, “at outlining what the field of critical hype studies could be. Although we're still in the early stages, here are some tentative insights from four days of intense, varied, and lengthy discussions about hype, technology, futures, fiction, narratives, rhetorics, and finances in Amsterdam.”

The Politics of Hype is defined as follows:
a. Hype is an interface between experts and non-experts, leaving non-experts vulnerable to oversimplified, overpromising expert statements.
b. The capacity to produce and disseminate hype is unevenly distributed and generally benefits the already privileged—wealthy, educated, male, Western (i.e., economically powerful countries like the USA or Germany take greater advantage of AI hype).

The post goes on to summarise the conversations and presentations of the two panels reflecting on critical hype studies and hype in the promissory economy.

What are Learning Tools?

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY 4.0

There's an interesting post from Philippa Hardman in her newsletter today. Entitled Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education?, it asks how much, and how well, popular AI tools really support human learning and, in the process, disrupt education.

She created a simple evaluation rubric to explore five key research questions:

1. Inclusion of Information

2. Exclusion of Information

3. [De]Emphasis of Information

4. Structure & Flow

5. Tone & Style

Philippa Hardman used her own research articles as the input material, which she fed into what she says are widely considered the three big AI tools for learning:

  1. ChatGPT 4o
  2. Claude 3.5
  3. NotebookLM

She prompted each tool in turn to read the article carefully and summarise it, ensuring that it covered all key concepts and ideas so that she would get a thorough understanding of the article and research.
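For readers who want to try something similar with their own material, the sketch below shows one way the exercise could be set up in Python. It is not Hardman's code: the prompt wording, the ask_model placeholder, and the article filename are illustrative assumptions, and in practice the summaries could equally be produced by hand in each tool's own interface before being scored against the five rubric questions.

```python
# A minimal sketch (not Hardman's actual method) of the comparison workflow:
# send the same summarisation prompt to each tool and collect the outputs
# for manual scoring against the five rubric questions.

RUBRIC = [
    "Inclusion of information",
    "Exclusion of information",
    "[De]Emphasis of information",
    "Structure & flow",
    "Tone & style",
]

TOOLS = ["ChatGPT 4o", "Claude 3.5", "NotebookLM"]

# Paraphrase of the kind of prompt described above; the exact wording is an assumption.
PROMPT_TEMPLATE = (
    "Read the following article carefully and summarise it, ensuring that "
    "you cover all key concepts and ideas so that I get a thorough "
    "understanding of the article and research.\n\n{article}"
)


def ask_model(tool: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `tool` and return its reply.

    In practice this would call the relevant API or be done by hand in each
    tool's interface; here it returns a stub so the script runs end to end.
    """
    return f"[summary produced by {tool} would appear here]"


def collect_summaries(article_text: str) -> dict[str, str]:
    """Return one summary per tool for later rubric-based comparison."""
    prompt = PROMPT_TEMPLATE.format(article=article_text)
    return {tool: ask_model(tool, prompt) for tool in TOOLS}


if __name__ == "__main__":
    # "my_research_article.txt" is a hypothetical input file.
    article = open("my_research_article.txt", encoding="utf-8").read()
    for tool, summary in collect_summaries(article).items():
        print(f"--- {tool} ---\n{summary}\n")
    print("Score each summary by hand against the rubric:", RUBRIC)
```

The point of keeping the prompt identical across tools is that any differences in what is included, excluded, emphasised, restructured, or re-toned can then be attributed to the tools themselves rather than to the instructions they were given.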

She provides a detailed table of the results for each of the three applications, and additionally for the NotebookLM podcast feature, assessing the strengths and weaknesses of each. She says that "while generative AI tools undoubtedly enhance access to information, they also actively “intervene” in the information-sharing process, actively shaping the type and depth of information that we receive, as well as (thanks to changes in format and tone) its meaning."

She goes on to say:

While popular AI tools are helpful for summarising and simplifying information, when we start to dig into the detail of AI’s outputs we’re reminded that these tools are not objective; they actively “intervene” and shape the information that we consume in ways which could be argued to have a problematic impact on “learning”.

Another thing is also clear: tools like ChatGPT4o, Claude & Notebook are not yet comprehensive “learning tools” or “education apps”. To truly support human learning and deliver effective education, AI tools need to do more than provide access to information—they need to support learners intentionally through carefully selected and sequenced pedagogical stages.  

Her closing thoughts are about Redefining the “Learning” Process. She says:

It’s clear that AI tools like ChatGPT, Claude, and NotebookLM are incredibly valuable for making complex ideas more accessible; they excel in summarisation and simplification, which opens up access to knowledge and helps learners take the first step in their learning journey. However, these tools are not learning tools in the full sense of the term—at least not yet.

By labelling tools like ChatGPT 4o, Claude 3.5 & NotebookLM as “learning tools” we perpetuate the common misconception that “learning” is a process of disseminating and absorbing information. In reality, the process of learning is a deeply complex cognitive, social, emotional and psychological one, which exists over time and space and which must be designed and delivered with intention.

AI and Motivation for Learning

Photo by Tim Mossholder on Unsplash

Here's the follow-up I promised in my last post about learners' and teachers' Agency and Gen AI.

Motivation plays a crucial role in the learning process. In contrast to behaviourist theories of learning, learners are increasingly seen as active participants in learning, leading to a focus on how learners make sense of and choose to engage with their learning environments (National Academies of Sciences, Engineering, and Medicine, 2018). Cognitive theories, for example, have focused on how learners set goals for learning and achievement and how they maintain and monitor their progress toward those goals. While earlier research focused largely on the classroom environment, newer research, especially following the emergency online learning turn during the Covid-19 pandemic, has looked at the online learning environment mediated by various forms of technology (Chiu, Lin and Lonka, 2021). Social interactions mediated by technology affect learning through their impacts on students’ goals, beliefs, affect, and actions (Kolhar, Kazi and Alameen, 2021).

“Motivation is also increasingly viewed as an emergent phenomenon, meaning it can develop over time and change as a result of one’s experiences with learning and other circumstances” (Chiu, Lin and Lonka, 2021). Research suggests, for example, that aspects of the learning environment can both trigger and sustain a student’s curiosity and interest in ways that support motivation and learning (Hidi and Renninger, 2006). Of course, the converse can also apply, with learning environments reducing motivation.

There is an increasing number of studies looking at how Generative AI affects learning and motivation. Yet many of these attempt to measure the effectiveness of learning and rely on achievement in assessment as a proxy for learning. With the exception of learning to program, there is limited evidence from Vocational Education and Training, despite VET being largely learning-outcomes based. Measuring effectiveness and motivation in VET is further complicated by the many different models of VET provision.

Neither is there any consensus about the efficacy of AI for learning. In his OLDaily newsletter, Stephen Downes (2024) discusses a LinkedIn post from Ethan Mollick stating "AI can help learning... when it isn't a crutch." Mollick cites three papers: first, AI Meets the Classroom: When Does ChatGPT Harm Learning?, which states "Using LLMs as personal tutors by asking them for explanations improves learning outcomes whereas excessively asking LLMs to generate solutions impairs learning." Second, Generative AI Can Harm Learning says "students attempt to use GPT-4 as a 'crutch' during practice problem sessions, and when successful, perform worse on their own", though "These negative learning effects are largely mitigated by the safeguards included in GPT Tutor." Third, Effective and Scalable Math Support says "chat-based tutoring solutions leveraging AI could offer a cost-effective and operationally efficient approach to enhancing learning outcomes for millions of students globally." Downes concludes: “All these results are, at worst, mixed, and at best, show genuine promise in AI for improving learning.”

Of course, motivation is only one factor in improving learning. Motivation is generally divided between intrinsic and extrinsic motivation. Generative AI can potentially enhance intrinsic motivation by providing immediate feedback and adaptive challenges and by enabling more creative and open-ended projects that align with students' interests. It can also offer novel and engaging ways to interact with learning materials. But Artemova (2024) says “it has been demonstrated that students are primarily involved with learning activities for reasons other than epistemological curiosity or a desire to learn. Instead, they are motivated by the desire to interact with technology or to meet the expectations set by educational software.”

And while extrinsic motivation can be effective, over-reliance on AI-powered reward systems or gamification elements may lead to a focus on external rewards rather than the inherent value of learning. There is also a danger that students might become overly dependent on AI assistance, potentially undermining their confidence in their own abilities, and the ease of generating content with AI might raise questions about the authenticity of student work, potentially impacting intrinsic motivation. A further concern is that the availability of instant AI-generated answers might reduce students' motivation to engage in effortful cognitive processes.

Inna Artemova (2024), who has undertaken an analysis of 69 articles for her Digital Education Review paper ‘Bridging Motivation and AI in Education: An Activity Theory Perspective’, concludes “that in 56 research papers motivation is seen as extrinsic, which implies a greater involvement of students in the learning process due to increased interactivity and adaptability of the content (Yang et al., 2020). Through text analysis, it is clear that this type of motivation is driven by motives-stimuli, such as personalised learning environments (Bulathwela et al., 2024), which in fact means that motivation in this case is secondary to the AI implementation and is guided by the AI.”

If I may add a personal viewpoint drawn from my own learning of Spanish using the popular DuoLingo online learning environment, which is heavily gamified and provides personalised learning content: it develops both intrinsic and extrinsic motivation. Particularly effective is the exhortation to practise regularly using the idea of a ‘streak’ based on how many continuous days you have accessed the application (although it also allows a limited number of streak freezes). I have now been learning on DuoLingo for a three-year streak. How effective my learning has proved to be is another question.

Clearly, as with so much on AI and education, this is an emergent area of research with contested viewpoints. But we would tentatively conclude that while Generative AI offers many opportunities to enhance motivation, it may also present challenges that need to be addressed. Educators must be aware of these potential pitfalls and develop strategies to maintain healthy motivational patterns in AI-enhanced learning environments.

References

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., Mariman, R. (2024) Generative AI Can Harm Learning, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486

Bulathwela, S., Pérez-Ortiz, M., Holloway, C., Cukurova, M., & Shawe-Taylor, J. (2024). Artificial Intelligence Alone Will Not Democratise Education: On Educational Inequality, Techno-Solutionism and Inclusive Tools. Sustainability, 16(2), 781. https://doi.org/10.3390/su16020781

Downes, S. (2024) Student use of LLMs can inhibit learning, https://www.downes.ca/post/77127

Henkel, O., Horne-Robinson, H., Kozhakhmetova, N., Lee, A. (2024) Effective and Scalable Math Support: Experimental Evidence on the Impact of an AI Math Tutor in Ghana, https://arxiv.org/ftp/arxiv/papers/2402/2402.09809.pdf

Hidi, S., & Renninger, K. A. (2006). The Four-Phase Model of Interest Development. Educational Psychologist, 41(2), 111–127. https://doi.org/10.1207/s15326985ep4102_4

Artemova, I. (2024) Bridging Motivation and AI in Education: An Activity Theory Perspective, Digital Education Review, https://revistes.ub.edu/index.php/der/article/view/46120

Kolhar, M., Kazi, R. N. A., Alameen, A. (2021) Effect of social media use on learning, social interactions, and sleep duration among university students, Saudi Journal of Biological Sciences, Volume 28, Issue 4.

Lehmann, M., Cornelius, P., Sting, F. (2024) AI Meets the Classroom: When Does ChatGPT Harm Learning?, https://arxiv.org/pdf/2409.09047v1

Mollick, E. (2024) https://www.linkedin.com/posts/emollick_ai-can-help-learning-when-it-isnt-a-crutch-activity-7250556786640924672-Enhg/

National Academies of Sciences, Engineering, and Medicine. 2018. How People Learn II: Learners, Contexts, and Cultures. Washington, DC: The National Academies Press. https://doi.org/10.17226/24783.

Chiu, T. K. F., Lin, T.-J., Lonka, K. (2021) Motivating Online Learning: The Challenges of COVID-19 and Beyond, https://link.springer.com/content/pdf/10.1007/s40299-021-00566-w.pdf

Yang, D., Oh, E.-S., Wang, Y. (2020). Hybrid Physical Education Teaching and Curriculum Design Based on a Voice Interactive Artificial Intelligence Educational Robot. Sustainability, 12(19), 8000. https://doi.org/10.3390/su12198000

Teachers’ and Learners’ Agency and Generative AI

XK Studio & Google DeepMind / Better Images of AI / AI Lands / CC-BY 4.0

It is true that there is plenty being written about AI in education, almost to the extent that it is the only thing being written about education. But, as usual, few people are talking about Vocational Education and Training. And the discourse appears almost to default to a techno-determinist standpoint, whether by intention or not. Thus, while reams are written on how to prompt Large Language Models, little is being said about the pedagogy of AI. All technology applications favour and facilitate, or hinder and block, pedagogies, whether hidden or not (Attwell and Hughes, 2010).

I got into thinking more about this as a result of two strands of work I am doing at present: one for the EU Digital Education Hub on the explicability of AI in education, and the second for the Council of Europe, which is developing a Reference Framework for Democratic Culture in VET. I was also interested in a worry expressed by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, that machine-centrism is prevailing over human-centrism and that machine agency is undermining human agency.

Research undertaken into Personal Learning Environments (Buchem, Attwell and Torres, 2011) and into the impact of online learning during the Covid-19 pandemic has pointed to the importance of agency for learning. Arguing for a fairer, usefully transparent and more responsible online environment, Virginia Portillo et al. (2024) say young people have “a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why, and policy recommendations for meaningful algorithmic transparency and accountability. Finally, participants claimed that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained.”

The previous research into Personal Learning Environments suggests that agency is central to the development of Self-Regulated Learning (SRL), which is important for Lifelong Learning and Vocational Education and Training. Self-Regulated Learning is “the process whereby students activate and sustain cognition, behaviors, and affects, which are systematically oriented toward attainment of their goals” (Schunk & Zimmerman, 1994). And SRL drives the “cognitive, metacognitive, and motivational strategies that learners employ to manage their learning” (Panadero, 2017).

Metacognitive strategies guide learners’ use of cognitive strategies to achieve their goals, including setting goals, monitoring learning progress, seeking help, and reflecting on whether the strategies used to meet the goal were useful (Pintrich, 2004; Zimmerman, 2008).

The introduction of generative AI in education raises important questions about learner agency. Agency refers here to the capacity of individuals to act independently and make their own free choices (Bandura, 2001). In the context of AI-enhanced learning, agency can be both supported and challenged in several ways. In a recent paper, ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, Hidalgo (2024) identifies three different approaches to agency related to AI for education. The first is how AI systems have been developed throughout their lifecycle to serve human agency. The second is human beings’ capacity to exert their rights by controlling the decision-making process in the interaction with AI. The third is that people should be able to understand AI’s impact on their lives and how to benefit from the best of what AI offers. Hidalgo says: “These three understandings entail different forms of responsibility for the actors involved in the design, development, and use of AI in education. Understanding the differences can guide lawmakers, research communities, and educational practitioners to identify the actors’ roles and responsibility to ensure student and teacher agency.”

Generative AI can provide personalised learning experiences tailored to individual students' needs, potentially enhancing their sense of agency by allowing them to progress at their own pace and focus on areas of personal interest. However, this personalisation may also raise concerns about the AI system's influence on learning paths and decision-making processes. In a new book, Creative Applications of Artificial Intelligence in Education, the editors Alex Urmeneta and Margarida Romero explore creative applications of AI across various levels, from K-12 to higher education and professional training. The book addresses key topics such as preserving teacher and student agency, digital acculturation, citizenship in the AI era, and international initiatives supporting AI integration in education. It also examines students' perspectives on AI use in education, affordances for AI-enhanced digital game-based learning, and the impact of generative AI in higher education.

To foster agency using Generative AI they propose the following:

1. Involve students in decision-making processes regarding AI implementation in their education.

2. Teach critical thinking skills to help students evaluate and question AI-generated content.

3. Encourage students to use AI as a tool for enhancing their creativity rather than replacing it.

4. Provide opportunities for students to customize their learning experiences using AI.

5. Maintain a balance between AI-assisted learning and traditional human-led instruction.

Agency is also strongly interlinked to motivation for learning. This will be the subject of a further blog post.

References

Urmeneta, A. and Romero, M. (Eds.) (2024) Creative Applications of Artificial Intelligence in Education, Springer, https://link.springer.com/book/10.1007/978-3-031-55272-4#keywords

Attwell, G. and Hughes, J. (2010) Pedagogic approaches to using technology for learning: literature review, https://www.researchgate.net/publication/279510494_Pedagogic_approaches_to_using_technology_for_learning_literature_review

Bandura, A. (2001) Social Cognitive Theory of Mass Communication, Media Psychology, 3, pp. 265-299, https://api.semanticscholar.org/CorpusID:35687430

Buchem, I., Attwell, G. and Torres, R. (2011) Understanding Personal Learning Environments: Literature review and synthesis through the Activity Theory lens, https://www.researchgate.net/publication/277729312_Understanding_Personal_Learning_Environments_Literature_review_and_synthesis_through_the_Activity_Theory_lens

Hidalgo, C. (2024) ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’. In: Olney, A.M., Chounta, IA., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol 14830. Springer, Cham. https://doi.org/10.1007/978-3-031-64299-9_27

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2017.00422

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.

Schunk, D. H., & Zimmerman, B. J. (1994). Self-regulation of learning and performance: Issues and educational applications. Lawrence Erlbaum Associates Inc.

Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166-183.