There's an interesting post from Philippa Hardman in her newsletter today. Entitled Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education?, it asks how much, and how well, popular AI tools really support human learning and, in the process, disrupt education. She created a simple evaluation rubric to explore five key research questions:
1. Inclusion of Information
2. Exclusion of Information
3. [De]Emphasis of Information
4. Structure & Flow
5. Tone & Style
Philippa Hardman used her own research articles as the input material, which she fed into what she says are considered to be the three big AI tools for learning: ChatGPT, Claude and NotebookLM.
She prompted each tool in turn to read the article carefully and summarise it, ensuring that it covered all key concepts and ideas, so that she would get a thorough understanding of the article and the research.
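To make that methodology concrete, here is a minimal sketch of how a summarisation prompt of this kind might be sent to a model programmatically. This is purely illustrative: Hardman worked through the tools' own chat interfaces, and the client library, model name and prompt wording below are my assumptions, not her setup.

```python
# Illustrative only: a summarisation prompt of the kind Hardman describes,
# sent via the OpenAI Python client. Model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

article_text = open("research_article.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for one of "the big AI tools"; any chat model would do
    messages=[
        {
            "role": "user",
            "content": (
                "Read the article below carefully and summarise it, ensuring that "
                "you cover all key concepts and ideas so that I get a thorough "
                "understanding of the article and the research.\n\n" + article_text
            ),
        },
    ],
)

print(response.choices[0].message.content)
```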
She provides a detailed table of the results of each of the three applications, and additionally of the NotebookLM podcast application, assessing the strengths and weaknesses of each. She says that "while generative AI tools undoubtedly enhance access to information, they also actively “intervene” in the information-sharing process, actively shaping the type and depth of information that we receive, as well as (thanks to changes in format and tone) its meaning."
She goes on to say:
While popular AI tools are helpful for summarising and simplifying information, when we start to dig into the detail of AI’s outputs we’re reminded that these tools are not objective; they actively “intervene” and shape the information that we consume in ways which could be argued to have a problematic impact on “learning”.
Another thing is also clear: tools like ChatGPT4o, Claude & Notebook are not yet comprehensive “learning tools” or “education apps”. To truly support human learning and deliver effective education, AI tools need to do more than provide access to information—they need to support learners intentionally through carefully selected and sequenced pedagogical stages.
Her closing thoughts are about Redefining the “Learning” Process. She says:
It’s clear that AI tools like ChatGPT, Claude, and NotebookLM are incredibly valuable for making complex ideas more accessible; they excel in summarisation and simplification, which opens up access to knowledge and helps learners take the first step in their learning journey. However, these tools are not learning tools in the full sense of the term—at least not yet.
By labelling tools like ChatGPT 4o, Claude 3.5 & NotebookLM as “learning tools” we perpetuate the common misconception that “learning” is a process of disseminating and absorbing information. In reality, the process of learning is a deeply complex cognitive, social, emotional and psychological one, which exists over time and space and which must be designed and delivered with intention.
Here's the follow-up I promised in my last post about learners' and teachers' Agency and Gen AI.
Motivation plays a crucial role in the learning process. In contrast to behaviorist theories of learning, learners are increasingly seen as active participants in learning, leading to a focus on how learners make sense of and choose to engage with their learning environments (National Academies of Sciences, Engineering, and Medicine, 2018). Cognitive theories, for example, have focused on how learners set goals for learning and achievement and how they maintain and monitor their progress toward those goals. While earlier research focused largely on the classroom environment, newer research, especially following the emergency move to online learning during the Covid-19 pandemic, has looked at the online learning environment mediated by various forms of technology (Chiu, Lin and Lonka, 2021). Social interactions mediated by technology affect learning through their impacts on students’ goals, beliefs, affect, and actions (Kolhar, Kazi and Alameen, 2021).
“Motivation is also increasingly viewed as an emergent phenomenon, meaning it can develop over time and change as a result of one’s experiences with learning and other circumstances” (Chiu, Lin and Lonka, 2021). Research suggests, for example, that aspects of the learning environment can both trigger and sustain a student’s curiosity and interest in ways that support motivation and learning (Hidi and Renninger, 2006). Of course, the converse can also apply, with learning environments reducing motivation.
There is an increasing number of studies looking at how Generative AI affects learning and motivation. Yet many of these attempt to measure the effectiveness of learning and rely on achievement in assessment as a proxy for learning. With the exception of learning to program, there is limited evidence from Vocational Education and Training (VET), despite VET being largely based on learning outcomes. However, measuring effectiveness and motivation in VET is made more complicated by the many different models of VET provision.
Neither is there any consensus about the efficacy of AI for learning. In his OLDaily newsletter, Stephen Downes (2024) discusses a LinkedIn post from Ethan Mollick stating "AI can help learning... when it isn't a crutch." Mollick cites three papers. First, AI Meets the Classroom: When Does ChatGPT Harm Learning? states that "Using LLMs as personal tutors by asking them for explanations improves learning outcomes whereas excessively asking LLMs to generate solutions impairs learning." Second, Generative AI Can Harm Learning reports that "students attempt to use GPT-4 as a 'crutch' during practice problem sessions, and when successful, perform worse on their own", though "These negative learning effects are largely mitigated by the safeguards included in GPT Tutor." Third, Effective and Scalable Math Support concludes that "chat-based tutoring solutions leveraging AI could offer a cost-effective and operationally efficient approach to enhancing learning outcomes for millions of students globally." Downes concludes: “All these results are, at worst, mixed, and at best, show genuine promise in AI for improving learning.”
Of course, motivation is only one factor in improving learning. Motivation is generally divided into intrinsic and extrinsic motivation. Generative AI can potentially enhance intrinsic motivation through immediate feedback and adaptive challenges, and by enabling more creative and open-ended projects that align with students' interests. It can also offer novel and engaging ways to interact with learning materials. But Artemova (2024) says “it has been demonstrated that students are primarily involved with learning activities for reasons other than epistemological curiosity or a desire to learn. Instead, they are motivated by the desire to interact with technology or to meet the expectations set by educational software.”
And while extrinsic motivation can be effective, over-reliance on AI-powered reward systems or gamification elements may lead to a focus on external rewards rather than the inherent value of learning. There is also a danger that students might become overly dependent on AI assistance, potentially undermining their confidence in their own abilities, and that the ease of generating content with AI might raise questions about the authenticity of student work, again affecting intrinsic motivation. A further concern is that the availability of instant AI-generated answers might reduce students' motivation to engage in effortful cognitive processes.
Inna Artemova (2024), who analysed 69 articles for her Digital Education Review paper ‘Bridging Motivation and AI in Education: An Activity Theory Perspective’, concludes “that in 56 research papers motivation is seen as extrinsic, which implies a greater involvement of students in the learning process due to increased interactivity and adaptability of the content (Yang et al., 2020). Through text analysis, it is clear that this type of motivation is driven by motives-stimuli, such as personalised learning environments (Bulathwela et al., 2024), which in fact means that motivation in this case is secondary to the AI implementation and is guided by the AI.”
If I may add a personal viewpoint drawn from my own learning of Spanish with the popular DuoLingo online learning environment: heavily gamified and offering personalised learning content, it develops both intrinsic and extrinsic motivation. Particularly effective is the exhortation to practise regularly through the idea of a ‘streak’, based on how many consecutive days you have accessed the application (although it also allows a limited streak freeze). I have now been learning on DuoLingo for a three-year streak. How effective my learning has proved to be is another question.
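As an aside, the streak mechanic is simple enough to sketch in a few lines of code. The version below is a hypothetical illustration of the general idea (consecutive-day counting with a limited number of "freezes"), not DuoLingo's actual implementation.

```python
# Hypothetical sketch of a gamified "streak" counter with a limited freeze
# allowance; not DuoLingo's actual logic.
from datetime import date

def update_streak(streak: int, freezes: int, last_active: date, today: date):
    """Return (streak, freezes) after the user opens the app on `today`."""
    gap = (today - last_active).days
    if gap <= 1:                      # practised today or yesterday: streak continues
        return streak + (1 if gap == 1 else 0), freezes
    missed = gap - 1                  # whole days with no practice
    if missed <= freezes:             # freezes absorb the missed days
        return streak + 1, freezes - missed
    return 1, freezes                 # streak broken; start again from day one

streak, freezes = 41, 2
streak, freezes = update_streak(streak, freezes, date(2024, 10, 1), date(2024, 10, 3))
print(streak, freezes)  # one missed day absorbed by a freeze -> 42 1
```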
Clearly, as with so much on AI and education, this is an emergent area of research with contested viewpoints. But we would tentatively conclude that while Generative AI offers many opportunities to enhance motivation, it may also present challenges that need to be addressed. Educators must be aware of these potential pitfalls and develop strategies to maintain healthy motivational patterns in AI-enhanced learning environments.
Bulathwela, S., Pérez-Ortiz, M., Holloway, C., Cukurova, M., & Shawe-Taylor, J. (2024). Artificial Intelligence Alone Will Not Democratise Education: On Educational Inequality, Techno-Solutionism and Inclusive Tools. Sustainability, 16(2), 781. https://doi.org/10.3390/su16020781
Downes, S. (2024) Student use of LLMs can inhibit learning, https://www.downes.ca/post/77127
Henkel, O., Horne-Robinson, H., Kozhakhmetova, N., Lee, A. (2024) Effective and Scalable Math Support: Experimental Evidence on the Impact of an AI Math Tutor in Ghana. https://arxiv.org/ftp/arxiv/papers/2402/2402.09809.pdf
Kolhar, M., Kazi, R. N. A., Alameen, A. (2021) Effect of social media use on learning, social interactions, and sleep duration among university students, Saudi Journal of Biological Sciences, 28(4).
Lehmann, M., Cornelius, P., Sting, F. (2024) AI Meets the Classroom: When Does ChatGPT Harm Learning?
Mollick, E. (2024) AI can help learning... when it isn't a crutch, LinkedIn post, https://www.linkedin.com/posts/emollick_ai-can-help-learning-when-it-isnt-a-crutch-activity-7250556786640924672-Enhg/
National Academies of Sciences, Engineering, and Medicine (2018) How People Learn II: Learners, Contexts, and Cultures. Washington, DC: The National Academies Press. https://doi.org/10.17226/24783
Chiu, T. K. F., Lin, T.-J., Lonka, K. (2021) Motivating Online Learning: The Challenges of COVID-19 and Beyond, https://link.springer.com/content/pdf/10.1007/s40299-021-00566-w.pdf
Yang, D., Oh, E.-S., Wang, Y. (2020) Hybrid Physical Education Teaching and Curriculum Design Based on a Voice Interactive Artificial Intelligence Educational Robot. Sustainability, 12(19), 8000. https://doi.org/10.3390/su12198000
It is true that there is plenty being written about AI in education - almost to the extent that it is the only thing being written about education. But, as usual, few people are talking about Vocational Education and Training. And the discourse appears almost to default to a techno-determinist standpoint, whether by intention or not. Thus, while reams are written on how to prompt Large Language Models, little is being said about the pedagogy of AI. All technology applications favour and facilitate, or hinder and block, pedagogies, whether hidden or not (Attwell and Hughes, 2010).
I got into thinking more about this as a result of two strands of work I am doing at present: one for the EU Digital Education Hub on the explicability of AI in education, and the second for the Council of Europe, which is developing a Reference Framework for Democratic Culture in VET. I was also struck by a worry expressed by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, that machine-centrism is prevailing over human-centrism and that machine agency is undermining human agency.
Research undertaken into Personal Learning Environments (Buchem, Attwell and Torres, 2011) and into the impact of online learning during the Covid-19 pandemic has pointed to the importance of agency for learning, and to the need for a fairer, usefully transparent and more responsible online environment. Virginia Portillo et al. (2024) say young people express “a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why”, alongside policy recommendations for meaningful algorithmic transparency and accountability: “participants claimed that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained.”
The previous research into Personal Learning Environments suggests that agency is central to the development of Self-Regulated Learning (SRL), which is important for Lifelong Learning and for Vocational Education and Training. Self-Regulated Learning is “the process whereby students activate and sustain cognition, behaviors, and affects, which are systematically oriented toward attainment of their goals” (Schunk & Zimmerman, 1994). And SRL drives the “cognitive, metacognitive, and motivational strategies that learners employ to manage their learning” (Panadero, 2017).
Metacognitive strategies guide learners’ use of cognitive strategies to achieve their goals, including setting goals, monitoring learning progress, seeking help, and reflecting on whether the strategies used to meet the goal were useful (Pintrich, 2004; Zimmerman, 2008).
The introduction of generative AI in education raises important questions about learner agency. Agency refers here to the capacity of individuals to act independently and make their own free choices (Bandura, 2001). In the context of AI-enhanced learning, agency can be both supported and challenged in several ways. In a recent paper, ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, Hidalgo (2024) identifies three different approaches to agency related to AI for education. The first is how AI systems have been developed throughout their lifecycle to serve human agency. The second is human beings’ capacity to exert their rights by controlling the decision-making process in their interaction with AI. The third is that people should be able to understand AI’s impact on their lives and how to benefit from the best of what AI offers. Cesar Hidalgo says: “These three understandings entail different forms of responsibility for the actors involved in the design, development, and use of AI in education. Understanding the differences can guide lawmakers, research communities, and educational practitioners to identify the actors’ roles and responsibility to ensure student and teacher agency.”
Generative AI can provide personalised learning experiences tailored to individual students' needs, potentially enhancing their sense of agency by allowing them to progress at their own pace and focus on areas of personal interest. However, this personalisation may also raise concerns about the AI system's influence on learning paths and decision-making processes. In a new book, "Creative Applications of Artificial Intelligence in Education", editors Alex U. and Margarida Romero explore creative applications of AI across various levels, from K-12 to higher education and professional training. The book addresses key topics such as preserving teacher and student agency, digital acculturation, citizenship in the AI era, and international initiatives supporting AI integration in education. It also examines students' perspectives on AI use in education, affordances for AI-enhanced digital game-based learning, and the impact of generative AI in higher education.
To foster agency using Generative AI, they propose the following:
1. Involve students in decision-making processes regarding AI implementation in their education.
2. Teach critical thinking skills to help students evaluate and question AI-generated content.
3. Encourage students to use AI as a tool for enhancing their creativity rather than replacing it.
4. Provide opportunities for students to customize their learning experiences using AI.
5. Maintain a balance between AI-assisted learning and traditional human-led instruction.
Agency is also strongly interlinked with motivation for learning. This will be the subject of a further blog post.
References
Alex U. and Margarida Romero (Editors) (2024) Creative Applications of Artificial Intelligence in Education, https://link.springer.com/book/10.1007/978-3-031-55272-4#keywords
Attwell, G. and Hughes, J. (2010) Pedagogic approaches to using technology for learning: literature review, https://www.researchgate.net/publication/279510494_Pedagogic_approaches_to_using_technology_for_learning_literature_review
Bandura, A. (2001) Social Cognitive Theory of Mass Communication, Media Psychology, 3, pp. 265-299, https://api.semanticscholar.org/CorpusID:35687430
Buchem, I., Attwell, G. and Torres, R. (2011) Understanding Personal Learning Environments: Literature review and synthesis through the Activity Theory lens, https://www.researchgate.net/publication/277729312_Understanding_Personal_Learning_Environments_Literature_review_and_synthesis_through_the_Activity_Theory_lens
Hidalgo, C. (2024), ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’ In: Olney, A.M., Chounta, IA., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science(), vol 14830. Springer, Cham. https://doi.org/10.1007/978-3-031-64299-9_27
Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2017.00422
Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.
Schunk, D. H., & Zimmerman, B. J. (1994). Self-regulation of learning and performance: Issues and educational applications. Lawrence Erlbaum Associates Inc.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166-183.
Arguments over what data should be allowed to be used for training Large Language Models rumble on. Ironically, it is LinkedIn, which hosts hundreds of discussions on AI, that is the latest villain.
The platform updated its policies to clarify data collection practices, but this led to user backlash and increased scrutiny over privacy violations. The lack of transparency regarding data usage and the automatic enrollment of users in AI training has resulted in a significant loss of trust. Users have expressed feeling blindsided by LinkedIn's practices.
In response to user concerns, LinkedIn has committed to updating its user agreements and improving data practices. However, skepticism remains among users regarding the effectiveness of these measures. LinkedIn has provided users with the option to opt out of AI training features through account settings. However, this does not eliminate previously collected data, leaving users uneasy about data handling.
However, it is worth noting that accounts from Europe are not affected at present. It seems that LinkedIn would be breaking European laws if they were to try to do the same within the European Union.
More generally, the UK Open Data Institute says "there is very little transparency about the data used in AI systems - a fact that is causing growing concern as these systems are increasingly deployed with real-world consequences. Key transparency information about data sources, copyright, and inclusion of personal information and more is rarely included by systems flagged within the Partnership for AI’s AI Incidents Database.
While transparency cannot be considered a ‘silver bullet’ for addressing the ethical challenges associated with AI systems, or building trust, it is a prerequisite for informed decision-making and other forms of intervention like regulation."
In August I became hopeful that the hype around Generative AI was beginning to die down. I thought we might now get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."
He summarised five major AI pitfalls for education:
Fictional hype on AI’s potentials in addressing real-world challenges
Machine-centrism prevailing over human-centrism and machine agency undermining human agency
Sidelining AI’s harmful impact on environment and ecosystems
Covering up on the AI-driven wealth concentration and widened social inequality
Downgrading AI competencies to operational skills bound to commercial AI platforms
UNESCO has published five guiding principles in their AI competency framework for students:
2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
2.2 Prioritizing competencies for human-centred interaction with AI
2.3 Steering the design and use of more climate-friendly AI
2.4 Promoting inclusivity in AI competency development
2.5 Facilitating transferable AI foundations for lifelong learning
And the Council of Europe is looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time, the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.
For the last year, I have been saying that the education sector needs itself to be leading developments in AI applications for learning, in a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.
That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in their training and can be trained on high-quality data for a specific application. Many of these models are being released as open source software, and open datasets are being released for training new language models. And there are some signs that the education community is itself beginning to develop applications.
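To give a sense of how accessible these smaller open models have become, the sketch below loads a compact instruction-tuned model with the Hugging Face transformers library and asks it a tutoring-style question. The particular model name is just one example of the current crop of small open models (my choice, not an endorsement), and running it locally still requires the transformers and torch packages and a reasonably capable machine.

```python
# A minimal sketch of running a small open language model locally with the
# Hugging Face transformers library. The model ID is one example of the
# current generation of compact open models, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B-parameter open model, downloaded on first run
)

prompt = "Explain, step by step, how to convert 3/8 into a decimal."
result = generator(prompt, max_new_tokens=200)

# The pipeline returns a list with one dict per prompt; print the generated text.
print(result[0]["generated_text"])
```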
AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:
Learn anything, anytime, anywhere on mobile devices or computers
Do so in almost any language of their choice
Engage in dynamic, open-ended conversations through interactive dialogue
Check their knowledge and skills on any topic
Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.
And the Department for Education in England has invited tenders to develop an app for assessment, based on data that they will supply.
I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.