AI and Motivation for Learning

Photo by Tim Mossholder on Unsplash

Here's the follow-up I promised in my last post about learners' and teachers' agency and Generative AI.

Motivation plays a crucial role in the learning process. In contrast to behaviorist theories of learning, learners are increasingly seen as active participants in learning, leading to a focus on how learners make sense of and choose to engage with their learning environments (National Academies of Sciences, Engineering, and Medicine, 2018). Cognitive theories, for example, have focused on how learners set goals for learning and achievement and how they maintain and monitor their progress toward those goals. While earlier research focused largely on the classroom environment, newer research, especially following the turn to emergency online learning during the Covid-19 pandemic, has looked at the online learning environment mediated by various forms of technology (Chiu, Lin, & Lonka, 2021). Social interactions mediated by technology affect learning through their impacts on students' goals, beliefs, affect, and actions (Kolhar, Kazi, & Alameen, 2021).

“Motivation is also increasingly viewed as an emergent phenomenon, meaning it can develop over time and change as a result of one’s experiences with learning and other circumstances” (Chiu, Lin, & Lonka, 2021). Research suggests, for example, that aspects of the learning environment can both trigger and sustain a student’s curiosity and interest in ways that support motivation and learning (Hidi & Renninger, 2006). Of course the converse can also apply, with learning environments reducing motivation.

There is an increasing number of studies looking at how Generative AI impacts learning and motivation. Yet many of these attempt to measure the effectiveness of learning and rely on achievement in assessment as a proxy for learning. With the exception of learning to program, there is limited evidence from Vocational Education and Training, despite VET being largely based on learning outcomes. Measuring effectiveness and motivation in VET is further complicated by the many different models of VET provision.

Neither is there any consensus about the efficacy of AI for learning. In his OLDaily newsletter, Stephen Downes (2024) discusses a LinkedIn post from Ethan Mollick stating "AI can help learning... when it isn't a crutch." Mollick cites three papers. First, AI Meets the Classroom: When Does ChatGPT Harm Learning?, which states "Using LLMs as personal tutors by asking them for explanations improves learning outcomes whereas excessively asking LLMs to generate solutions impairs learning." Second, Generative AI Can Harm Learning says "students attempt to use GPT-4 as a 'crutch' during practice problem sessions, and when successful, perform worse on their own", though "These negative learning effects are largely mitigated by the safeguards included in GPT Tutor." Third, Effective and Scalable Math Support says "chat-based tutoring solutions leveraging AI could offer a cost-effective and operationally efficient approach to enhancing learning outcomes for millions of students globally." Downes concludes: “All these results are, at worst, mixed, and at best, show genuine promise in AI for improving learning.”

Of course, motivation is only one factor in improving learning. Motivation is generally divided into intrinsic and extrinsic motivation. Generative AI can potentially enhance intrinsic motivation through immediate feedback and adaptive challenges, and by enabling more creative and open-ended projects that align with students' interests. It can also offer novel and engaging ways to interact with learning materials. But Artemova (2024) notes: “it has been demonstrated that students are primarily involved with learning activities for reasons other than epistemological curiosity or a desire to learn. Instead, they are motivated by the desire to interact with technology or to meet the expectations set by educational software.”

And while extrinsic motivation can be effective, over-reliance on AI-powered reward systems or gamification elements may lead to a focus on external rewards rather than the inherent value of learning. There is also a danger that students might become overly dependent on AI assistance, potentially undermining their confidence in their own abilities. The ease of generating content with AI might also raise questions about the authenticity of student work, potentially impacting intrinsic motivation. A further concern is that the availability of instant AI-generated answers might reduce students' motivation to engage in effortful cognitive processes.

Inna Artemova (2024), who analysed 69 articles for her Digital Education Review paper ‘Bridging Motivation and AI in Education: An Activity Theory Perspective’, concludes “that in 56 research papers motivation is seen as extrinsic, which implies a greater involvement of students in the learning process due to increased interactivity and adaptability of the content (Yang et al., 2020). Through text analysis, it is clear that this type of motivation is driven by motives-stimuli, such as personalised learning environments (Bulathwela et al., 2024), which in fact means that motivation in this case is secondary to the AI implementation and is guided by the AI.”

If I may add a personal viewpoint drawn from my own learning of Spanish using the popular Duolingo online learning environment, which is heavily gamified and provides personalised learning content: it develops both intrinsic and extrinsic motivation. Particularly effective is the exhortation to practise regularly using the idea of a ‘streak’, based on how many consecutive days you have accessed the application (although it also allows a limited streak freeze). I have now been learning on Duolingo for a three-year streak. How effective my learning has proved to be is another question.

Clearly, as with so much on AI and education, this is an emergent area of research with contested viewpoints. But we would tentatively conclude that while Generative AI offers many opportunities to enhance motivation, it may also present challenges that need to be addressed. Educators must be aware of these potential pitfalls and develop strategies to maintain healthy motivational patterns in AI-enhanced learning environments.

References

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024) Generative AI Can Harm Learning, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486

Bulathwela, S., Pérez-Ortiz, M., Holloway, C., Cukurova, M., & Shawe-Taylor, J. (2024). Artificial Intelligence Alone Will Not Democratise Education: On Educational Inequality, Techno-Solutionism and Inclusive Tools. Sustainability, 16(2), 781. https://doi.org/10.3390/su16020781

Downes, S. (2024) Student use of LLMs can inhibit learning, https://www.downes.ca/post/77127

Henkel, O., Horne-Robinson, H., Kozhakhmetova, N., & Lee, A. (2024) Effective and Scalable Math Support: Experimental Evidence on the Impact of an AI Math Tutor in Ghana, https://arxiv.org/ftp/arxiv/papers/2402/2402.09809.pdf

Hidi, S., & Renninger, K. A. (2006). The Four-Phase Model of Interest Development. Educational Psychologist, 41(2), 111–127. https://doi.org/10.1207/s15326985ep4102_4

Artemova, I. (2024) Bridging Motivation and AI in Education: An Activity Theory Perspective, Digital Education Review, https://revistes.ub.edu/index.php/der/article/view/46120

Kolhar, M., Kazi, R. N. A., & Alameen, A. (2021) Effect of social media use on learning, social interactions, and sleep duration among university students, Saudi Journal of Biological Sciences, 28(4).

Lehmann, M., Cornelius, P., & Sting, F. (2024) AI Meets the Classroom: When Does ChatGPT Harm Learning?, https://arxiv.org/pdf/2409.09047v1

Mollick, E. (2024) AI can help learning... when it isn't a crutch, LinkedIn, https://www.linkedin.com/posts/emollick_ai-can-help-learning-when-it-isnt-a-crutch-activity-7250556786640924672-Enhg/

National Academies of Sciences, Engineering, and Medicine (2018) How People Learn II: Learners, Contexts, and Cultures, Washington, DC: The National Academies Press, https://doi.org/10.17226/24783

Chiu, T. K. F., Lin, T.-J., & Lonka, K. (2021) Motivating Online Learning: The Challenges of COVID-19 and Beyond, https://link.springer.com/content/pdf/10.1007/s40299-021-00566-w.pdf

Yang, D., Oh, E.-S., & Wang, Y. (2020) Hybrid Physical Education Teaching and Curriculum Design Based on a Voice Interactive Artificial Intelligence Educational Robot, Sustainability, 12(19), 8000, https://doi.org/10.3390/su12198000

Teachers’ and Learners’ Agency and Generative AI

XK Studio & Google DeepMind / Better Images of AI / AI Lands / CC-BY 4.0

It is true that there is plenty being written about AI in education - almost to the extent that it is the only thing being written about education. But as usual, few people are talking about Vocational Education and Training. And the discourse appears to almost default to a techno-determinist standpoint - whether by intention or not. Thus while reams are written on how to prompt Large Language Models, little is being said about the pedagogy of AI. All technology applications favour and facilitate, or hinder and block, particular pedagogies, whether hidden or not (Attwell & Hughes, 2010).

I got thinking more about this as a result of two strands of work I am doing at present: one for the EU Digital Education Hub on the explainability of AI in education, and the second for the Council of Europe, who are developing a Reference Framework for Democratic Culture in VET. I was also struck by a worry expressed by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, that machine-centrism is prevailing over human-centrism and that machine agency is undermining human agency.

Research undertaken into Personal Learning Environments (Buchem, Attwell, & Torres, 2011) and into the impact of online learning during the Covid-19 pandemic has pointed to the importance of agency for learning. Calling for a fairer, more transparent and more responsible online environment, Virginia Portillo et al. (2024) say young people have “a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why”, and they make policy recommendations for meaningful algorithmic transparency and accountability: “participants claimed that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained.”

The previous research into Personal Learning Environments suggests that agency is central to the development of Self Regulated Learning (SRL), which is important for Lifelong Learning and for Vocational Education and Training. Self Regulated Learning is “the process whereby students activate and sustain cognition, behaviors, and affects, which are systematically oriented toward attainment of their goals” (Schunk & Zimmerman, 1994). And SRL drives the “cognitive, metacognitive, and motivational strategies that learners employ to manage their learning” (Panadero, 2017).

Metacognitive strategies guide learners’ use of cognitive strategies to achieve their goals, including setting goals, monitoring learning progress, seeking help, and reflecting on whether the strategies used to meet the goal were useful (Pintrich, 2004; Zimmerman, 2008).

The introduction of generative AI in education raises important questions about learner agency. Agency refers here to the capacity of individuals to act independently and make their own free choices (Bandura, 2001). In the context of AI-enhanced learning, agency can be both supported and challenged in several ways. In a recent paper, ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, Hidalgo (2024) identifies three different approaches to agency related to AI for education. The first is how AI systems have been developed throughout their lifecycle to serve human agency. The second is human beings’ capacity to exert their rights by controlling the decision-making process in the interaction with AI. The third is that people should be able to understand AI’s impact on their lives and how to benefit from the best of what AI offers. Cesar Hidalgo says: “These three understandings entail different forms of responsibility for the actors involved in the design, development, and use of AI in education. Understanding the differences can guide lawmakers, research communities, and educational practitioners to identify the actors’ roles and responsibility to ensure student and teacher agency.”

Generative AI can provide personalized learning experiences tailored to individual students' needs, potentially enhancing their sense of agency by allowing them to progress at their own pace and focus on areas of personal interest. However, this personalization may also raise concerns about the AI system's influence on learning paths and decision-making processes. In a new book, "Creative Applications of Artificial Intelligence in Education", editors Alex Urmeneta and Margarida Romero explore creative applications of AI across various levels, from K-12 to higher education and professional training. The book addresses key topics such as preserving teacher and student agency, digital acculturation, citizenship in the AI era, and international initiatives supporting AI integration in education. It also examines students' perspectives on AI use in education, affordances for AI-enhanced digital game-based learning, and the impact of generative AI in higher education.

To foster agency using Generative AI, they propose the following:

1. Involve students in decision-making processes regarding AI implementation in their education.

2. Teach critical thinking skills to help students evaluate and question AI-generated content.

3. Encourage students to use AI as a tool for enhancing their creativity rather than replacing it.

4. Provide opportunities for students to customize their learning experiences using AI.

5. Maintain a balance between AI-assisted learning and traditional human-led instruction.

Agency is also strongly interlinked to motivation for learning. This will be the subject of a further blog post.

References

Urmeneta, A., & Romero, M. (eds.) (2024) Creative Applications of Artificial Intelligence in Education, https://link.springer.com/book/10.1007/978-3-031-55272-4#keywords

Attwell, G., & Hughes, J. (2010) Pedagogic approaches to using technology for learning: literature review, https://www.researchgate.net/publication/279510494_Pedagogic_approaches_to_using_technology_for_learning_literature_review

Bandura, A. (2001) Social Cognitive Theory of Mass Communication, Media Psychology, 3, 265–299, https://api.semanticscholar.org/CorpusID:35687430

Buchem, I., Attwell, G., & Torres, R. (2011) Understanding Personal Learning Environments: Literature review and synthesis through the Activity Theory lens, https://www.researchgate.net/publication/277729312_Understanding_Personal_Learning_Environments_Literature_review_and_synthesis_through_the_Activity_Theory_lens

Hidalgo, C. (2024) ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, in Olney, A.M., Chounta, IA., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol 14830. Springer, Cham. https://doi.org/10.1007/978-3-031-64299-9_27

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2017.00422

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.

Schunk, D. H., & Zimmerman, B. J. (1994). Self-regulation of learning and performance: Issues and educational applications. Lawrence Erlbaum Associates Inc.

Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183.

Pedagogical Approaches and Google LM Notebooks

Some ten or so years ago, Jenny Hughes and I were commissioned by Lifelong Learning UK to produce a literature review on pedagogic approaches to using technology for learning. As Wikipedia explains, Lifelong Learning UK (LLUK) was one of the independent Sector Skills Councils (SSCs) for UK employers in the lifelong learning sector. It was responsible for the professional development of all those working in community learning and development, further education, higher education, libraries, archives and information services, and work based learning across the UK.

As has arisen in the recent debate over the new Labour Government's establishment of SkillsUK, there is a long tradition in UK education of setting up new agencies and organizations, only to abandon them after a few years. Anyway, just after our report was published, LLUK was shut down. Even so, the report has continued to be accessible from ResearchGate and other sites and has over the years gained a respectable number of downloads.

In the last couple of days I have been playing with Google Labs' experimental NotebookLM. I am normally rather underwhelmed by AI applications built for education, but I can see a lot of use for this. It is backed by the Gemini Large Language Model, but the Notebooks are based, they say, only on the data sources which you provide. And then there are different tools which can be used to support your research.

The literature review on pedagogic approaches was a fairly dense piece of work and seemed to me a good example to play with. I uploaded the 80-page report as a PDF and it gave me the following summary.

This source is a literature review entitled "Pedagogic approaches to using technology for learning", which explores the evolving landscape of technology-enhanced learning. The authors, Graham Attwell and Jenny Hughes, argue that the proliferation of new technologies and internet tools is fundamentally changing the way we learn. They identify key themes driving this change, including the increasing role of digital literacies, the shifting expectations of learners, and the need for new pedagogic models. The review delves into specific pedagogic theories and their implications for using technology, highlighting frameworks like constructionism, activity theory, communities of practice, and Vygotsky's social constructivism.

The document examines the impact of technology on pedagogy in practice, exploring the rhetoric-reality gap and highlighting challenges such as the mismatch between students' digital skills outside school and their limited use within the formal learning environment. They discuss the emergence of personal learning environments (PLEs) as a potential solution to this challenge, and analyse the impact of emerging technologies like Web 3.0 and mobile devices on learning. The authors also examine the role of teacher training and professional development in adapting to this changing technological landscape, highlighting the critical need for training that focuses on pedagogic approaches rather than just technical skills. The review concludes by identifying gaps in existing research, underscoring the need for further investigation into the impact of technology on student performance, the development of evaluation tools specific to e-learning, and the exploration of culturally-specific influences on ICT integration.

Pretty smart, although you would expect an advanced Generative LLM to be able to do that. But you can also ask questions, and it suggests examples - for instance, "Describe the evolution of digital literacy understandings as outlined in the text." I only uploaded the one report, but it could get much more interesting with multiple documents.

Anyway, on to the killer feature, which may turn out just to be a gimmick. You can ask it to produce a podcast in which two different 'people' discuss your work. And I love it. It has some of the best explanations of Activity Theory, constructionism and Vygotsky's theory of the Zone of Proximal Development I have ever heard. Anyway, do listen. Although ten years old, I think the pedagogic approaches outlined in this paper stand the test of time - even more, I think they are highly relevant for the debate over AI, and the podcast makes the work far more approachable. But if you do want the original report, it is downloadable here.

AI: education and learning are not the same thing

Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

As the debate rolls on about the use of AI in education, we seem stuck on previous paradigms about how technology can be used to support the existing education system, rather than thinking about AI and learning. Bill Gates said last week: "The dream that you could have a tutor who’s always available to you, and understands how to motivate you, what your level of knowledge is, this software should give us that. When you’re outside the classroom, that personal tutor is helping you out, encouraging you, and then your teacher, you know, talks to the personal tutor." This can be seen in the release of apps designed to make the system run more efficiently and support teachers in producing lesson plans, reducing administration and so on. And for learners, a swathe of tutor apps and agents to help navigate the way through to support skills and knowledge development.

But writing about the popular educational exercise of future forecasting in the European Journal of Education in 2022, Neil Selwyn outlined five broad areas of contention that merit closer attention in future discussion and decision-making. These include, he said:

(1) "taking care to focus on issues relating to 'actually existing' AI rather than the overselling of speculative AI technologies;

(2) clearly foregrounding the limitations of AI in terms of modelling social contexts, and simulating human intelligence, reckoning, autonomy and emotions;

(3) foregrounding the social harms associated with AI use;

(4) acknowledging the value-driven nature of claims around AI; and

(5) paying closer attention to the environmental and ecological sustainability of continued AI development and implementation."

In a recent presentation, Rethinking Education, Ilkka Tuomi, rather than predicting the future of technology in education, reconsiders the purpose of AI in education. AI, he says, "changes knowledge production and use. This has implications for education, research, innovation, politics, and culture. Current educational institutions are answers to industrial-age historical needs."

EdTech, he says, has conflated education and learning, but they are not the same thing. He quotes Biesta (2015), who said "education is not designed so that children and young people might learn – people can learn anywhere and do not really need education for it – but so that they might learn particular things, for particular reasons, supported by particular (educational) relationships.”

He goes on to quote Arendt (1961), who said “Normally the child is first introduced to the world in school. Now school is by no means the world and must not pretend to be; it is rather the institution that we interpose between the private domain of home and the world in order to make the transition from the family to the world possible at all. Attendance there is required not by the family but by the state, that is by the public world, and so, in relation to the child, school in a sense represents the world, although it is not yet actually the world.”

Education 4.0, he says, is supposedly about preparing children for the demands of the future: "Education becomes a skill-production machine." Yet "Skills are typically reflections of existing technology that is used in productive practice" and "Skills change when technology changes." Tuomi notes: "There are now 13 393 skills listed in the European Skills, Competences, and Occupations taxonomy."

Digital skills are special, he says "because the computer is a multi-purpose tool" and "AI skills are even more special, because they interact with human cognition."

Social and emotional “skills”, he argues, rank-order people: "'21st century skills' are strongly linked to human personality, which, by definition, is stable across the life-span", and people can be sorted based on, e.g., “openness to experience,” “conscientiousness,” “agreeableness,” “verbal ability,” “complex problem-solving skills,” etc.

His position is that one's place in these rankings does not change through education; "Instead, training and technology potentially increase existing differences."

Tuomi draws attention to the three social functions of education:

  • "Enculturation: Becoming a member of the adult world, community of practice, or thought community
  • Development of human agency: Becoming a competent participant in social and material worlds with the capability to transform them
  • Reproduction of social structures: Maintaining social continuity; social stratification through qualification and social filtering"

AI in education, he suggests, maps onto each of these functions:

  • Enculturation: "AI for knowledge transfer and mastery"
  • Development of human agency: "AI for augmentation of agency"
  • Reproduction of social structures: "AI for prediction and classification (drop-out / at-risk, high-stakes assessment)"

He then turns to incentives and motives in HE.

Students used to be proud to be on their way to becoming respected experts and professionals in society - for many families, this required sacrifice. Now they are facing LLMs that "know everything". Why, he asks, "should you waste your time in becoming an expert in a world, where the answers and explanations become near zero-cost commodities?" What happens to HE, he asks, "when AI erodes the epistemic function of education? The traditional focus of AI&ED in accelerating learning and achieving mastery of specific knowledge topics is not sustainable."

His proposal is that "The only sustainable advantage for primary and secondary education, will be a focus on the development of human agency. Agency is augmented by technology. Agency is culturally embedded and relies on social collaboration and coordination. Affect and emotion are important and the epistemic function will be increasingly seen from the point of view of cognitive development (not knowledge acquisition). Qualification has already lost importance as the network makes history visible. It still is important for social stratification (in many countries)."

He concludes by reiterating that "Education is a social institution. It should not be conflated with 'learning'. AI vendors typically reinterpret education as learning. Education becomes “personalized” and “individualized,” and the objective changes to fast acquisition of economically useful skills and knowledge. The vendors are looking for education under the lamp-post, but this lamp-post is something they themselves have set up. Very little to do with education."

AI in ED: Equity, XAI and learner agency

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The AI in education theme continues to gather momentum, resulting in a non-stop stream of journal articles, reports, newsletters, blogs and videos. However, while the volume is not diminishing, there seem to be some subtle changes of direction in the messages.

Firstly, despite many schools being wary of Generative AI, there is a growing realisation that students are going to use it anyway, and that the various apps claiming to check student work for AI simply don't work.

At the same time, there is an increasing focus on AI and pedagogy (perhaps linked to the increasing sophistication of Frontier Models from Gen AI, but also the realisation that gimmicks like talking to an AI pretending to be someone famous from the past are just lame). This increased focus on pedagogy is also leading to pressure to involve students in the application of Gen AI for teaching and learning. And two ethical questions have recently emerged. The first is unequal access to AI applications and tools. Inside Higher Ed reports on recent research from the Public Policy Institute of California on disparate access to digital devices and the internet for K-12 students in the nation’s largest state public-school system. Put simply, they say, students who are already at an educational and digital disadvantage because of family income and first-generation constraints are becoming even more so every day, as their peers embrace AI at high rates as a productivity tool and they do not.

And while some tools will remain free, it appears that the most powerful and modern tools will increasingly come at a cost. The UK's Jisc recently reported that access to a full suite of the most popular generative AI tools and education plug-ins currently available could cost about £1,000 (about $1,275) per year. For many students already accumulating student debt and managing the rising cost of living, paying more than $100 per month for competitive AI tools is simply not viable.

A second issue is motivation and agency for students in using AI tools. It may be that the rush to gamification, inspired by apps like Duolingo, is wearing thin; perhaps a more subtle and sustained approach is needed to motivate learners. That may increase the focus on learner agency, which in turn is being seen as linked to Explainable AI (or XAI for short). Research around Learning Analytics has pointed to the importance of students understanding the use and purpose of LA, but also being able to understand why the analytics turn out as they do. And research into Personal Learning Environments has long shown the importance of learner agency in developing meta-cognition in learning. With the development of many applications for personalised learning programmes, it becomes important that learners are able to understand the reasons for their own individual learning pathways and, if necessary, challenge them.

While earlier debates about the ethics of AI in education largely focused on technologies, the newer debates focus more on practices in teaching and learning using AI.