Teachers’ and Learners’ Agency and Generative AI

XK Studio & Google DeepMind / Better Images of AI / AI Lands / CC-BY 4.0

It is true that there is plenty being written about AI in education, almost to the extent that it is the only thing being written about education. But, as usual, few people are talking about Vocational Education and Training. And the discourse appears almost to default to a techno-determinist standpoint, whether by intention or not. Thus, while reams are written on how to prompt Large Language Models, little is being said about the pedagogy of AI. All technology applications favour and facilitate, or hinder and block, pedagogies, whether hidden or not (Attwell and Hughes, 2010).

I got into thinking more about this as a result of two strands of work I am doing at present: one for the EU Digital Education Hub on the explicability of AI in education, and the second for the Council of Europe, which is developing a Reference Framework for Democratic Culture in VET. I was also struck by a worry expressed by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, that machine-centrism is prevailing over human-centrism and machine agency is undermining human agency.

Research undertaken into Personal Learning Environments (Buchem, Attwell and Torres, 2011) and into the impact of online learning during the Covid-19 pandemic has pointed to the importance of agency for learning. Arguing for a fairer, meaningfully transparent and more responsible online environment, Virginia Portillo et al. (2024) report that young people expressed "a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why", alongside policy recommendations for meaningful algorithmic transparency and accountability. "Finally, participants claimed that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained."

The previous research into Personal Learning Environments suggests that agency is central to the development of Self-Regulated Learning (SRL), which is important for Lifelong Learning and for Vocational Education and Training. Self-Regulated Learning is "the process whereby students activate and sustain cognition, behaviors, and affects, which are systematically oriented toward attainment of their goals" (Schunk & Zimmerman, 1994). And SRL drives the "cognitive, metacognitive, and motivational strategies that learners employ to manage their learning" (Panadero, 2017).

Metacognitive strategies guide learners’ use of cognitive strategies to achieve their goals, including setting goals, monitoring learning progress, seeking help, and reflecting on whether the strategies used to meet the goal were useful (Pintrich, 2004; Zimmerman, 2008).

The introduction of generative AI in education raises important questions about learner agency. Agency refers here to the capacity of individuals to act independently and make their own free choices (Bandura, 2001). In the context of AI-enhanced learning, agency can be both supported and challenged in several ways. In a recent paper, ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, Hidalgo (2024) identifies three different approaches to agency related to AI for education. The first is how AI systems have been developed throughout their lifecycle to serve human agency. The second is human beings’ capacity to exert their rights by controlling the decision-making process in their interaction with AI. The third is that people should be able to understand AI’s impact on their lives and how to benefit from the best of what AI offers. As Cesar Hidalgo says: “These three understandings entail different forms of responsibility for the actors involved in the design, development, and use of AI in education. Understanding the differences can guide lawmakers, research communities, and educational practitioners to identify the actors’ roles and responsibility to ensure student and teacher agency.”

Generative AI can provide personalized learning experiences tailored to individual students' needs, potentially enhancing their sense of agency by allowing them to progress at their own pace and focus on areas of personal interest. However, this personalization may also raise concerns about the AI system's influence on learning paths and decision-making processes. In a new book, "Creative Applications of Artificial Intelligence in Education", the editors Alex Urmeneta and Margarida Romero explore creative applications of AI across various levels, from K-12 to higher education and professional training. The book addresses key topics such as preserving teacher and student agency, digital acculturation, citizenship in the AI era, and international initiatives supporting AI integration in education. It also examines students' perspectives on AI use in education, affordances for AI-enhanced digital game-based learning, and the impact of generative AI in higher education.

To foster agency using Generative AI, they propose the following:

1. Involve students in decision-making processes regarding AI implementation in their education.

2. Teach critical thinking skills to help students evaluate and question AI-generated content.

3. Encourage students to use AI as a tool for enhancing their creativity rather than replacing it.

4. Provide opportunities for students to customize their learning experiences using AI.

5. Maintain a balance between AI-assisted learning and traditional human-led instruction.

Agency is also strongly interlinked to motivation for learning. This will be the subject of a further blog post.

References

Urmeneta, A. and Romero, M. (eds.) (2024) Creative Applications of Artificial Intelligence in Education, Springer, https://link.springer.com/book/10.1007/978-3-031-55272-4

Attwell, G. and Hughes, J. (2010) Pedagogic approaches to using technology for learning: literature review, Lifelong Learning UK, https://www.researchgate.net/publication/279510494_Pedagogic_approaches_to_using_technology_for_learning_literature_review

Bandura, A. (2001) Social Cognitive Theory of Mass Communication, Media Psychology, 3, pp. 265–299, https://api.semanticscholar.org/CorpusID:35687430

Buchem, I., Attwell, G. and Torres, R. (2011) Understanding Personal Learning Environments: Literature review and synthesis through the Activity Theory lens, https://www.researchgate.net/publication/277729312_Understanding_Personal_Learning_Environments_Literature_review_and_synthesis_through_the_Activity_Theory_lens

Hidalgo, C. (2024) ‘Agency in AI and Education Policy: European Resolution Three on Harnessing the Potential for AI in and Through Education’, in Olney, A.M., Chounta, I.A., Liu, Z., Santos, O.C. and Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol 14830, Springer, Cham, https://doi.org/10.1007/978-3-031-64299-9_27

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2017.00422

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407.

Schunk, D. H. and Zimmerman, B. J. (1994) Self-regulation of learning and performance: Issues and educational applications, Lawrence Erlbaum Associates.

Zimmerman, B. J. (2008) Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects, American Educational Research Journal, 45(1), pp. 166–183.

Who owns your data?

Photo by Markus Spiske on Unsplash

Arguments over what data should be allowed to be used for training Large Language Models rumble on. Ironically, it is LinkedIn, which hosts hundreds of discussions on AI, that is the latest villain.

The platform updated its policies to clarify data collection practices, but this led to user backlash and increased scrutiny over privacy violations. The lack of transparency regarding data usage and the automatic enrollment of users in AI training has resulted in a significant loss of trust. Users have expressed feeling blindsided by LinkedIn's practices.

In response to user concerns, LinkedIn has committed to updating its user agreements and improving data practices, though skepticism remains among users about the effectiveness of these measures. LinkedIn has provided users with the option to opt out of AI training features through account settings. However, this does not eliminate previously collected data, leaving users uneasy about data handling.

However, it is worth noting that accounts from Europe are not affected at present. It seems that LinkedIn would be breaking European laws if they were to try to do the same within the European Union.

More generally, the UK Open Data Institute says "there is very little transparency about the data used in AI systems - a fact that is causing growing concern as these systems are increasingly deployed with real-world consequences. Key transparency information about data sources, copyright, and inclusion of personal information and more is rarely included by systems flagged within the Partnership on AI’s AI Incidents Database.

While transparency cannot be considered a ‘silver bullet’ for addressing the ethical challenges associated with AI systems, or building trust, it is a prerequisite for informed decision-making and other forms of intervention like regulation."

AI and Ed: pitfalls but encouraging signs

Joahna Kuiper / Better Images of AI / Little data houses / CC-BY 4.0

In August I became hopeful that the hype around Generative AI was beginning to die down. I thought we might get a gap to do some serious research and thinking about the future role of AI in education. I was wrong! Come September, the outpourings on LinkedIn (though I can't really understand how such a boring social media site became the focus for these debates) grew daily. In part this may be because there has now been time for researchers to publish the results of projects actually using Gen AI, and in part because the ethical issues continue to be of concern. But it may also be because a flood of AI-based applications for education is being launched almost every day. As Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO, recently warned: "Big AI companies have been hiring chief education officers, publishing guidance for teachers, and etc. with an intention to promote hype and fictional claims on AI and to drag education and students into AI pitfalls."

He summarised five major AI pitfalls for education:

  1. Fictional hype on AI’s potentials in addressing real-world challenges
  2. Machine-centrism prevailing over human-centrism and machine agency undermining human agency
  3. Sidelining AI’s harmful impact on environment and ecosystems
  4. Covering up on the AI-driven wealth concentration and widened social inequality
  5. Downgrading AI competencies to operational skills bound to commercial AI platforms

UNESCO has published five guiding principles in their AI competency framework for students:
2.1 Fostering critical thinking on the proportionality of AI for real-world challenges
2.2 Prioritizing competencies for human-centred interaction with AI
2.3 Steering the design and use of more climate-friendly AI
2.4 Promoting inclusivity in AI competency development
2.5 Facilitating transferable AI foundations for lifelong learning

And the Council of Europe is looking at how Vocational Education and Training can promote democracy (more on this to come later). At the same time, the discussion on AI Literacy is gaining momentum. But in reality it is hard to see how there is going to be real progress in the use of AI for learning while it remains the preserve of the big tech companies, with their totally technocratic approach to education.

For the last year, I have been saying that the education sector needs itself to be leading developments in AI applications for learning, in a multidisciplinary approach bringing together technicians and scientists with teachers and educational technologists. And of course we need a better understanding of pedagogic approaches to the use of AI for learning, something largely missing from the AI tech industry. A major barrier to this has been the cost of developing Large Language Models, or of deploying applications based on LLMs from the big tech companies.

That having been said, there are some encouraging signs. From a technical point of view, there is a move towards small (and more accessible) language models, benchmarked close to the cutting-edge models. Perhaps more importantly, there is a growing understanding that models can be far more limited in their training and be trained on high-quality data for a specific application. Many of these models are being released as Open Source Software, and Open Source datasets are being released for training new language models. And there are some signs that the education community is itself beginning to develop applications.
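To give a sense of what "more accessible" means in practice, here is a minimal sketch of running a small open-weights model locally with the Hugging Face transformers library. The model name is an illustrative assumption: any small open model from the Hub could be swapped in.

```python
# A minimal sketch: downloading and running a small open-weights language
# model locally. The model name is illustrative, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "In one paragraph, explain what self-regulated learning means."
result = generator(prompt, max_new_tokens=150, do_sample=False)

# The pipeline returns the prompt plus the model's continuation
print(result[0]["generated_text"])
```

A model of this size runs on an ordinary laptop, which is exactly why small open models lower the barrier for the education community to experiment without big-tech budgets.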

AI Tutor Pro is a free app developed by Contact North | Contact Nord in Canada. They say the app enables students to:

  • Learn anything, anytime, anywhere on mobile devices or computers
  • Do so in almost any language of their choice
  • Engage in dynamic, open-ended conversations through interactive dialogue
  • Check their knowledge and skills on any topic
  • Select introductory, intermediate and advanced levels, allowing them to grow their knowledge and skills on any topic.
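For readers curious about what sits behind an app like this, here is a rough sketch of the kind of level-aware dialogue loop such a tutor might use. This is not AI Tutor Pro's actual implementation; the model name, prompt wording and level options are my own assumptions, using the google-generativeai SDK purely for illustration.

```python
# A hypothetical sketch of an AI tutoring dialogue loop, NOT how AI Tutor Pro
# is actually built. Topic, level and prompt wording are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

topic = "photosynthesis"
level = "introductory"  # could also be "intermediate" or "advanced"

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        f"You are a patient tutor. Teach {topic} at an {level} level, "
        "and regularly ask the learner questions to check their understanding."
    ),
)

chat = model.start_chat()
while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    reply = chat.send_message(user_input)  # the SDK keeps the chat history
    print("Tutor:", reply.text)
```

The point of the sketch is that the pedagogically interesting choices (level selection, open-ended dialogue, knowledge checks) live in the prompt and the conversation design, not in the underlying model.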

And the English Department for Education has invited tenders to develop an App for Assessment, based on data that they will supply.

I find this encouraging. If you know of any applications developed with a major input from the education community, I'd like to know. Just use the contact form on this website.

Pedagogical Approaches and Google LM Notebooks

Some ten or so years ago, Jenny Hughes and I were commissioned by Lifelong Learning UK to produce a literature review on pedagogic approaches to using technology for learning. As Wikipedia explains, Lifelong Learning UK (LLUK) was one of the independent Sector Skills Councils (SSCs) for UK employers in the lifelong learning sector. It was responsible for the professional development of all those working in community learning and development, further education, higher education, libraries, archives and information services, and work-based learning across the UK.

As has arisen in the recent debate over the new Labour Government's establishment of Skills England, there is a long tradition in UK education of setting up new agencies and organizations, only to abandon them after a few years. Anyway, just after our report was published, LLUK was shut down. Even so, the report continued to be accessible from ResearchGate and other sites and has over the years gained a respectable number of downloads.

In the last couple of days I have been playing with Google Labs' experimental NotebookLM. I am normally rather underwhelmed by AI applications built for education, but I can see a lot of use for this. It is backed by the Gemini Large Language Model, but the Notebooks are based, they say, only on data sources which you provide. There are then different tools which can be used to support your research.

The literature review on pedagogic approaches was a fairly dense piece of work and seemed to me a good example to play with. I uploaded the 80-page report as a PDF and it gave me the following summary.

This source is a literature review entitled "Pedagogic approaches to using technology for learning", which explores the evolving landscape of technology-enhanced learning. The authors, Graham Attwell and Jenny Hughes, argue that the proliferation of new technologies and internet tools is fundamentally changing the way we learn. They identify key themes driving this change, including the increasing role of digital literacies, the shifting expectations of learners, and the need for new pedagogic models. The review delves into specific pedagogic theories and their implications for using technology, highlighting frameworks like constructionism, activity theory, communities of practice, and Vygotsky's social constructivism.

The document examines the impact of technology on pedagogy in practice, exploring the rhetoric-reality gap and highlighting challenges such as the mismatch between students' digital skills outside school and their limited use within the formal learning environment. They discuss the emergence of personal learning environments (PLEs) as a potential solution to this challenge, and analyse the impact of emerging technologies like Web 3.0 and mobile devices on learning. The authors also examine the role of teacher training and professional development in adapting to this changing technological landscape, highlighting the critical need for training that focuses on pedagogic approaches rather than just technical skills. The review concludes by identifying gaps in existing research, underscoring the need for further investigation into the impact of technology on student performance, the development of evaluation tools specific to e-learning, and the exploration of culturally-specific influences on ICT integration.

Pretty smart, although you would expect an advanced generative LLM to be able to do that. But you can also ask questions, and it suggests examples, for instance "Describe the evolution of digital literacy understandings as outlined in the text." I just uploaded the one report, but it could get much more interesting with multiple documents.
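NotebookLM itself is a closed web application, but the underlying pattern, grounding a model's answers in documents you supply rather than in its general training data, can be sketched with Google's google-generativeai Python SDK. The model name, file name and prompt below are my own assumptions, not NotebookLM's internals.

```python
# A rough sketch of document-grounded querying with the Gemini API.
# This approximates the NotebookLM pattern; it is not NotebookLM itself.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the source document so the model answers from it
report = genai.upload_file(path="pedagogic_approaches_review.pdf")

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [report,
     "Describe the evolution of digital literacy understandings "
     "as outlined in the text."]
)
print(response.text)
```

Passing the uploaded file alongside the question is what keeps the answer tied to the report rather than to whatever the model happens to "know".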

Anyway, on to the killer feature, which may turn out just to be a gimmick. You can ask it to produce a podcast in which two different 'people' discuss your work. And I love it. It has some of the best explanations of Activity Theory, constructionism and Vygotsky's theory of the Zone of Proximal Development I have ever heard. Anyway, do listen. Although ten years old, I think the pedagogic approaches outlined in this paper stand the test of time; even more, I think they are highly relevant for the debate over AI, and the podcast makes the work far more approachable. But if you do want the original report, it is downloadable here.

The AI Assessment Scale

I don't know quite how I have managed to miss this up to now. The AI Assessment Scale (AIAS) has been around for over a year. On the occasion of updating to the latest version (see illustration above), Leon Furze, a consultant, author and PhD candidate, and one of the authors, said in his blog:

The original AIAS and its subsequent formal version (published in JUTLP) represents a moment in time where educational institutions across the world were reaching for something to help with the immediate problems of AI, such as the perceived threat to academic integrity.

Jason Lodge at University of Queensland and TEQSA refers to these as the acute problems of AI, but we recognise the need for robust frameworks that also tackle the chronic problems brought on in some ways by how we approach ideas of assessment and academic integrity in education.

So we have reflected on all of the versions of the AIAS we have seen across the world in K-12 and higher education. We have sought out critique and engaged with diverse perspectives, from school teachers to students, university lecturers, to disability activists, experts in fields including assessment security, cognitive sciences, and pedagogy.

And over the past months, we have refined and invigorated the AI Assessment Scale to bring it up to speed with our current understandings of generative AI and learning.