AI, Learning and Pedagogy

Yutong Liu / Better Images of AI / Joining the Table / CC-BY 4.0

In the latest edition of Dr Phil's newsletter, entitled 'The Impact of Gen AI on Human Learning: a research summary', Phil Hardman undertakes a literature review of the most recent and important peer-reviewed studies.

And in contrast to some of the studies currently coming out, which tend to claim either amazing success or doom-laden failure for the use of AI for learning, she adopts an analytical and nuanced viewpoint, examining the evidence, drawing key takeaways from each report, and setting out the implications for educators and developers.

Here are the key takeaways from each of the five studies.

Study 1

  1. Surface-Level Gains: Generative AI tools like ChatGPT improve task-specific outcomes and engagement but have limited impact on deeper learning, such as critical thinking and analysis.
  2. Emotional Engagement: While students feel more motivated when using ChatGPT, this does not always translate into better long-term knowledge retention or deeper understanding.

Study 2

  1. Over-reliance on AI tools hinders foundational learning, especially for beginners.
  2. Advanced learners can better leverage AI tools to enhance skill acquisition.
  3. Using LLMs for explanations (rather than debugging or code generation) appears less detrimental to learning outcomes.

Study 3

  1. Scaffolding Through Customisation: Iterative feedback and tailored exercises significantly enhance learning outcomes and long-term retention.
  2. Generic AI Risks Dependency: Relying on AI for direct solutions undermines critical problem-solving skills necessary for independent learning.

Study 4

  1. Offloading Reduces Cognitive Engagement: Delegating tasks to AI tools frees cognitive resources but risks diminishing engagement in complex and analytical thinking.
  2. Age and Experience Mitigate AI Dependence: Older, more experienced users exhibit stronger critical thinking skills and are less affected by cognitive offloading.
  3. Trust Drives Offloading: Increased trust in AI tools encourages over-reliance, further reducing cognitive engagement and critical thinking.

Study 5

  1. Confidence ≠ Competence: Generative AI fosters overconfidence but fails to build deeper knowledge or skills, potentially leading to long-term stagnation.
  2. Reflection and SRL Are Crucial: Scaffolding and guided self-regulated learning (SRL) strategies are needed to counteract the tendency of AI tools to replace active learning.

As Phil Hardman says in the introduction to her article:

At the same time as the use of generic AI for learning proliferates, more and more researchers raise concerns about the impact of AI on human learning. The TLDR is that more and more research suggests that generic AI models are not only suboptimal for human learning — they may actually have an actively detrimental effect on the development of knowledge and skills.

However, she remains convinced that "the potential of AI to transform education remains huge if we shift toward structured and pedagogically optimised systems."

To unlock AI’s transformative potential, she says, "we must prioritise learning processes over efficiency and outputs. This requires rethinking AI tools through a pedagogy-first lens, with a focus on fostering deeper learning and critical thinking."

She provides the following examples (a short code sketch after the list shows how they might look in practice):

  • Scaffolding and Guidance: AI tools should guide users through problem-solving rather than providing direct answers. A math tutor, for instance, could ask, “What formula do you think applies here, and why?” before offering hints.
  • Reflection and Metacognition: Tools should prompt users to critique their reasoning or reflect on challenges encountered during tasks, encouraging self-regulated learning.
  • Critical Thinking Challenges: AI systems could engage learners with evaluative questions, such as “What might be missing from this summary?”
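
To make these principles concrete, here is a minimal sketch of what a pedagogy-first wrapper around a chat model might look like. It is my own illustration, not code from Hardman's article: the OpenAI Python SDK, the model name and the system prompt wording are all assumptions, and any LLM chat API would serve equally well.

```python
# A minimal sketch of a "pedagogy-first" tutor wrapper. It assumes the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the prompt wording and model choice are my own illustrations.
from openai import OpenAI

client = OpenAI()

# Encode the three design principles as standing instructions to the model.
TUTOR_SYSTEM_PROMPT = """You are a tutor, not an answer engine.
1. Scaffolding: never give the final answer first. Ask the learner which
   method or formula they think applies, and why, before offering hints.
2. Reflection: after each exchange, ask the learner to explain their
   reasoning or to name one thing they found difficult.
3. Critical thinking: when the learner submits work, respond with an
   evaluative question such as "What might be missing from this answer?"
"""

def tutor_reply(history: list[dict], learner_message: str) -> str:
    """Append the learner's message to the running chat and return the tutor's next turn."""
    history.append({"role": "user", "content": learner_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; any chat model would do
        messages=[{"role": "system", "content": TUTOR_SYSTEM_PROMPT}, *history],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    chat: list[dict] = []
    print(tutor_reply(chat, "Solve x**2 - 5*x + 6 = 0 for me."))
```

The point of such a design is that the pedagogy lives in the standing instructions rather than in the learner's prompt, so the tool defaults to questioning rather than answering.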

It's well worth reading the full article. Phil Hardman seems to be one of the few writing about AI from a pedagogic starting point.

About the Image

This illustration draws inspiration from Leonardo da Vinci’s masterpiece The Last Supper. It depicts a grand discussion about AI. Instead of the twelve apostles, I replaced them with the twelve Chinese zodiac animals. In Chinese culture, each zodiac symbolizes distinct personality traits. Around the table, they discuss AI, each expressing their views with different attitudes, which you can observe through their facial expressions. The table is draped with a cloth symbolizing the passage of time, and it’s set with computer-related objects. On the wall behind them is a mural made of binary code. In the background, there’s an apple tree symbolizing wisdom, with its intertwining branches representing neural networks. The apples, as the fruits of wisdom, are not on the tree but stem from the discussions of the twelve zodiacs. Behind the tree is a Windows 98 System window, opening to the outside world. Through this piece, I explore the history of AI and computer development. Using the twelve zodiacs, I emphasize the diversity of voices in this conversation. I hope more people will join in shaping the diverse narratives of AI history in the future.

Digital Pedagogies Rewilded

Ed Dingli for Fine Acts

I've written a lot about AI and education over the last year. I've not written so much about AI and learning, and I'm going to try to remedy this in the next year. I've been writing for the AI Pioneers project, in which Pontydysgu is a partner. But of course AI Pioneers is not the only project around AI funded under the European Erasmus+ programme.

And I very much like the HIP (Hacking Innovative Pedagogies: Digital Education Rewilded) Erasmus+ project, carried out by the University of Graz, Aalborg University and Dublin City University.

They quote Beskorsa et al. (2023) saying:

Hacking innovative pedagogy means using existing methods or tools, spicing them up with creativity and curiosity and then using them to find new, exciting, or out-of-the-box solutions. It fosters experimentation, exploration, collaboration, and the integration of technology to promote critical thinking, problem solving and other key 21st century skills.

The website is beautifully designed and a lot of fun.

And on February 20 and 21 they are holding a symposium in Dublin. This is the description:

A symposium for thinking otherwise about critical AI and post-AI pedagogies of higher education as part of the Erasmus+ Hacking Innovative Pedagogies: Digital Learning Rewilded project.

This symposium aims to bring educators, learners, and interested others together to see how we might co-design futures beyond the calculative and output-obsessed forms which GenAI could funnel us into if we are not careful. It seeks to explore ways of teaching and learning that are based on mutualism, that recognise teaching as distributed activity and that honour our deep imaginative capacities for good (Czerniewicz & Cronin, 2023). We need to craft critical, creative and ethical responses in community to help address the multitude of issues now posed to educational assessment, future jobs, the environment, biases and increases in cyber-crime and deepfakes.

Come and help us think together during this event so as to rewild our pedagogical thinking and futures dreaming (Beskorsa et al., 2023; Lyngdorf et al., 2024). In the words of Dr. Ruha Benjamin, we invite you to “invoke stories and speculation as surrogates, playing and poetry as proxies, and myths, visions, and narratives all as riffs on the imagination” (Benjamin, 2024, p. ix).

The symposium is free to attend, in person or online.

How might AI support how people learn outside the classroom?

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Data Mining 3 / CC-BY 4.0

Every day hundreds of posts are written on social media about AI and education. Every day yet more papers are published about AI and education, and yet more webinars, seminars and conferences are held. Yet nearly all of them are about formal education, education in the classroom. But as Stephen Downes says in a commentary on a blog by Alan Levine, we need more on how people actually teach and actually learn. "We get a lot in the literature about how it happens in the classroom. But the classroom is a very specialized environment, designed to deal with the need to foster a common set of knowledge and values on a large population despite constraints in staff and resources. But if we go out into homes or workplaces, we see teaching and learning happening all the time..."

And of course people learn in different ways: through being shown how to do something, through watching a video, through working, playing and talking. Sadly, in all these discussions about AI and education there is little about how people learn, and even less on how AI might support (or hinder) informal learning.

Social generative AI for education

Ariyana Ahmad & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0

I am very impressed with a paper, Towards social generative AI for education: theory, practices and ethics, by Mike Sharples. Here is a quick summary, but I recommend reading the entire article.

In his paper, Mike Sharples explores the evolving landscape of generative AI in education by discussing different AI system approaches. He identifies several potential AI types that could transform learning interactions: generative AIs that act as possibility generators, argumentative opponents, design assistants, exploratory tools, and creative writing collaborators.

The research highlights that current AI systems primarily operate through individual prompt-response interactions. However, Sharples suggests the next significant advancement will be social generative AI capable of engaging in broader, more complex social interactions. This vision requires developing AI with sophisticated capabilities such as setting explicit goals, maintaining long-term memory, building persistent user models, reflecting on outputs, learning from mistakes, and explaining reasoning.
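
As a thought experiment, here is a minimal sketch of two of these capabilities: a learner model persisted between sessions (long-term memory) and a reflect-then-revise step before each reply. It is my own illustration, not code from Sharples's paper, and it assumes the same OpenAI Python SDK as the earlier sketch; all names here are hypothetical.

```python
# A sketch of two capabilities from Sharples's list: a persistent learner
# model and reflection on outputs. My own illustration, not the paper's code.
import json
from dataclasses import asdict, dataclass, field
from pathlib import Path

from openai import OpenAI

client = OpenAI()

@dataclass
class LearnerModel:
    """What the system remembers about one learner between sessions."""
    goals: list[str] = field(default_factory=list)
    misconceptions: list[str] = field(default_factory=list)

    def save(self, path: str) -> None:
        Path(path).write_text(json.dumps(asdict(self)))

    @classmethod
    def load(cls, path: str) -> "LearnerModel":
        p = Path(path)
        return cls(**json.loads(p.read_text())) if p.exists() else cls()

def ask(prompt: str) -> str:
    """A single LLM call; the model name is an arbitrary choice."""
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

def respond(learner: LearnerModel, message: str) -> str:
    """Draft a reply, critique it, then revise: a crude reflection loop."""
    context = (f"Learner goals: {learner.goals}\n"
               f"Known misconceptions: {learner.misconceptions}")
    draft = ask(f"{context}\nLearner says: {message}\nReply as a tutor.")
    critique = ask("Critique this tutoring reply: does it explain its "
                   f"reasoning and avoid giving the answer away?\n{draft}")
    return ask(f"Revise the reply using the critique.\nReply: {draft}\n"
               f"Critique: {critique}")
```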

To achieve this, Sharples proposes developing hybrid AI systems that combine neural networks with symbolic AI technologies. These systems would need to integrate technical sophistication with ethical considerations, ensuring respectful engagement by giving learners control over their data and learning processes.

Importantly, the paper emphasizes that human teachers remain fundamental in this distributed system of human-AI interaction. They will continue to serve as conversation initiators, knowledge sources, and nurturing role models whose expertise and human touch cannot be replaced by technology.

The research raises critical philosophical questions about the future of learning: How can generative AI become a truly conversational learning tool? What ethical frameworks should guide these interactions? How do we design AI systems that can engage meaningfully while respecting human expertise?

Mike Sharples concludes by saying that designing new social AI systems for education requires more than fine-tuning existing language models for educational purposes.

It requires building GenAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and experience with networked machine learning.

AI and Education: Agency, Motivation, Literacy and Democracy

Yutong Liu & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0

Graham Attwell, George Bekiaridis and Angela Karadog have written a new paper, AI and Education: Agency, Motivation, Literacy and Democracy. The paper has been published as a preprint for download on the ResearchGate website.

This is the abstract.

This paper, developed as part of the research being undertaken by the EU Erasmus+ AI Pioneers project, examines the use of generative AI in educational contexts through the lens of Activity Theory. It analyses how the integration of large language models and other AI-powered tools impacts learner agency, motivation, and AI literacy. The authors conducted a multi-pronged research approach including literature review, stakeholder interviews, social media monitoring, and participation in European initiatives on AI in education. The paper highlights key themes around agency, where AI can both support and challenge learner autonomy depending on how the technology is positioned and implemented. It explores the complex relationships between AI, personalization, co-creation, and scaffolding in fostering student agency. The analysis also examines the effects of generative AI on both intrinsic and extrinsic motivation for learning, noting both opportunities and potential pitfalls that require careful consideration by educators. Finally, the paper argues that developing critical AI literacy is essential, encompassing the ability to understand AI capabilities, recognize biases, and evaluate the ethical implications of AI-generated content. It suggests that a broader, more democratic approach to curriculum and learning in vocational education and training is necessary to empower students as active, informed citizens in an AI-driven future. The findings provide an approach to the complex interplay between generative AI, learner agency, motivation, and digital literacy in educational settings, particularly in the context of vocational education and adult learning.