Does generative AI lead to decreased critical thinking?

Elise Racine & The Bigger Picture / Better Images of AI / Glitch Binary Abyss I / CC-BY 4.0

As I have noted before, LinkedIn has emerged as the clearing house for exchanging research and commentary on AI in education. And in this forum, the AI skeptics seem to be winning. Of course the doubts have always been there: hallucinations, bias, lack of agency, impact on creativity and so on. There are also increasing concerns over the environmental impact of Large Language Models. But the big one is the emerging research into the effectiveness of Generative AI for learning.

This week a new study from Microsoft and Carnegie Mellon University found that increased reliance on GenAI in the workplace leads to decreased critical thinking.

The study surveyed 319 knowledge workers and found that higher trust in AI correlates with reduced critical analysis, evaluation, and reasoned judgment. This pattern is seen as particularly concerning because these essential cognitive abilities, once diminished through lack of regular use, are difficult to restore.

The report says:

Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

Generative AI is being sold in the workplace as boosting productivity (and thus profits) through speeding up work. But as AI tools become more capable and trusted, it is being suggested that humans may be unconsciously trading their deep cognitive capabilities for convenience and speed.

About the Image

Giant QR code-like patterns dominate the cityscapes, blending seamlessly with the architecture to suggest that algorithmic systems have become intrinsic to the very fabric of urban life. Towering buildings and the street are covered in these black-and-white codes, reflecting how even the most basic aspects of everyday life— where we walk, work, and live — are monitored. The stark black-and-white aesthetic not only underscores the binary nature of these systems but also hints at what may and may not be encoded and, therefore, lost—such as the nuanced “color” and complexity of our world. Ultimately, the piece invites viewers to consider the pervasive nature of AI-powered surveillance systems, how such technologies have come to define public spaces, and whether there is room for the “human” element. Adobe FireFly was used in the production of this image, using consented original material as input for elements of the images. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.

Adoption and impact

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Turning Threads of Cognition / CC-BY 4.0

When we talk about education and training we tend to focus on teachers and trainers in vocational schools. But there is a whole other sector known as L&D - Learning and Development. According to the Association for Talent Development, the term learning and development,

encompasses any professional development a business provides to its employees. It is considered to be a core area of human resources management, and may sometimes be referred to as training and development, learning and performance, or talent development (TD).

Phil Hardman is a researcher and L&D professional. Her weekly newsletter is interesting because of her focus on pedagogy and AI. In last week's edition she looked at developments in L&D in 2024 and asked how we might progress from adoption of AI to impact with AI in L&D in 2025. It seems to me her analysis accurately portrays where we are in the use of AI in vocational education and training.

First she draws attention to her finding that in L&D teams, around 70% were using tools like ChatGPT, Claude, and Co-Pilot, but were not telling their management.

They were using it mostly for functional tasks:

  • Writing emails
  • Summarising content
  • Creating module titles
  • Basic creative tasks like generating activity ideas

But by late 2023, she says, L&D users gained confidence and began using generic AI tools to tackle more specialised tasks like:

  • Needs analyses
  • Writing learning objectives
  • Designing course outlines
  • Creating instructional scripts
  • Strategic planning

But Phil Hardman says that in her view the shift toward using AI for more specialised L&D tasks revealed a dangerous pattern of reduced quality. “While AI tools like ChatGPT improved performance on functional tasks requiring little domain knowledge (like content summarisation and email writing), they actually decreased performance quality by 19% for more complex, domain-specific tasks that were poorly represented in AI's training data.”

This she calls "the illusion of impact where L&D professionals speed up their workflows and feel more confident about their AI-assisted work, but in practice produce lower quality outputs than they would if they didn’t use AI.”

In explaining the reasons for this she draws attention to “Permission Without Direction.  While organisations granted permission to use AI tools, they provided little strategic direction on how to leverage them effectively.”

She goes on to say “L&D is a highly specialised function requiring specific domain knowledge and skills. Generic AI tools, while powerful, were not optimised for specialised L&D tasks like needs analyses, goal definition, and instructional design decision-making.”

She concludes that “The massive adoption of AI in L&D creates an unprecedented opportunity, but realising its potential requires a fundamental shift in how we think about and implement technology as an industry.”

I wonder if we are facing the same dangers in vocational education and training. It is notable that AI seems to be viewed as a good thing in supporting increased efficiency for teachers, but far less attention is being paid to whether generalised AI tools are leading to better and more effective learning. And equally, although most vocational schools are allowing teachers to use AI, there still appears to be a lack of strategic approaches to its adoption.

About the Image

'Turning Threads of Cognition' is a digital collage highlighting the historical overlay of computing, psychology, and biology that underpins artificial intelligence. The vintage illustration of a head (an ode to Rosalind Franklin, the British chemist whose work was central to the discovery of the double helix structure of DNA) is mapped with labeled sections akin to a phrenology chart. The diagram of the neural network launches the image into the modern day as AI seeks to classify and codify sentiments and personalities based on network science. The phrenology chart highlights the subjectivity embedded in AI’s attempts to classify and predict behaviors based on assumed traits. The background of the Turing Machine and the two anonymous hands pulling on strings of the “neural network” are an ode to the women of Newnham College at Cambridge University who made the code-breaking decryption during World War II possible. Taken together, the collage symbolizes the layers of intertwined disciplines, hidden labor, embedded subjectivity, and material underpinnings of today’s AI technologies.

AI, Learning and Pedagogy

Yutong Liu / Better Images of AI / Joining the Table / CC-BY 4.0

In the latest edition of Dr Phil’s newsletter, entitled 'The Impact of Gen AI on Human Learning: a research summary', Phil Hardman undertakes a literature review of the most recent and important peer-reviewed studies.

And in contrast to some of the studies currently coming out, which tend to claim either amazing success or doom-laden failure for the use of AI for learning, she adopts an analytical and nuanced viewpoint, examining the evidence and providing a list of key takeaways from each report, leading to implications for educators and developers.

Here are the key takeaways from each of the five studies:

Study 1:
  1. Surface-Level Gains: Generative AI tools like ChatGPT improve task-specific outcomes and engagement but have limited impact on deeper learning, such as critical thinking and analysis.
  2. Emotional Engagement: While students feel more motivated when using ChatGPT, this does not always translate into better long-term knowledge retention or deeper understanding.

Study 2:
  1. Over-reliance on AI tools hinders foundational learning, especially for beginners.
  2. Advanced learners can better leverage AI tools to enhance skill acquisition.
  3. Using LLMs for explanations (rather than debugging or code generation) appears less detrimental to learning outcomes.

Study 3:
  1. Scaffolding Through Customisation: Iterative feedback and tailored exercises significantly enhance learning outcomes and long-term retention.
  2. Generic AI Risks Dependency: Relying on AI for direct solutions undermines critical problem-solving skills necessary for independent learning.

Study 4:
  1. Offloading Reduces Cognitive Engagement: Delegating tasks to AI tools frees cognitive resources but risks diminishing engagement in complex and analytical thinking.
  2. Age and Experience Mitigate AI Dependence: Older, more experienced users exhibit stronger critical thinking skills and are less affected by cognitive offloading.
  3. Trust Drives Offloading: Increased trust in AI tools encourages over-reliance, further reducing cognitive engagement and critical thinking.

Study 5:
  1. Confidence ≠ Competence: Generative AI fosters overconfidence but fails to build deeper knowledge or skills, potentially leading to long-term stagnation.
  2. Reflection and SRL Are Crucial: Scaffolding and guided SRL (self-regulated learning) strategies are needed to counteract the tendency of AI tools to replace active learning.

As Phil Hardman says in the introduction to her article:

At the same time as the use of generic AI for learning proliferates, more and more researchers raise concerns about the impact of AI on human learning. The TLDR is that more and more research suggests that generic AI models are not only suboptimal for human learning — they may actually have an actively detrimental effect on the development of knowledge and skills.

However she remains convinced that "the potential of AI to transform education remains huge if we shift toward structured and pedagogically optimised systems."

To unlock AI’s transformative potential, she says, "we must prioritise learning processes over efficiency and outputs. This requires rethinking AI tools through a pedagogy-first lens, with a focus on fostering deeper learning and critical thinking."

She provides the following examples; a rough sketch of how they might fit together in a tool follows the list:

  • Scaffolding and Guidance: AI tools should guide users through problem-solving rather than providing direct answers. A math tutor, for instance, could ask, “What formula do you think applies here, and why?” before offering hints.
  • Reflection and Metacognition: Tools should prompt users to critique their reasoning or reflect on challenges encountered during tasks, encouraging self-regulated learning.
  • Critical Thinking Challenges: AI systems could engage learners with evaluative questions, such as “What might be missing from this summary?”
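
Her article stops at these design principles, but as a rough, illustrative sketch of how the scaffolding, reflection and feedback steps might fit together in a tool, here is a short Python example. The class name, the question wording and the rule-based stand-in for a model call are my own assumptions for illustration, not anything from her article; a real tool would generate each turn with an LLM.

    # A rough, self-contained sketch (not from Hardman's article) of the
    # scaffolding -> reflection -> feedback sequence described above.
    # A real tool would call an LLM at each step; a tiny rule-based
    # stand-in keeps the example runnable without any external API.
    from dataclasses import dataclass, field

    @dataclass
    class ScaffoldedTutor:
        """Walks a learner through guidance and reflection before giving
        feedback, instead of handing over a direct answer."""
        stage: int = 0
        transcript: list[str] = field(default_factory=list)

        def respond(self, learner_input: str) -> str:
            self.transcript.append(f"learner: {learner_input}")
            if self.stage == 0:    # scaffolding: elicit an approach first
                reply = "What formula or approach do you think applies here, and why?"
            elif self.stage == 1:  # reflection / metacognition prompt
                reply = "Before I comment: what might be missing or uncertain in your reasoning?"
            else:                  # only now move to evaluative feedback
                reply = "Thanks. Now compare your attempt with a worked solution and note any gaps."
            self.stage += 1
            self.transcript.append(f"tutor: {reply}")
            return reply

    if __name__ == "__main__":
        tutor = ScaffoldedTutor()
        print(tutor.respond("How do I find the area of a triangle with base 6 and height 4?"))
        print(tutor.respond("Maybe area = base * height / 2, so 12?"))
        print(tutor.respond("I assumed the height given is perpendicular to the base."))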

It's well worth reading the full article. Phil Hardman seems to be one of the few writing about AI from a pedagogic starting point.

About the Image

This illustration draws inspiration from Leonardo da Vinci’s masterpiece The Last Supper. It depicts a grand discussion about AI. Instead of the twelve apostles, I replaced them with the twelve Chinese zodiac animals. In Chinese culture, each zodiac symbolizes distinct personality traits. Around the table, they discuss AI, each expressing their views with different attitudes, which you can observe through their facial expressions. The table is draped with a cloth symbolizing the passage of time, and it’s set with computer-related objects. On the wall behind them is a mural made of binary code. In the background, there’s an apple tree symbolizing wisdom, with its intertwining branches representing neural networks. The apples, as the fruits of wisdom, are not on the tree but stem from the discussions of the twelve zodiacs. Behind the tree is a Windows 98 System window, opening to the outside world. Through this piece, I explore the history of AI and computer development. Using the twelve zodiacs, I emphasize the diversity of voices in this conversation. I hope more people will join in shaping the diverse narratives of AI history in the future.

Alternative AI Futures for Lifelong Learning

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Lovelace GPU / CC-BY 4.0

Last Friday, 24 January, marked the UNESCO International Day of Education. And as part of that, the UNESCO Institute for Lifelong Learning hosted a webinar on ‘Lifelong learning in the age of AI’ aiming “to bring together policymakers, practitioners, and researchers to revisit the idea of lifelong learning in the age of emerging technologies, with a thematic focus on lifelong learning as a concept, workplace learning, digital competencies of adult educators, and bridging the grey digital divide.”

Current policies on AI and lifelong learning, they said, often adopt an instrumental and technologically deterministic approach, prioritising efficiency over human development and agency. “UNESCO is committed to supporting Member States to harness the potential of AI technologies for achieving the promise of lifelong learning opportunities for all, while ensuring that its application in the learning contexts is guided by the core principles of inclusion and equity.”

The webinar would discuss current trends in policy, research, and innovative practices in emerging technologies such as AI, and their relation to lifelong learning and the concept of agency.

One of the speakers was Rebecca Eynon, Professor of Education, the Internet and Society, at the University of Oxford with a presentation entitled 'Reconfiguring Lifelong Learning in the Age of AI: Insights from policy and research'. In many ways her presentation was prescient, coming as it did two days before the news of the DeepSeek Open Source model broke.

Rebecca began by asking what AI in lifelong learning actually is: an approach, or an academic methodology? Motivations for engagement are about researching and facilitating learning (often with a psychological focus on knowledge acquisition), while remaining cautious about the current hype around AI in education. They also encompass the relations between AI and humans when working with AI: AI is assumed to contribute to the increased efficiency of humans and of learning, AI is implemented and conceptualised as a peer or colleague, and AI is viewed as part of a wider reconfiguration of humans and their contexts.

Artificial Intelligence is currently hailed as a 'solution' to perceived problems in education. Though few sociologists of education would agree with its deterministic claims, this AI solutionist thinking is gaining significant currency.

Rebecca went on to explain research using a relatively novel method for sociology - a knowledge graph - which, together with Bourdieusean theory, she said, facilitated a critical examination of how and why different stakeholders in education, educational technology and policy are valorising AI, including their main concepts and motivations.
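
To make the idea of a knowledge graph concrete, here is a small, purely illustrative Python sketch using the networkx library. The stakeholder-concept triples are invented for the example and have nothing to do with Rebecca's actual data, method or findings; they simply show the kind of structure such an analysis can query.

    # Illustrative only: a toy knowledge graph of (stakeholder, relation, concept)
    # triples, invented for this example rather than taken from the research.
    import networkx as nx

    triples = [
        ("edtech vendor", "valorises", "personalisation"),
        ("edtech vendor", "valorises", "efficiency"),
        ("policy maker", "valorises", "efficiency"),
        ("policy maker", "valorises", "skills gaps"),
        ("researcher", "questions", "personalisation"),
    ]

    G = nx.MultiDiGraph()
    for stakeholder, relation, concept in triples:
        G.add_edge(stakeholder, concept, relation=relation)

    # Which concepts attract the most attention across stakeholders?
    concepts = {concept for _, _, concept in triples}
    for concept in sorted(concepts, key=G.in_degree, reverse=True):
        print(concept, G.in_degree(concept))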

Drawing on this analysis, she argues that AI is currently being mobilised in education in problematic ways, and advocates more systematic sociological thinking and research to re-orientate the field to account for society's structural conditions. She pointed to the dominance of the commercial sector and the prevalence of personalisation: the commercial sector tends to dominate conversations about AI and education, but its motivations are based on the needs of the market and promote an individual view of learning where economic agendas predominate.

There is, Rebecca said, almost an absence of AI policy, and specific education actors may well intensify economic and individual notions of education. This has likely implications for what kinds of systems are designed for education. Although this points to an intensification of economic and individual notions of education, it is not inevitable. Change is complex, and there is fragility in the ed-tech market, with some signs of discontent with AI. She pointed to increasing calls for ethical and equitable AI.

Rebecca concluded her presentation by pointing to the need to make visible and understand the networks around AI in education, and the complex ecology involved in changing them. She said we have to work as a community to demand alternative AI futures for Lifelong Learning.

About the Image

Through distortion, this image depicts a pixelized and reconfigured portrait of Ada Lovelace cast on a microchip. Ada Lovelace was an English mathematician who discovered that a computer could follow a sequence of instructions beyond pure calculation. Her contributions laid the groundwork for the programmable computing that underpins the algorithms driving AI advancement. As GPU (Graphics Processing Unit) microchips maximize parallel computing for accelerating tasks like machine learning, the image blends Lovelace’s historical contributions with modern computational technology.