The danger of lock-in

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Analog Lecture on Computing / CC-BY 4.0

One fear among researchers in educational technology and AI is lock-in. It has happened before: companies compete to offer a good deal on applications and services, but a lack of interoperability leaves educational organisations stuck if they want to leave or change providers. It was big news at one time with Learning Management Systems (LMS), but the movement towards standards slowly overcame most of that issue. Now, with the big tech AI companies still searching for convincing real-world use cases and turning their eyes to education, it seems it may be happening again.

OpenAI has said it will roll out an education-specific version of its chatbot to about 500,000 students and faculty at California State University as it looks to expand its user base in the academic sector and counter competition from rivals like Alphabet. The rollout will cover the 23 campuses of the largest public university system in the United States, enabling students to access personalised tutoring and study guides through the chatbot, while faculty will be able to use it for administrative tasks.

Rival Alphabet (that’s Google to you and me) has already been expanding into the education sector, announcing a $120 million investment fund for AI education programs and planning to introduce its GenAI chatbot Gemini to teen students' school-issued Google accounts.

And of course there is Microsoft, which has been using sweetheart deals on its Office suite and email services to education providers, effectively locking them into the Microsoft world, including Microsoft’s AI.

About the Image

This surrealist collage is a visual narrative on education about AI. The juxtaposition of historical and contemporary images underscores the tension between established institutions of learning and the evolving, boundary-pushing nature of AI. The oversized keyboard, with the “A” and “I” keys highlighted in red, serves as a focal point, symbolising the dominance of AI in contemporary discourse, while the vintage image of the woman in historical attire kneeling at the outdated keyboard symbolises a reclamation of voices historically marginalised in technological innovation, drawing attention to the need for diverse perspectives in educating students about the future of AI. Visually reimagining the classroom dynamic critiques the historical gatekeeping of AI knowledge and calls for an educational paradigm that values and amplifies diverse contributions.

Does generative AI lead to decreased critical thinking?

Elise Racine & The Bigger Picture / Better Images of AI / Glitch Binary Abyss I / CC-BY 4.0

As I have noted before, LinkedIn has emerged as the clearing house for exchanging research and commentary on AI in education. And in this forum, the AI sceptics seem to be winning. Of course the doubts have always been there: hallucinations, bias, lack of agency, impact on creativity and so on. There are also increasing concerns over the environmental impact of Large Language Models. But the big one is the emerging research into the effectiveness of Generative AI for learning.

This week a new study from Microsoft and Carnegie Mellon University found that increased reliance on GenAI in the workplace is associated with decreased critical thinking.

The study surveyed 319 knowledge workers and found that higher trust in AI correlates with reduced critical analysis, evaluation, and reasoned judgment. This pattern is seen as particularly concerning because these essential cognitive abilities, once diminished through lack of regular use, are difficult to restore.

The report says:

Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

Generative AI is being sold in the workplace as boosting productivity (and thus profits) through speeding up work. But as AI tools become more capable and trusted, it is being suggested that humans may be unconsciously trading their deep cognitive capabilities for convenience and speed.

About the Image

Giant QR code-like patterns dominate the cityscapes, blending seamlessly with the architecture to suggest that algorithmic systems have become intrinsic to the very fabric of urban life. Towering buildings and the street are covered in these black-and-white codes, reflecting how even the most basic aspects of everyday life— where we walk, work, and live — are monitored. The stark black-and-white aesthetic not only underscores the binary nature of these systems but also hints at what may and may not be encoded and, therefore, lost—such as the nuanced “color” and complexity of our world. Ultimately, the piece invites viewers to consider the pervasive nature of AI-powered surveillance systems, how such technologies have come to define public spaces, and whether there is room for the “human” element. Adobe FireFly was used in the production of this image, using consented original material as input for elements of the images. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.

Adoption and impact

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Turning Threads of Cognition / CC-BY 4.0

When we talk about education and training we tend to focus on teachers and trainers in vocational schools. But there is a whole other sector known as L&D - Learning and Development. According to the Association for Talent Development, the term learning and development,

encompasses any professional development a business provides to its employees. It is considered to be a core area of human resources management, and may sometimes be referred to as training and development, learning and performance, or talent development (TD).

Phil Hardman is a researcher and L&D professional. Her weekly newsletter is interesting because of her focus on pedagogy and AI. In last week’s edition she looked at developments within L&D in 2024 and asked how we might progress from adoption of AI to impact with AI in L&D in 2025. It seems to me her analysis accurately portrays where we are in the use of AI in vocational education and training.

First she draws attention to her finding that in L&D teams around 70% were using tools like ChatGPT, Claude, and Co-Pilot, but were not telling their management.

They were using it mostly for functional tasks:

  • Writing emails
  • Summarising content
  • Creating module titles
  • Basic creative tasks like generating activity ideas

But by late 2023, she says, L&D users gained confidence and began using generic AI tools to tackle more specialised tasks like:

  • Needs analyses
  • Writing learning objectives
  • Designing course outlines
  • Creating instructional scripts
  • Strategic planning

But Phil Hardman says in her view the shift toward using AI for more specialised L&D tasks revealed a dangerous pattern of reduced quality. “While AI tools like ChatGPT improved performance on functional tasks requiring little domain knowledge (like content summarisation and email writing), they actually decreased performance quality by 19% for more complex, domain-specific tasks that were poorly represented in AI's training data.”

This she calls “the illusion of impact where L&D professionals speed up their workflows and feel more confident about their AI-assisted work, but in practice produce lower quality outputs than they would if they didn’t use AI.”

In explaining the reasons for this she draws attention to “Permission Without Direction. While organisations granted permission to use AI tools, they provided little strategic direction on how to leverage them effectively.”

She goes on to say “L&D is a highly specialised function requiring specific domain knowledge and skills. Generic AI tools, while powerful, were not optimised for specialised L&D tasks like needs analyses, goal definition, and instructional design decision-making.”

She concludes that “The massive adoption of AI in L&D creates an unprecedented opportunity, but realising its potential requires a fundamental shift in how we think about and implement technology as an industry.”

I wonder if we are facing the same dangers in vocational education and training. It is notable that AI seems to be viewed as a good thing in supporting increased efficiency for teachers, but far less attention is being paid to whether generalised AI tools are leading to better and more effective learning. And equally, although most vocational schools are allowing teachers to use AI, there still appears to be a lack of strategic approaches to its adoption.

About the Image

'Turning Threads of Cognition' is a digital collage highlighting the historical overlay of computing, psychology, and biology that underpin artificial intelligence. The vintage illustration of a head (an ode to Rosalind Franklin, the British chemist whose work was central to the discovery of the double helix structure of DNA) is mapped with labeled sections akin to a phrenology chart. The diagram of the neural network launches the image into the modern day as AI seeks to classify and codify sentiments and personalities based on network science. The phrenology chart highlights the subjectivity embedded in AI’s attempts to classify and predict behaviors based on assumed traits. The background of the Turing Machine and the two anonymous hands pulling on strings of the “neural network” are an ode to the women of Newnham College at Cambridge University who made the code-breaking decryption during World War II possible. Taken together, the collage symbolizes the layers of intertwined disciplines, hidden labor, embedded subjectivity, and material underpinnings of today’s AI technologies.

AI and the future of jobs: An update

Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0

One feature of the ongoing debates around Generative AI is that almost everything seems to be contested. While the big tech companies are ever bullish about the prospects for their new applications, controversy continues about the wider societal impact of these tools, including on education and employment.

Despite the initial concerns about the impact of Generative AI on employment, it seemed that fears were overblown, although this may now be changing. Even so, the replacement of staff by AI may depend not just on sectors and occupations but also on the organisation and size of companies. Of course the motivation for companies to invest in AI is to increase profits. And it may be that the scale of organisational and workflow change required to introduce more AI has led to smaller companies holding back, as have the ongoing doubts about the reliability of Generative AI applications.

However, there are signs of increasing use of AI in the software industry, albeit for boosting the speed of developing code, leading to higher productivity. More aggressive companies are going further: Meta’s CEO Zuckerberg has said AI will replace mid-level engineers at Facebook, Instagram, and WhatsApp by 2025, and recently said that Meta and other tech companies are working on developing AI systems that are able to do complex coding with minimal human interaction. There is little doubt that creative jobs in the media, film and advertising industries are coming under pressure with the increasing adoption of AI.

The World Economic Forum (WEF) recently released its Future of Jobs Report 2025, including the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report also finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030. Of course the key word here is “could”.

There are two new developments which are worrying for future jobs. The first is AI agents, the latest products from the big tech industry. These are designed to split up work tasks and undertake them semi-autonomously, but for all the hype it remains to be seen how effective such agents might be. The second is the increasing use of AI for training robots. Robots have previously been difficult and expensive to train; AI may substantially reduce the cost of training, leading to a new wave of automation in many industries.

But all this is speculation, and finding reliable research remains a challenge. From an education and training perspective it seems to point to the importance of AI literacy (as an extension of digital literacy) and the need to ramp up continuing training for employees whose work is changing as a result of AI. Interestingly, the WEF report found that 77 percent of surveyed firms plan to launch retraining programmes between 2025 and 2030 to help current workers collaborate with AI systems.

About the Image

'Web of Influence I' is part of the artist's series, 'The Bigger Picture': exploring themes of digital doubles, surveillance, omnipresence, ubiquity, and interconnectedness. Adobe FireFly was used in the production of this image, using consented original material as input for elements of the images. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.