These technologies are complex…

Nadia Piet and AIxDESIGN & Archival Images of AI / Better Images of AI / Limits of Classification / CC-BY 4.0

There have been some pretty fierce discussions this week between so-called sceptics of Gen AI and its supporters (although much of the shouting is over the terms of the debate).

But it seems pretty incontestable that the big AI technology providers are trying to muscle in on education as a promising market.

In a series of posts on LinkedIn, Ben Williamson from Edinburgh University has looked at the different initiatives by the companies, who, not surprisingly, are offering incentives to sign up with their AI variant. Google, he said, is almost literally buying institutions, with prime ministerial endorsement, to advance its AI interests. Google last week announced the launch of the AI Campus, with UK Prime Minister Sir Keir Starmer attending “to show his support for our groundbreaking initiative to improve digital skills in the UK in our London home and his constituency.” The pilot, they said, will offer students access to cutting-edge resources on AI and machine learning, as well as mentoring and industry expertise from Google, Google DeepMind, and others.

Meanwhile, not to be outdone, Amazon's cloud computing subsidiary AWS - actually its biggest profit centre - announced a $100 million program to provide AI skills to underserved kids.

But as Williamson pointed out, that $100m is pretty restricted, coming as it does in the form of cloud credits as part of the AWS Education Equity Initiative.

These cloud credits, they say, “essentially act like cash that organizations can use to offset the costs of using AWS's cloud services. Recipients can then take advantage of AWS's comprehensive portfolio of cloud technology and advanced AI services..."

And Microsoft, who have already locked many institutions into their Teams app with all kinds of AI add-ons, are telling education institutions that they must upgrade their cloud contracts still further for data governance purposes when they use generative AI.

AI is increasingly seen as a vehicle to expand the cloud business in education, says Williamson, locking education institutions into hard-to-cancel cloud contracts under the guise of claims about AI efficiencies and improvements in outcomes (unproven as they are). He believes AI in education can't be separated from the cloud business model.

In an article on his substack newsletter Edward Ongweso Jr points out: "These technologies are complex: their origins, their development, the motivations driving their financing, the political projects they’re connected to, the products they’re integrated."

This shows a need to go beyond present understandings of AI Literacy to understand the activities, intentions and impact of the big technology companies. And for education, it further suggests the need to develop our own applications, based on open source software and independent of a reliance on these companies. OpenAI, which started with a mission to develop AI to benefit society, now makes no pretense of its profit-driven motivation, and if that means privatizing education, that is not a barrier.

About the image

This image shows a gradual transformation from fish to woman and vice versa - questioning the rigid boundaries of classification and emphasising the fluid, in-between states where entities cannot be neatly boxed into one category or the other. It is particularly reminiscent of early image-generation GAN models, where images could be generated at the midpoint between two concepts in latent space, outputting entertaining visuals while highlighting the complexity, ambiguity, and limits of data labelling.
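
For readers curious what "generating at the midpoint between two concepts in latent space" looks like in practice, here is a minimal sketch, assuming a pretrained generator. The `generate_image` function below is a hypothetical stand-in for a real GAN generator; only the interpolation arithmetic is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 128

def generate_image(z: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: a real GAN generator would map the latent
    # vector to an H x W x 3 image; this placeholder just reshapes part of z.
    return np.tanh(z[:64]).reshape(8, 8)

# Two latent points, e.g. one that a trained model decodes to "fish"
# and one it decodes to "woman".
z_fish = rng.standard_normal(latent_dim)
z_woman = rng.standard_normal(latent_dim)

# Linear interpolation: t=0 gives the first concept, t=1 the second,
# and intermediate t values give the ambiguous, in-between images.
for t in np.linspace(0.0, 1.0, num=7):
    z_mid = (1 - t) * z_fish + t * z_woman
    frame = generate_image(z_mid)
    print(f"t={t:.2f}  frame mean={frame.mean():+.3f}")
```

In practice, spherical interpolation is often preferred to the straight linear version shown here, since GAN latent vectors are typically drawn from a Gaussian, but the linear form is enough to show the idea behind the image.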

AI – Productivity, Jobs and Skills

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Textiles and Tech 1 / CC-BY 4.0

Much of the big excitement about Generative AI was driven by the idea that it would boost productivity (and thus profit). Conversely, one of the fears was that it would lead to job losses, although there was little or no consensus about how severe such losses might be, and indeed some commentators speculated that new jobs created by AI would balance out the losses.

Early research and reports into the impact of AI were conflicted, with increasing levels of hype perhaps overwhelming more sober research findings. And even now there is only limited consensus on the impact of Generative AI on employment.

Let's look first at productivity. Early research has tended to emphasize that less experienced staff have gained most from using AI, with only limited gains for more senior employees, although of course there are big differences between sectors and occupations. A recent report, Reclaim your Day: the impact of AI PCs on Productivity, about a study by Intel which tried to see if AI can save time and boost productivity, found that “current AI PC owners spend longer on tasks than their counterparts using traditional PCs.” According to the study, the users of these AI PCs spent a long time trying to identify “how best to communicate with AI tools to get the desired answers or response,” which is why they took longer. There is also a stark lack of data in the report on how much time was spent monitoring and correcting the AI's outputs. Despite this, the study was optimistic, stating that people need to be better educated in using these AI tools.

Women in Technology has published a study by Sarah Writtenhouse entitled The Great Tech Job Migration is Upon Us - What you need to know about how jobs are adapting to the new tech climate (paywalled), looking at how jobs in the software industry are changing. The software industry is interesting as this is one of the sectors for which the big Gen AI companies have claimed big productivity savings. Software jobs were already in decline, but Writtenhouse says that Software Development job postings on LinkedIn fell almost 25% in October this year, shrinking from 22,000 to just under 17,000. But not all is as it seems, Writtenhouse says:

“These jobs are just evolving into the next generation of software development work by adding new skills to new job titles.

AI, ML, and Cloud Computing Engineers — Just new names for “Software Developer”

In terms of skills, she says: “Python, Java, and C++ are still core skills, but an added upskill to ML frameworks, cloud AI toolsets, and LLM models create new AI-centric development jobs… oops, I mean AI-centric engineering jobs.”

It seems AI Engineer postings rose sharply in October, increasing 55% from September, from 10,000 to almost 16,000, with a doubling in openings for Cloud Computing Engineers and ML Engineers. Similarly, there was an increase in demand for Data Analysts, Data Engineers, and Data Scientists.

I suspect that changes in skills demand and job titles may be more significant than overall employment levels in different sectors. However, this suggests that there will need to be higher levels of advanced skills training. It may well be that those working in the software industry are used to fast-moving technology change, but this may not be the case in other sectors, where professional training is needed to help employees keep up.

About the image

'Textiles and Tech' intertwines the visual elements of circuits and textiles, merging the past and future, wires and strings. The collages draw inspiration from the history of 1960s Silicon Valley, where Navajo women were employed by Fairchild Semiconductor for their weaving expertise to assemble circuits that laid the groundwork for today’s microchips. By compiling archival images of hands, the series seeks to personify the anonymity of tech labor. The strings and wires running through the visuals encourage viewers to reflect: what is uncovered when we pull on these threads?

Social generative AI for education

Ariyana Ahmad & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0

I am very impressed with a paper, Towards social generative AI for education: theory, practices and ethics, by Mike Sharples. Here is a quick summary, but I recommend reading the entire article.

In his paper, Mike Sharples explores the evolving landscape of generative AI in education by discussing different AI system approaches. He identifies several potential AI types that could transform learning interactions: generative AIs that act as possibility generators, argumentative opponents, design assistants, exploratory tools, and creative writing collaborators.

The research highlights that current AI systems primarily operate through individual prompt-response interactions. However, Sharples suggests the next significant advancement will be social generative AI capable of engaging in broader, more complex social interactions. This vision requires developing AI with sophisticated capabilities such as setting explicit goals, maintaining long-term memory, building persistent user models, reflecting on outputs, learning from mistakes, and explaining reasoning.

To achieve this, Sharples proposes developing hybrid AI systems that combine neural networks with symbolic AI technologies. These systems would need to integrate technical sophistication with ethical considerations, ensuring respectful engagement by giving learners control over their data and learning processes.
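
To make the idea a little more concrete, here is a small illustrative sketch (mine, not from Sharples' paper) of how such a hybrid might be wired together: a neural generator proposes a response, a symbolic checker validates it against explicit rules, and a persistent learner model records the interaction and any flagged errors. The names (`neural_generate`, `symbolic_check`, `LearnerModel`) are hypothetical placeholders rather than any real system.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """Persistent record of what the learner has seen and struggled with."""
    history: list = field(default_factory=list)
    misconceptions: list = field(default_factory=list)

def neural_generate(prompt: str) -> str:
    # Placeholder for a call to a language model; here it returns a canned
    # (and deliberately wrong) answer so the checker has work to do.
    return "2 + 2 = 5" if "2 + 2" in prompt else "I am not sure."

def symbolic_check(answer: str) -> bool:
    # Symbolic layer: verify arithmetic claims with explicit rules
    # instead of trusting the neural output.
    if "=" in answer:
        lhs, rhs = answer.split("=", 1)
        try:
            return eval(lhs, {"__builtins__": {}}) == float(rhs)
        except Exception:
            return False
    return True

def tutor_turn(question: str, learner: LearnerModel) -> str:
    draft = neural_generate(question)
    if not symbolic_check(draft):
        # Reflection step: the system notices its own error, records it,
        # and reworks the answer rather than presenting the draft as fact.
        learner.misconceptions.append(draft)
        draft = "Let me rework that step by step rather than guess."
    learner.history.append((question, draft))
    return draft

learner = LearnerModel()
print(tutor_turn("What is 2 + 2?", learner))
print("Flagged outputs:", learner.misconceptions)
```

Even in this toy form, the division of labour is the point: the neural part supplies fluency, while the symbolic part supplies verifiable rules and a persistent record the system can reflect on and explain.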

Importantly, the paper emphasizes that human teachers remain fundamental in this distributed system of human-AI interaction. They will continue to serve as conversation initiators, knowledge sources, and nurturing role models whose expertise and human touch cannot be replaced by technology.

The research raises critical philosophical questions about the future of learning: How can generative AI become a truly conversational learning tool? What ethical frameworks should guide these interactions? How do we design AI systems that can engage meaningfully while respecting human expertise?

Mike Sharples concludes by saying that designing new social AI systems for education requires more than fine-tuning existing language models for educational purposes.

It requires building GenAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and experience with networked machine learning.