What will happen to jobs with the rise and rise of Generative AI

Photo by Xavier von Erlach on Unsplash

OK, where to start? First, what is Generative AI? It is the posh term for things like ChatGPT from OpenAI or Bard from Google. These Generative AIs, based on Large Language Models, are fast being integrated into all kinds of applications, starting with the chatbot built into Microsoft's Bing search engine and DALL-E, just one of the applications generating images from text or chat descriptions.

Predicting what will happen with jobs is a tricky business. Jobs have been threatened by successive waves of technology, yet in general the overall effect on employment appears to have been less than was predicted. Of course there was a vast shift in employment with the advent of mechanization in agriculture, but that took place around the end of the 19th century, at least in some countries. And it's pretty easy to find jobs that have disappeared in recent times - for instance, employment in video shops. But in general it appears that disruption has been less than predicted in various surveys and reports. Technology has been used to increase productivity - for example in shops using self-checkouts and automated stock management - or to complement working processes and tasks rather than substitute for workers, while also generating new jobs to work with the technology.

But what is going to happen this time round? There are all sorts of predictions and speculation - not helped by no-one quite knowing what Generative AI is capable of and, even harder, what it will be able to do in the very near future. Bill Gates (the founder of Microsoft) has said the development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. There is too much press and media speculation to even sum up the general reaction to the release of these new AI models and applications, although Stephen Downes is making a valiant attempt in his OLDaily newsletter. Personally I enjoyed UK restaurant critic Jay Rayner's account in the Guardian newspaper of when he asked ChatGPT to write a restaurant review in his own inimitable style. Of course, along with concerns over the impact on employment and jobs, there is much concern over the ethical implications of the new AI models, although it is worth noting that Ilkka Tuomi, writing on LinkedIn (his posts are well worth following), has noted that the EU has been an early mover in policy and regulation. Ilkka also, while noting that education (and teaching) is more than just knowledge transformation, says "dialogue and learning by teaching are very powerful pedagogical approaches and generative AI can be used in many different ways in learning and education". He concludes by saying: "This really could have a transformative impact."

Anyway, back to the more general impact on jobs, which is an issue for the new EU AI Pioneers project, which focuses on the impact on Vocational Education and Training and Adult Education. Last weekend saw the release of a report by Goldman Sachs predicting that as many as 300 million jobs could be affected by generative AI and that the labor market could face significant disruption. However, they suggest that "most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI". In the US they estimate 7% of jobs could be replaced by AI, with 63% being complemented by AI and 30% being unaffected by it. Perhaps one of the reasons for so much concern is that this wave of automation seems most likely to impact skilled work, with, says Goldman Sachs, office and administrative support positions at the greatest risk of task replacement (46%), followed by legal positions (44%) and architecture and engineering jobs (37%).

What I found most interesting from the full report (rather than the press summaries) is the methodology. The report includes a quite detailed description. It says:

Generative AI’s ability to 1) generate new content that is indistinguishable from human-created output and 2) break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects.

The report is based on "data from the O*NET database on the task content of over 900 occupations in the US (and later extend to over 2000 occupations in the European ESCO database) to estimate the share of total work exposed to labor-saving automation by AI by occupation and industry." They assume that AI is capable of completing tasks up to a difficulty of 4 on the 7-point O*NET "level" scale and "then take an importance- and complexity-weighted average of essential work tasks for each occupation and estimate the share of each occupation's total workload that AI has the potential to replace." They "further assume that occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI."
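To make the methodology concrete, here is a minimal sketch of how such an exposure share might be computed. The occupations, task weights and difficulty levels below are entirely hypothetical, invented for illustration; they are not drawn from O*NET or from the Goldman Sachs report itself.

```python
# Illustrative sketch of the exposure calculation described above.
# All task data (occupations, weights, levels) are hypothetical.

AI_LEVEL_THRESHOLD = 4  # tasks at O*NET difficulty <= 4 assumed automatable

# Each task is (importance weight, difficulty on the 7-point O*NET "level" scale).
occupation_tasks = {
    "paralegal": [(0.9, 3), (0.7, 4), (0.5, 6)],
    "landscaper": [(0.8, 2), (0.6, 3)],
}
# Occupations dominated by outdoor/physical work are excluded outright.
outdoor_or_physical = {"landscaper"}

def exposure_share(occupation):
    """Importance-weighted share of an occupation's workload that AI
    could potentially replace, per the report's stated assumptions."""
    if occupation in outdoor_or_physical:
        return 0.0  # assumed not automatable by AI
    tasks = occupation_tasks[occupation]
    total = sum(weight for weight, _ in tasks)
    exposed = sum(weight for weight, level in tasks
                  if level <= AI_LEVEL_THRESHOLD)
    return exposed / total

print(f"paralegal exposure: {exposure_share('paralegal'):.0%}")
```

With these made-up numbers the paralegal's two easier tasks (levels 3 and 4) count as exposed while the level-6 task does not, giving an exposure share of about 76% - the same kind of per-occupation figure the report aggregates across industries.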

What are the implications for Vocational Education and Training and Adult Education? It seems clear that a very significant number of workers are going to need some form of training or professional development - at a general level for working with AI, and at a more specific level for undertaking new work tasks with AI. There is little to suggest that present education and training systems in Europe can meet these needs, even if we expect a ramping up of online provision. The EU's position seems to be to push the development of microcredentials, which according to the EU Cedefop agency "are seen to be fit for purposes such as addressing the needs of the labour market, lifelong learning, upskilling and reskilling, recognising prior learning, and widening access to a greater variety of learners". Yet in their recent report, they say that

"Microcredentials tend to be a flexible, demand-driven response to the need for skills in the labour market, but they can lack the same trust and recognition enjoyed by full qualifications. In terms of whether and how they might be accommodated within qualification systems, they can pose important questions about how to guarantee their value and currency without undermining both their own flexibility and the stability and dependability of established qualifications."

The need for new skills for AI poses a question of how curricula can be adapted and updated faster than has traditionally been done. It also poses major questions for institutions in adapting course provision to new skill needs at a local and regional level as well as nationally. Of course there are major challenges for the skills and competences of teachers and trainers, who, the AI and VET project found, were generally receptive to embracing AI for teaching and learning as well as new curricula content, but felt the need for more support and professional training to update their own skills and knowledge (and this was before the launch of Generative AI models).

All in all, there is a lot to think about here.

AI, vocational education and training and the International Baccalaureate

There is really only one story in town when it comes to education technology. After years of forecasting the rise of AI and not a lot happening, the release of ChatGPT and Generative AI programmes has generally panicked institutions worldwide. Indeed, it may seem strange in the future that so much of what was considered learning rested on the essay. Interestingly though, Vocational Education and Training does not suffer from the same obsession, although in some countries VET programmes include school-based learning. The issue for VET is how to measure practical competence, and AI shows little sign of being able to do that. But at the same time Generative AI will have an immense impact on Vocational Education and Training, in that the curricula for almost every occupational subject will need renewal to reflect the introduction of AI in work processes.

Meanwhile the International Baccalaureate has bucked the trend from exam bodies and is embracing the new world of AI. In a statement it said:

The IB believes that artificial intelligence (AI) technology will become part of our everyday lives—like spell checkers, translation software and calculators. We, therefore, need to adapt and transform our educational programmes and assessment practices so that students can use these new AI tools ethically and effectively……

Students should be aware that the IB does not regard any work produced—even only in part—by such tools, to be their own. Therefore, as with any quote or material from another source, it must be clear that AI-generated text, image or graph included in a piece of work, has been copied from such software. The software must be credited in the body of the text and appropriately referenced in the bibliography. As with current practice, an essay which is predominantly quotes will not get many, if any, marks with an IB mark scheme. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography……

Essay writing is, however, being profoundly challenged by the rise of new technology and there’s no doubt that it will have much less prominence in the future…..we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity. These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.”

Public values are key to efficient education and research

For those of us who have been working on AI in Education it is a bit of a strange time. On the one hand it is no longer difficult to interest policy makers, managers or teachers and trainers in AI. But on the other hand, at the moment AI seems to be conflated with the hype around ChatGPT. As one senior policy person said to me yesterday: "I hadn't even heard of Generative AI models until two weeks ago."

And of course there's a lot more happening, or about to happen, not just on the AI side but in general developments and innovation with technology that is likely to impact on education. So much, in fact, that it is hard to keep up. But I think it is important to keep up and not just leave the developing technology to the tech researchers. And that is why I am ultra impressed with the new publication from the Netherlands SURF network - 'Tech Trends 2023'.

In the introduction they say

This trend report aims to help us understand the technological developments that are going on around us, to make sense of our observations, and to inspire. We have chosen the technology perspective to provide an overview of signals and trends, and to show some examples of how the technology is evolving.

SURF scanned multiple trend reports and market intelligence services to identify the big technology themes. They continue:

We identified some major themes: Extended Realities, Quantum, Artificial intelligence, Edge, Network, and advanced computing. We believe these themes cover the major technological developments that are relevant to research and education in the coming years.

But what I particularly like is that for each trend there is a link to public values, and the readiness level as well. The values are taken from the diagram above. As SURF say, "public values are key to efficient education and research."

ChatGPT and Assessment

Photo by John Schnobrich on Unsplash

In the last few weeks the discussions about technology for education and learning have been dominated by the impact of GPT-3 on the future of education – a discussion which Alexandra Mihai, in a blog entitled Let's get off the fear carousel, characterises as "hysteria".

The way I see it, she says, is that "academia's response to ChatGPT is more about academic culture than about the tool itself." As she points out, AI tools are not new and are already in use in a wide range of applications commonly used in education. But probably the most concern, or even panic, about ChatGPT is in relation to assessment.

Alexandra draws attention to 7 things that the current debate reveals about our academic culture. Although she is focused on Higher Education much the same applies to Vocational Education and Training although I think that many teachers and trainers in VET may be more open to AI, given how it already plays a considerable role in the jobs vocational students are being trained for.

Her 7 things are:

  • Lots of pressure/ high workloads: regardless of our positions, everyone seems to be under a great amount of pressure to perform
  • Non-transparent procedures: university administration is very often a black box with missing or inefficient communication channels
  • Lack of trust in students: this very harmful narrative is unfortunately a premise for many educators, not entirely (or not always) out of bad will but rather stemming from a teacher-centred paradigm which emphasises the idea of control.
  • Stale quality assurance (QA) policies: quality assurance in education is a complex mix of many factors (including faculty professional development, technology integration and academic integrity policies, to name just the more relevant ones for the current debate)
  • Inertia: the biggest enemy, in her opinion. Responding to change in a timely and efficient manner is not one of the strong points of HE institutions.
  • Technological determinism: the only thing that is, she feels, equally if not more dangerous than banning technology is thinking it can solve all problems.

Alexandra wants us to "take a moment to actually talk to and really listen to our students". She says: "All this will help us understand them better and design learning experiences that make sense to them. Not necessarily assignments where they cannot cheat, but activities and assignments they genuinely want to engage in because they see them as relevant for their present and their future."

In an earlier blog she invites us to reflect on two questions.

Firstly, how do you balance three assessment purposes: students' expertise development; backward design and constructive alignment; and feasibility for students, teachers and organisation?

Secondly, how do you take into account the three principles for optimally balancing different assessment purposes, in order to guide students towards professional independence?

There is no shortage of resources on ChatGPT in education: a list which is growing by the day. Here are five that Alexandra suggests:

  • Assessment in the age of artificial intelligence – great article by Zachari Swiecki et al., with a lot of insights into how we can rethink assessment in a meaningful way;
  • Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT – interesting read by Debby Cotton et al., suggesting a range of strategies that universities can adopt to ensure these tools are used ethically and responsibly;
  • Academic Integrity? – insightful reflection by Matthew Cheney on the concept of academic integrity and its ethical implications;
  • Critical AI: Adapting college writing for the age of language models such as ChatGPT: Some next steps for educators, by Anna Mills and Lauren Goodlad – a useful collection of practices and resources on language models, text generators and AI tools;
  • ChatGPT Advice Academics Can Use Now – very useful advice from various academics, compiled by Susan D'Agostino, on how to harness the potential and avert the risks of AI technology.