What will happen to jobs with the rise and rise of Generative AI

Photo by Xavier von Erlach on Unsplash

OK, where to start? First, what is Generative AI? It is the posh term for things like ChatGPT from OpenAI or Bard from Google. These Generative AIs, based on Large Language Models, are fast being integrated into all kinds of applications, starting with the chatbot built into Microsoft's Bing search engine and DALL-E, just one of many applications generating images from text or chat descriptions.

Predicting what will happen with jobs is a tricky business. Jobs have been threatened by successive waves of technology, yet in general the overall effect on employment appears to have been less than was predicted. Of course there was a vast shift in employment with the advent of mechanization in agriculture, but that took place around the end of the 19th century, at least in some countries. And it's pretty easy to find jobs that have disappeared in recent times - for instance employment in video shops. But in general it appears that disruption has been less than predicted in various surveys and reports. Technology has been used to increase productivity - for example in shops using self-checkouts and automated stock management - or to complement working processes and tasks rather than substitute for workers, while also generating new jobs to work with the technology.

But what is going to happen this time round? There are all sorts of predictions and speculation - not helped by the fact that no one quite knows what Generative AI is capable of, and it is even harder to say what it will be able to do in the very near future. Bill Gates (the founder of Microsoft) has said the development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. There is too much press and media speculation to even sum up the general reaction to the release of these new AI models and applications, although Stephen Downes is making a valiant attempt in his OLDaily newsletter. Personally I enjoyed UK restaurant critic Jay Rayner's account in the Guardian newspaper of when he asked ChatGPT to write a restaurant review in his own inimitable style. Of course, along with concerns over the impact on employment and jobs, there is much concern over the ethical implications of the new AI models, although it is worth noting that Ilkka Tuomi, writing on LinkedIn (his posts are well worth following), has noted that the EU has been an early mover in policy and regulation. Ilkka also, while noting that education (and teaching) is more than just knowledge transformation, says "dialogue and learning by teaching are very powerful pedagogical approaches and generative AI can be used in many different ways in learning and education". He concludes by saying: "This really could have a transformative impact."

Anyway, back to the more general impact on jobs, which is an issue for the new EU AI Pioneers project, which focuses on the impact on Vocational Education and Training and Adult Education. Last weekend saw the release of a report by Goldman Sachs predicting that as many as 300 million jobs could be affected by generative AI and that the labor market could face significant disruption. However, they suggest that "most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI". In the US they estimate 7% of jobs could be replaced by AI, with 63% being complemented by AI and 30% being unaffected by it. Perhaps one of the reasons for so much concern is that this wave of automation seems most likely to impact on skilled work, with, says Goldman Sachs, office and administrative support positions at the greatest risk of task replacement (46%), followed by legal positions (44%) and architecture and engineering jobs (37%).

What I found most interesting from the full report (rather than the press summaries) is the methodology, of which the report includes quite a detailed description. It says:

Generative AI’s ability to 1) generate new content that is indistinguishable from human-created output and 2) break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects.

The report is based on "data from the O*NET database on the task content of over 900 occupations in the US (and later extend to over 2000 occupations in the European ESCO database) to estimate the share of total work exposed to labor-saving automation by AI by occupation and industry." They assume that AI is capable of completing tasks up to a difficulty of 4 on the 7-point O*NET "level" scale, and "then take an importance- and complexity-weighted average of essential work tasks for each occupation and estimate the share of each occupation's total workload that AI has the potential to replace." They "further assume that occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI."

What are the implications for Vocational Education and Training and Adult Education? It seems clear that a very significant number of workers are going to need some form of training or professional development - at a general level for working with AI, and at a more specific level for undertaking new work tasks with AI. There is little to suggest present education and training systems in Europe can meet these needs, even if we expect a ramping up of online provision. The EU's position seems to be to push the development of Microcredentials, which according to the EU's Cedefop agency "are seen to be fit for purposes such as addressing the needs of the labour market, lifelong learning, upskilling and reskilling, recognising prior learning, and widening access to a greater variety of learners". Yet in their recent report, they say that:

"Microcredentials tend to be a flexible, demand-driven response to the need for skills in the labour market, but they can lack the same trust and recognition enjoyed by full qualifications. In terms of whether and how they might be accommodated within qualification systems, they can pose important questions about how to guarantee their value and currency without undermining both their own flexibility and the stability and dependability of established qualifications."

The need for new skills for AI poses a question of how curricula can be adapted and updated faster than has traditionally been done. It also poses major questions for institutions in adapting course provision to new skill needs at local and regional as well as national level. And of course there are major challenges for the skills and competences of teachers and trainers who, the AI and VET project found, were generally receptive to embracing AI for teaching and learning as well as new curricula content, but felt the need for more support and professional training to update their own skills and knowledge (and this was before the launch of Generative AI models).

All in all, there is a lot to think about here.

ChatGPT and Assessment

Photo by John Schnobrich on Unsplash

In the last few weeks the discussions about technology for education and learning have been dominated by the impact of GPT-3 on the future of education - a discussion which Alexandra Mihai, in a blog entitled Let's get off the fear carousel, characterises as "hysteria".

"The way I see it," she says, "academia's response to ChatGPT is more about academic culture than about the tool itself." As she points out, AI tools are not new and are already in use in a wide range of applications commonly used in education. But probably the greatest concern, or even panic, about ChatGPT is in relation to assessment.

Alexandra draws attention to 7 things that the current debate reveals about our academic culture. Although she is focused on Higher Education, much the same applies to Vocational Education and Training, although I think that many teachers and trainers in VET may be more open to AI, given how it already plays a considerable role in the jobs vocational students are being trained for.

Her 7 things are:

  • Lots of pressure/ high workloads: regardless of our positions, everyone seems to be under a great amount of pressure to perform
  • Non-transparent procedures: university administration is very often a black box with missing or inefficient communication channels
  • Lack of trust in students: this very harmful narrative is unfortunately a premise for many educators, not entirely (or not always) out of bad will but rather stemming from a teacher-centred paradigm which emphasises the idea of control.
  • Stale quality assurance (QA) policies: quality assurance in education is a complex mix of many factors (including faculty professional development, technology integration and academic integrity policies, to name just the more relevant ones for the current debate)
  • Inertia: the biggest enemy, in her opinion. Responding to change in a timely and efficient manner is not one of the strong points of HE institutions.
  • Technological determinism: the only thing that is, she feels, equally if not more dangerous than banning technology is thinking it can solve all problems.

Alexandra wants us to "take a moment to actually talk to and really listen to our students". She says: "All this will help us understand them better and design learning experiences that make sense to them. Not necessarily assignments where they cannot cheat, but activities and assignments they genuinely want to engage in because they see them as relevant for their present and their future."

In an earlier blog she invites us to reflect on two questions.

Firstly, how do you balance three assessment purposes: students' expertise development; backward design and constructive alignment; and feasibility for students, teachers and the organisation?

Secondly, how do you take into account the three principles for optimally balancing different assessment purposes, in order to guide students towards professional independence?

There is no shortage of resources on ChatGPT in education: a list which is growing by the day. Here are five that Alexandra suggests:

  • Assessment in the age of artificial intelligence - great article by Zachari Swiecki et al., with a lot of insights into how we can rethink assessment in a meaningful way;
  • Chatting and Cheating. Ensuring academic integrity in the era of ChatGPT - interesting read by Debby Cotton et al., suggesting a range of strategies that universities can adopt to ensure these tools are used ethically and responsibly;
  • Academic Integrity? - insightful reflection by Matthew Cheney on the concept of academic integrity and its ethical implications;
  • Critical AI: Adapting college writing for the age of language models such as ChatGPT: Some next steps for educators, by Anna Mills and Lauren Goodlad - a useful collection of practices and resources on language models, text generators and AI tools;
  • ChatGPT Advice Academics Can Use Now - very useful advice from various academics, compiled by Susan D'Agostino, on how to harness the potential and avert the risks of AI technology.

AI and VET: MOOC update

I have spent a little time this morning looking at who participated in the MOOC we ran in November and December last year on Artificial Intelligence and Vocational Education and Training. The MOOC was part of the Taccle AI project, funded under the Erasmus+ programme, which has just come to an end.

There were 246 enrolled participants in the German language MOOC and 154 in the English language version.

As might be expected, most of the participants in the German language MOOC were from German speaking countries: 204 were from Germany and 29 from Switzerland. There were three participants each from Spain, Serbia and Italy, and two each from Greece and China. Although many were from education, especially vocational education and training schools, there were also participants from universities, companies, job centres and local and national government organisations.

Participants in the English language MOOC were far more diverse, coming from countries around the world: 46 different countries in total! These were Germany, Australia, Romania, India, Uganda, Spain, Greece, Poland, Belgium, UK, France, Ghana, Albania, Mexico, Pakistan, Namibia, Jordan, Italy, Colombia, United Arab Emirates, Afghanistan, Indonesia, Bosnia and Herzegovina, Finland, Ethiopia, Egypt, Qatar, China, Trinidad and Tobago, Turkey, Portugal, Bangladesh, Guinea, Nigeria, United States, Malaysia, Switzerland, Netherlands, Ireland, Denmark, Hungary, New Zealand, Lithuania, Japan, Vietnam and Sri Lanka.

The MOOC is currently being translated into Russian. If anyone else is interested in translating the MOOC or in reusing parts of it, just get in touch (I would be especially interested if anyone wanted to work with me in translating the contents into Spanish). Everything, or nearly everything, is under a Creative Commons license.

And if you missed out last year, we are planning to reopen the MOOC platform in February.

Artificial Intelligence and ethics

I have written before that despite the obvious ethical issues posed by Artificial Intelligence in general - and particular issues for education - I am not convinced by the various frameworks setting down rubrics for ethics, often voluntary and often developed by professionals from within the AI industry. But I am encouraged by the UK Association for Learning Technology's (ALT) Framework for Ethical Learning Technology, released at their annual conference last week. Importantly, it builds on ALT's professional accreditation framework, CMALT, which has been expanded to include ethical considerations for professional practice and research.

ALT say:

ALT's Framework for Ethical Learning Technology (FELT) is designed to support individuals, organisations and industry in the ethical use of learning technology across sectors. It forms part of ALT's strategic aim to strengthen recognition and representation for Learning Technology professionals from all sectors. The need for such a framework has become increasingly urgent as Learning Technology has been adopted on a larger scale than ever before and as the leading professional body for Learning Technology in the UK, representing 3,500 Members, ALT is well placed to lead this effort. We define Learning Technology as the broad range of communication, information and related technologies that are used to support learning, teaching and assessment. We recognise the wider context of Learning Technology policy, theory and history as fundamental to its ethical, equitable and fair use.

More details and resources are available on the ALT website.