Author: Graham Attwell
AI in Vocational Education and Training: the experience of a teacher
As part of the EU Erasmus+ AI Pioneers project, a cross-sectoral project aiming to promote the use and teaching of artificial intelligence (AI) in adult education and vocational education and training (VET), we are undertaking a series of interviews with vocational teachers and trainers and with teachers in adult education.
This interview, undertaken in late May 2023, is with a computer science teacher from north Germany. The teacher has completed a full course of study as a computer scientist and is presently completing a doctorate, but still has to complete a vocational teacher training course (Studienseminar in German). A German teacher presented ChatGPT at the teacher training course and showed how he had developed an exercise sheet with it. The interviewee showed us a worksheet that he himself had created with ChatGPT. The difficulty, he explained, lies in formulating the correct prompts. He said it was also important to keep the prompts as short as possible and to use as few technical terms as possible. ChatGPT, he said, is only as good as the prompts you enter.
The time needed to generate a worksheet is determined by repeatedly trying out and improving the prompts until the generated worksheet comes close to one's own ideas. It never matches them completely, so the remaining workload consists of manually adjusting the generated result. The worksheet also often contains technical errors that have to be corrected.
The interviewee rates ChatGPT as an auxiliary tool, which is particularly good at solving the time-consuming task of thinking up numerical relations for arithmetic problems. The interviewee estimates that the time needed for a worksheet can be reduced from more than a day to a few hours.
ChatGPT cannot insert photos; these can be generated by other software, e.g. deepai.org.
Longer tasks can be generated, but the more complex and specialised the construct, the worse the result from ChatGPT, and the worse the result, the greater the subsequent revision effort. For more accurate results, it is advisable to let ChatGPT create small sections, which are then merged manually.
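For readers who want to script this section-by-section workflow rather than work in the ChatGPT web interface (as the interviewee did), the following is a minimal sketch using the OpenAI Python client. The model name and the example prompts are illustrative assumptions, not the teacher's actual prompts; the point is simply to mirror his advice: short, plain-language prompts for small sections, merged and then checked manually.

```python
# Minimal sketch of the workflow described above: keep each prompt short,
# generate the worksheet in small sections, and merge the results manually
# afterwards. Assumes the OpenAI Python client and an API key in the
# OPENAI_API_KEY environment variable; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Short, plain-language prompts, one per small section of the worksheet.
section_prompts = [
    "Write three short word problems on percentages for vocational students.",
    "Write two exercises on converting between binary and decimal numbers.",
    "Write one reflection question about when to use a spreadsheet.",
]

sections = []
for prompt in section_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; this name is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    sections.append(response.choices[0].message.content)

# Merge the generated sections into a draft worksheet. As the interviewee
# notes, the draft still needs manual checking for technical errors.
worksheet = "\n\n".join(sections)
print(worksheet)
```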
There has been no debate about the ethical use of AI at the school so far. Many teachers are completely unfamiliar with AI-based methods. Younger teachers use AI for lesson preparation; older teachers reject it, for students and teachers alike, as not being one's own work, and want to ban its use. It is not openly discussed among students or teachers, and its use is considered improper by many.
The interviewee considers that the use of ChatGPT should become a regular method and be taught to both students and teachers in their training. In China, he says, students are taught how to use AI from an early age, whereas in north Germany Word is only taught at a much later age, and that, in the opinion of a computer scientist, is not computer science.
The interviewee considers a network around AI in education and training, like the one planned by the AI Pioneers project, to be enormously important, and would also offer his experiences to date as best practice.
Is Emergence a Mirage?
Nobel Prize-winning physicist P.W. Anderson’s “More Is Different” argues that as the complexity of a system increases, new properties may materialize that cannot (easily or at all) be predicted, even from a precise quantitative understanding of the system’s microscopic details. As Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo from Stanford University explain in a recently published paper, "Emergence has recently gained significant attention in machine learning due to observations that large language models, e.g., GPT, PaLM, LaMDA can exhibit so-called “emergent abilities” across diverse tasks." It has been argued that large language models display emergent abilities not present in smaller-scale models, justifying the huge financial and environmental cost of developing these models.
Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo "present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale."
Their paper, Are Emergent Abilities of Large Language Models a Mirage?, is quite technical but very well written and important for understanding the debate around AI.
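To make their metric-choice argument concrete, here is a toy numerical illustration (my own sketch, not taken from the paper or its code). It assumes a hypothetical model family whose per-token accuracy improves smoothly with scale, and shows how an all-or-nothing metric such as exact match on a multi-token answer can make that smooth improvement look like a sudden "emergent" jump.

```python
# Toy illustration of the metric-choice argument: suppose per-token accuracy p
# improves smoothly with model scale. A linear metric (per-token accuracy)
# then also improves smoothly, but a nonlinear all-or-nothing metric such as
# exact match on an L-token answer, roughly p**L, stays near zero and then
# appears to shoot up - looking "emergent" even though the underlying
# improvement is gradual.
L = 10  # answer length in tokens (illustrative choice)

# Hypothetical smooth improvement in per-token accuracy as models scale up.
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]

for p in per_token_accuracy:
    exact_match = p ** L  # probability that all L tokens are correct
    print(f"per-token accuracy {p:.2f} -> exact match {exact_match:.3f}")

# The per-token column rises steadily, while exact match climbs from about
# 0.001 to about 0.9: the apparent discontinuity comes from the metric,
# not from a fundamental change in model behaviour.
```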
#AIinEd
As the trailer says: "In this video, you will witness a fascinating discussion between Socrates, the Greek philosopher considered one of the greatest thinkers in history, and Bill Gates, the American entrepreneur and founder of Microsoft, one of the most important companies in the world of technology. Despite belonging to different eras, Socrates and Gates have a lot in common. Both are considered pioneers in their respective fields and have had a significant impact on society." It is interesting that the 'conversation' centres on the benefits (or not) of AI for education and learning.
Good critical and sceptical work on AI in education
I've commented before on the depth of division in commentary and research on the use of AI in education since the release of ChatGPT and subsequent applications based on Large Language Models. As the MIT Technology Review has reported, "Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network" and "by January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia." But the article then continues, "many teachers now believe, ChatGPT could actually help make education better.
Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more."
But rather than take sides in a polarised debate, Ben Williamson, who researches and writes about education, digital tech, data and policy at the University of Edinburgh, believes we need to develop "good critical and sceptical work on AI in education." In a series of toots (the Mastodon equivalent of tweets) on the Mastodon social network, he put forward the following ideas for research into AI in education.
- Is AI in education really doing what it claims? Do LLM-enabled chatbots improve learning? Do personalized learning algorithms actually personalize, or just cluster by historical patterns? Is it even "AI" or just some shitty stats?
- What's the political economy of AI in education? Even if LLM chatbots in EdTech are great, how does that link with wider digital economy developments? What policy enablers are in place to facilitate AI in education? What policy-influencing networks are forming around AIED? Why does it get so much funding, in which geographical regions, and from which sources?
- What's the science behind AI in education? AI and education have a 60-year history, taking in cybernetics, cognitivism and computing, then learning science, learning analytics, and education data science, with doses of behaviourism and nudge theory along the way, and now machine learning and neural networks - this is a hefty accumulation demanding much better understanding.
- What kind of infrastructuring of education does AI in education require? If you put LLMs into EdTech via APIs then you are building on an infrastructure stack to run your platform. That puts schools on the stack too. What are the implications, long-term, of these Big Tech lock-ins? Will schools be governed not just by EdTech but by Big Tech AI vendors and their APIs?
- What are the rights, justice, ethics and regulatory implications of AI in education? Can EdTech be designed for justice? Could algorithms be repurposed for reparative projects rather than discriminatory outcomes? Have AIED ethics frameworks been compromised? Is there scope for more democratic participation in building AI for education products? Can we be hopeful of better things from this technically remarkable but socially troubling tech?
"Just some thoughts to work on…", he concluded. These seem a pretty good starting point, not just for Higher Education, but for those of working on AI and Vocational Education and Training and in Adult Education, as we are doing in the European AI PIoneers Project.