Author: Graham Attwell
Generative AI and the Future of Education
Getting Digital Assessment Right
AI and Assessment
This podcast on AI and assessment was recorded for the Erasmus+ eAssessment project. It forms part of an online course comprising four main sections:
- Assessment Strategies
- Analysing Evidence
- Feedback and Planning
- Community of Practice
The course, hosted on the project Moodle site, is free of charge. Participants are welcome to follow the full course or to dip in and out of the sixteen different units.
The podcast discusses the impact of Generative AI and advanced chatbots like ChatGPT and Bing on education and how they are changing the way we assess learning.
Below is a transcript of the podcast.
Scene 1
In late November 2022, OpenAI dropped ChatGPT, a chatbot that can generate well-structured blocks of text several thousand words long on almost any topic it is asked about. Within days, the chatbot was widely denounced as a free essay-writing, test-taking tool that made it laughably easy to cheat on assignments. The response from schools and universities was swift and decisive. Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network. Others soon joined. By January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia. Several leading universities in the UK, including Imperial College London and the University of Cambridge, issued statements that warned students against using ChatGPT to cheat.
Scene 2
This initial panic from the education sector was understandable. ChatGPT, available to the public via a web app, can answer questions and generate unique essays that are difficult to detect as machine-written. It looked as if ChatGPT would undermine the way we test what students have learned, a cornerstone of education.
However, there is a growing recognition that advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more. Educational-tech companies including Duolingo and Quizlet, which makes digital flashcards and practice assessments used by half of all high school students in the US, have already integrated OpenAI’s chatbot into their apps. And OpenAI has worked with educators to put together a fact sheet about ChatGPT’s potential impact in schools. The company says it also consulted educators when it developed a free tool to spot text written by a chatbot (though its accuracy is limited). Turnitin – a leading provider of anti-plagiarism software, and already a controversial company – has designed a new plug-in intended to flag the possibility that assessed work was produced with AI. Its accuracy has also been questioned, and Turnitin had to backtrack on its decision not to let institutions opt out of the new AI plug-in.
Scene 3
“We need to be asking what we need to do to prepare young people—learners—for a future world that’s not that far in the future,” says Richard Culatta, CEO of the International Society for Technology in Education (ISTE), a nonprofit that advocates for the use of technology in teaching.
Take cheating. In the view of education researcher Helen Crompton, if ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot.
Scene 4
In response to the limitations of traditional assessment methods, there is a growing movement towards authentic and personalized assessment methods in courses. These methods include using real-life examples and contextually specific situations that are meaningful to individual students. Instructors may ask students to include their personal experience or perspectives in their writing. Students can also be asked to conduct analysis that draws on specific class discussions. Alternatives to essay-based assessment also need to be further explored. These methods can include using (impromptu) video presentations for assessments or using other digital forms such as animations. Through self-assessment or reflective writing, students could discuss their writing or thinking process. Additionally, peer evaluations or interactive assessment activities could be integrated into grading by engaging students in group discussions or other activities such as research and analysis in which students are expected to co-construct knowledge and apply certain skills. Instructors may consider placing an emphasis on assessing the process of learning rather than the outcome.
Scene 5
Most of the commentary and research into the impact of Generative AI so far comes from school and higher education. What about vocational education and training? Well, firstly I would argue that VET has been better at formative assessment than general education. And in many countries VET is based on students being able to demonstrate outcomes, usually through practical tests – this is much closer to authentic assessment. Of course, in many systems VET students are also required to produce evidence of knowledge as well as practice, and this may need some changes. But for vocational education and training, the challenge may not be so much how we are assessing but what we are assessing. Leaving aside the discussion about whether AI is going to threaten jobs and affect employment – and to be honest no one is really sure to what extent this will happen or how many new jobs may be generated – it is pretty clear that AI is going to impact the content and tasks of many jobs. Vocational Education and Training has a key purpose in preparing students for the world of work. If that world of work includes people using AI in their occupations, then that will form a key part of the curriculum. And authentic assessment means assessing how they are working with AI in the work tasks of the future.
Scene 6
Let's sum up. We need to change how we assess learning. Did ChatGPT kill assessments? They were probably already dead, and they’ve been in zombie mode for a long time.
Advancing theory in the age of artificial intelligence
The British Journal of Educational Technology (BJET) has published a special section on AI in education entitled 'Advancing theory in the age of artificial intelligence'.
In the introduction of the same name (which is free to access), the authors, Shane Dawson, Srecko Joksimovic, Caitlin Mills, Dragan Gašević, and George Siemens, set out the case for theory. "To address the need of effective deployment of AI systems in education", they say, "a theoretical lens is required to guide and direct both research and practice. Theory provides the guard rails to ensure that principles, values and trusted constructs shape the use of AI in educational settings, ensuring that values, existing research, concerns of multiple stakeholders and on-going contributions to science remain centre stage."
They go on to explain that "the papers in this special section argue for the criticality of theory in the design, development and deployment of AI in education. In so doing, we question the continued relevance and value of existing theories of learning when AI becomes prominent in classrooms. We call for new frameworks, models and ways of thinking; ones that include the presence of non-human agents that are more like an active partner than a simple technology, resulting in important questions about revising existing team-based and collaborative theories of learning."
There looks to be much of value in this special section, although it is a pity that only the introduction is accessible for free. From looking at the index, it seems that a lot of attention is being paid to research from Learning Analytics. And once more the research seems to be based on higher education. I wonder if the new learning theories they are looking for may be based in vocational education practice, particularly at the intersection between knowledge and practice.