LLMs are a cultural technology

Yutong Liu & Kingston School of Art / Better Images of AI / Exploring AI / CC-BY 4.0

John Naughton, writing in the Guardian, says:

Assessment in the humanities in the time of LLMs requires, "if not a change of heart, two changes of mindset.

The first is an acceptance that LLMs – as the distinguished Berkeley psychologist Alison Gopnik puts it – are “cultural technologies”, like writing, print, libraries and internet search. In other words, they are tools for human augmentation, not replacement.

Second, and more importantly perhaps, is a need to reinforce in students’ minds the importance of writing as a process."

GenAI and Assessment

As a recent publication from the Universitat Oberta de Catalunya points out, Artificial Intelligence presents an opportunity (or an excuse) to transform assessment, curriculum, teaching, personalization and teaching competencies. This is especially so in relation to assessment, given widespread concern in the academic world about the near impossibility of detecting whether or not a student has used generative AI in an assignment.

The Universitat Oberta de Catalunya article explores the potential of continuous assessment aimed at self-regulation of learning. It suggests changing the assessment approach, moving from criteria focused on the assessment of the result to criteria focused on the process of development of the activity by the students.

    Furthermore, it advocates designing continuous assessment activities as part of the same learning sequence, with relationships of dependency and complementarity, instead of discrete tests, and focusing the activities on the development of competencies and on the assessment of each student's progress and reflection on the learning process.

    Leon Furze is a prolific contributor to LinkedIn and describes his work as "Guiding educators through the practical and ethical implications of GenAI. Consultant & Author | PhD Candidate."

    Writing from the perspective of education in Australia, he says:

    When it comes to GenAI, much of the conversation in education has been focused on academic achievement, perceived threats to academic integrity, and the risk that this technology poses to written assessments. I think that vocational education actually offers some fantastic alternative forms of assessment which are less vulnerable to generative artificial intelligence. If you’re not familiar with vocational education, assessments are often incredibly rigorous, sometimes to the point where the paperwork on evaluation and assessment is significantly longer than the assessment itself.

    Vocational training, by nature, is practical and geared around skills which are needed for the particular job role or discipline being studied. Mainstream education, by contrast, is focused predominately on subjects and content.

    Furze provides examples of different types of assessment in vocational education and training:

    • Observation checklists
    • Role plays
    • Scenarios
    • Workplace activities
    • Reports from employers

    He has published a free 60-page ebook, Rethinking Assessment for GenAI, which he says covers "everything from ways to update assessments, to the reasons I advise against AI detection tools."

    AI and Assessment

    Image by Mohamed Hassan from Pixabay

    Maybe the panic over the impact of AI on assessment in education has died down a little, but it has been useful in that it has focused attention on the purpose of assessment and the pedagogic approaches to it. Simon Brookes, Executive Dean of the Faculty of Creative & Cultural Industries at the University of Portsmouth in the UK, has started a new blog series on Rethinking Assessment in the Age of AI. His latest post features insights from the University of Melbourne's Centre for the Study of Higher Education. Their recent guide, "Rethinking Assessment in Response to AI" (pdf), offers a thoughtful approach to redesigning assessments that maintain academic integrity without sacrificing pedagogical value, he says.

    The guide includes seven critical strategies for improving assessment design and integrity:

    1. Shift from product to process: Focus on evaluating students' thinking processes and problem-solving approaches rather than just the final output. This could involve asking students to maintain learning journals, document their research process, or explain their reasoning in solving problems.

    2. Incorporate evaluative judgement tasks: Ask students to review or evaluate work against set criteria, encouraging higher-order thinking skills. This might include peer review exercises, critiquing published works, or assessing case studies against industry standards.

    3. Design nested or staged assessments: Create assignments that build on each other throughout the semester, allowing for feedback and adaptation. For example, a research project could be broken down into proposal, literature review, draft, and final submission stages, each informing the next.

    4. Diversify assessment formats: Use various modalities, such as videos, blogs, podcasts, and animations, which are less susceptible to AI generation. This not only makes cheating more difficult but also allows students to develop a broader range of communication skills.

    5. Create authentic, context-specific assignments: Design tasks that mirror real-world scenarios or are highly specific to the subject matter. This could involve analysing local case studies, solving problems specific to your discipline, or applying theories to current events.

    6. Include more in-class and group assignments: Incorporate collaborative learning and reduce opportunities for individual cheating. This might involve group presentations, debates, or problem-solving sessions during class.

    7. Use oral interviews: Test understanding through verbal responses to unpredictable prompts, making it difficult to use AI. This could range from viva voce examinations to informal discussions about a student's work process.

    Generative AI, Assessment and the Future of Jobs and Careers

    Ten days ago, I was invited to give an online presentation as part of a series on AI for teachers and researchers in Kazakhstan. When I talked with the organisers, they asked if I could speak about AI and Assessment and AI and Careers. The two subjects seemed hard to combine, but I prepared a presentation linking them together and somehow it made sense. The presentation used a version of Zoom I had not seen before, which enables interpretation. My slides were translated into Russian. This was a little stressful, as I was changing the slides in Russian online and in English on a laptop at the same time. It was even more stressful when my TP-Link connection to the internet went down after two minutes and I had to change rooms to get better connectivity!

    Anyway, it seemed to go well and there were good questions from the audience of about 150. Given that the recording was in Russian, I made a new English version. We are still experimenting with the best way to record an audio track over slide decks and provide a Spanish translation, so apologies that some of these slides are not perfect. But I hope you get the message.