Public values are key to efficient education and research

For those of us who have been working on AI in education, it is a bit of a strange time. On the one hand, it is no longer difficult to interest policy makers, managers, or teachers and trainers in AI. But on the other hand, at the moment AI seems to be conflated with the hype around ChatGPT. As one senior policy person said to me yesterday: "I hadn't even heard of Generative AI models until two weeks ago."

And of course there is a lot more happening, or about to happen, not just in AI but in technological development and innovation generally, that is likely to impact education. So much, in fact, that it is hard to keep up. But I think it is important to keep up and not just leave developing technology to the tech researchers. That is why I am ultra impressed with the new publication from the Netherlands SURF network - 'Tech Trends 2023'.

In the introduction they say:

This trend report aims to help us understand the technological developments that are
going on around us, to make sense of our observations, and to inspire. We have chosen
the technology perspective to provide an overview of signals and trends, and to show
some examples of how the technology is evolving.

SURF scanned multiple trend reports and market intelligence services to identify the big technology themes. They continue:

We identified some major themes: Extended Realities, Quantum, Artificial intelligence,
Edge, Network, and advanced computing. We believe these themes cover the major technological developments that are relevant to research and education in the coming years.

But what I particularly like is that for each trend there is a link to public values as well as a readiness level. The values are taken from the diagram above. As SURF say, "public values are key to efficient education and research."

ChatGPT and Assessment


In the last few weeks, the discussions about technology for education and learning have been dominated by the impact of GPT-3 on the future of education – a discussion which Alexandra Mihai, in a blog post entitled Let's get off the fear carousel, characterises as "hysteria".

"The way I see it," she says, "academia's response to ChatGPT is more about academic culture than about the tool itself." As she points out, AI tools are not new and are already in use in a wide range of applications commonly used in education. But probably the greatest concern, or even panic, about ChatGPT relates to assessment.

Alexandra draws attention to seven things that the current debate reveals about our academic culture. Although she is focused on Higher Education, much the same applies to Vocational Education and Training, although I think that many teachers and trainers in VET may be more open to AI, given that it already plays a considerable role in the jobs vocational students are being trained for.

Her seven things are:

  • Lots of pressure/high workloads: regardless of our positions, everyone seems to be under a great amount of pressure to perform;
  • Non-transparent procedures: university administration is very often a black box with missing or inefficient communication channels;
  • Lack of trust in students: this very harmful narrative is unfortunately a premise for many educators, not entirely (or not always) out of bad will but rather stemming from a teacher-centred paradigm which emphasises the idea of control;
  • Stale quality assurance (QA) policies: quality assurance in education is a complex mix of many factors (including faculty professional development, technology integration and academic integrity policies, to name just the more relevant ones for the current debate);
  • Inertia: the biggest enemy, in her opinion. Responding to change in a timely and efficient manner is not one of the strong points of HE institutions;
  • Technological determinism: the only thing that is, she feels, equally if not more dangerous than banning technology is thinking it can solve all problems.

Alexandra wants us to "take a moment to actually talk to and really listen to our students". She says: "All this will help us understand them better and design learning experiences that make sense to them. Not necessarily assignments where they cannot cheat, but activities and assignments they genuinely want to engage in because they see them as relevant for their present and their future."

In an earlier blog she invites us to reflect on two questions.

Firstly, how do you balance three assessment purposes – students' expertise development, backward design and constructive alignment, and feasibility for students, teachers and the organisation?

Secondly, how do you take into account the three principles for optimally balancing different assessment purposes, in order to guide students towards professional independence?

There is no shortage of resources on ChatGPT in education – a list which is growing by the day. Here are five that Alexandra suggests:

  • Assessment in the age of artificial intelligence – great article by Zachari Swiecki et al., with a lot of insights into how we can rethink assessment in a meaningful way;
  • Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT – interesting read by Debby Cotton et al., suggesting a range of strategies that universities can adopt to ensure these tools are used ethically and responsibly;
  • Academic Integrity? – insightful reflection by Matthew Cheney on the concept of academic integrity and its ethical implications;
  • Critical AI: Adapting college writing for the age of language models such as ChatGPT: Some next steps for educators, by Anna Mills and Lauren Goodlad – a useful collection of practices and resources on language models, text generators and AI tools;
  • ChatGPT Advice Academics Can Use Now – very useful advice from various academics, compiled by Susan D'Agostino, on how to harness the potential and avert the risks of AI technology.