Homework Apocalypse?

Catherine Breslin & Tania Duarte / Better Images of AI / AI silicon clouds collage / CC-BY 4.0

November marks two years since the release of OpenAI's ChatGPT large language model chatbot. Since then AI, or more specifically Generative AI, has dominated the discourse over the future of education. And of course it has spawned hundreds of projects, resulting in an increasing torrent of research results. Yet on one critical issue - does the use of AI improve learning? - there appears to be little consensus. This is probably because we have no good ways of measuring learning. Instead we use performance in tests and exams as a proxy for learning. And it's probably true to say that the debates over AI are turning up the heat on the use of such a proxy, just as they are on the essay as the dominant form of assessment in schools and universities.

Last week in his newsletter, One Useful Thing, Ethan Mollick talked about the use of AI, cheating and learning in an article entitled 'What comes after the Homework Apocalypse'. It is probably fair to say Ethan is a big fan of AI in education.

To be clear, AI is not the root cause of cheating. Cheating happens because schoolwork is hard and high stakes. And schoolwork is hard and high stakes because learning is not always fun and forms of extrinsic motivation, like grades, are often required to get people to learn. People are exquisitely good at figuring out ways to avoid things they don’t like to do, and, as a major new analysis shows, most people don’t like mental effort. So, they delegate some of that effort to the AI. In general, I am in favor of delegating tasks to AI (the subject of my new class on MasterClass), but education is different - the effort is the point.

He postulated that a fall in grades achieved by students in the USA between 2008 and 2017 had been caused by the increasing use of the Internet for homework: students were simply copying homework answers. And in an experiment in a high school in Turkey, students using GPT-4 saw homework grades go up but final exam grades fall. However, giving students ChatGPT with a basic tutor prompt, instead of having them use ChatGPT on their own, boosted homework scores without lowering final exam grades.

Ethan says this shows "we need to center teachers in the process of using AI, rather than just leaving AI to students (or to those who dream of replacing teachers entirely). We know that almost three-quarters of teachers are already using AI for work, but we have just started to learn the most effective ways for teachers to use AI."

He remains convinced of the value of Generative AI in education. The question now, he says, "is not whether AI will change education, but how we will shape that change to create a more effective, equitable, and engaging learning environment for all."

AI Competency Framework for teachers

At last week's Digital Learning Week 2024, UNESCO formally launched two AI Competency Frameworks, one for teachers and the other for students. These frameworks aim to guide countries in supporting students and teachers to understand the potential as well as the risks of AI, in order to engage with it in a safe, ethical and responsible manner in education and beyond.

Above is a copy of Tim Evans' popular poster summarizing the AI Competency Framework for Teachers. He says "I've taken the extensive, lengthy report and attempted to gather my take on the 10 key points, and areas of focus." Tim has also made a copy of the poster available on Canva.

AI: What do teachers want?

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI / CC-BY 4.0

A quick post in follow-up to my article yesterday on the proposals by the UK Department for Education to commission tech companies to develop an AI app for teachers to save them time. The Algorithm - a newsletter from MIT Technology Review - picked up on this today, saying "this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down the use of it in the classroom, these companies are coaching teachers how to use AI tools to cut down on time they spend on tasks like grading, providing feedback to students, or planning lessons. They’re positioning AI as a teacher’s ultimate time saver."

The article goes on to ask how willing teachers are to turn over some of their responsibilities to an AI model. The answer, they say, really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.

“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”

Then there is a host of not-so-menial tasks that teachers are more skeptical AI can excel at. They often come down to two core teaching responsibilities: lesson planning and grading. A host of companies offer large language models that they say can generate lesson plans that conform to different curriculum standards. Some teachers, including in some California districts, have also used AI models to grade and provide feedback for essays. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability.

Companies promising time savings for planning and grading “is a huge red flag, because those are core parts of the profession,” he says. “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback for creative skills like writing is controversial too. “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.” 

Should tech companies be given government documents to train their tools for education?

Photo by Andrew Neel on Unsplash

Last week the new UK government announced a new project that they say will enhance AI's ability to assist teachers in marking work and planning lessons.

The press release says:

  • Teaching standards, guidelines and lesson plans will form a new optimised content store which will train generative AI to make it more reliable for teachers in England
  • new project will bring teachers and tech companies together to develop and use trustworthy AI tools that can help mark homework and save teachers time
  • comes as new research shows parents want teachers to use AI to reduce out of hours work and boost time spent teaching children

The government is investing £4 million in the project to pool government documents, including curriculum guidance, lesson plans and anonymised pupil assessments, which will then be used by AI companies to train their tools so they generate accurate, high-quality content - like tailored, creative lesson plans and workbooks - that can be reliably used in schools.

The content store, they say, is targeted at technology companies specialising in education, to build tools which will help teachers mark work, create teaching materials for use in the classroom and assist with routine school admin.

There is not unanimous support for the announcement. UK teachers have been protesting about high workloads over a prolonged period, with substantial numbers leaving the profession. And amongst the flood of AI releases targeted at education, tools like TeacherMatic to support teachers have been relatively successful in the UK. But concerns include giving more funding, and ultimately power, to the tech industry, as well as providing them with student data, even if anonymised. Another question is whether the development of AI based on a national curriculum (and it is important to remember that Wales and Scotland have separate and different curricula) may lead towards an overly centralised curriculum, with AI providing less diverse learning materials.