Homework Apocalypse?

Catherine Breslin & Tania Duarte / Better Images of AI / AI silicon clouds collage / CC-BY 4.0

November marks two years since the release of OpenAI's ChatGPT large language model chatbot. Since then AI, or more specifically Generative AI, has dominated the discourse over the future of education. And of course it has spawned hundreds of projects, resulting in an increasing torrent of research results. Yet on one critical issue - whether the use of AI improves learning - there appears to be little consensus. This is probably because we have no good ways of measuring learning. Instead we use performance in tests and exams as a proxy for learning. And it is probably true to say that the debates over AI are turning up the heat on the use of such a proxy, just as they are on the essay as the dominant form of assessment in schools and universities.

Last week in his newsletter, One Useful Thing, Ethan Mollick talked about the use of AI, cheating and learning in an article entitled 'What comes after the Homework Apocalypse'. It is probably fair to say Ethan is a big fan of AI in education.

To be clear, AI is not the root cause of cheating. Cheating happens because schoolwork is hard and high stakes. And schoolwork is hard and high stakes because learning is not always fun and forms of extrinsic motivation, like grades, are often required to get people to learn. People are exquisitely good at figuring out ways to avoid things they don’t like to do, and, as a major new analysis shows, most people don’t like mental effort. So, they delegate some of that effort to the AI. In general, I am in favor of delegating tasks to AI (the subject of my new class on MasterClass), but education is different - the effort is the point.

He postulated that a fall in grades achieved by students in the USA between 2008 and 2017 had been caused by the increasing use of the Internet for homework: students were simply copying homework answers. And in an experiment in a high school in Turkey, students using GPT-4 saw their homework grades go up but their final exam grades fall. But giving students a basic tutor prompt for ChatGPT, instead of having them use ChatGPT on their own, boosted homework scores without lowering final exam grades.

Ethan says this shows "we need to center teachers in the process of using AI, rather than just leaving AI to students (or to those who dream of replacing teachers entirely). We know that almost three-quarters of teachers are already using AI for work, but we have just started to learn the most effective ways for teachers to use AI."

He remains convinced of the value of Generative AI in education. The question now, he says, "is not whether AI will change education, but how we will shape that change to create a more effective, equitable, and engaging learning environment for all."

AI Competency Framework for teachers

At last week's Digital Learning Week 2024, UNESCO formally launched two AI Competency Frameworks, one for teachers and the other for students. These frameworks aim to guide countries in supporting students and teachers to understand both the potential and the risks of AI, in order to engage with it in a safe, ethical and responsible manner in education and beyond.

Above is a copy of Tim Evans's popular poster summarizing the AI Competency Framework for Teachers. He says "I've taken the extensive, lengthy report and attempted to gather my take on the 10 key points, and areas of focus." Tim has also made a copy of the poster available on Canva.

AI: What do teachers want?

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI / CC-BY 4.0

A quick post in follow-up to my article yesterday on the proposals by the UK Department for Education to commission tech companies to develop an AI app for teachers to save them time. The Algorithm - a newsletter from MIT Technology Review - picked up on this today, saying "this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down the use of it in the classroom, these companies are coaching teachers how to use AI tools to cut down on time they spend on tasks like grading, providing feedback to students, or planning lessons. They’re positioning AI as a teacher’s ultimate time saver."

The article goes on to ask how willing teachers are to turn over some of their responsibilities to an AI model. The answer, they say, really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.

“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”

Then there are a host of not-so-menial tasks that teachers are more skeptical AI can excel at. They often come down to two core teaching responsibilities: lesson planning and grading. A host of companies offer large language models that they say can generate lesson plans that conform to different curriculum standards. Some teachers, including in some California districts, have also used AI models to grade and provide feedback for essays. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability. 

Companies promising time savings for planning and grading “is a huge red flag, because those are core parts of the profession,” he says. “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback for creative skills like writing is controversial too. “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.” 

Should tech companies be given government documents to train their tools for education?

Photo by Andrew Neel on Unsplash

Last week the new UK government announced a new project that they say will enhance AI's ability to assist teachers in marking work and planning lessons.

The press release says:

  • Teaching standards, guidelines and lesson plans will form a new optimised content store which will train generative AI to make it more reliable for teachers in England
  • new project will bring teachers and tech companies together to develop and use trustworthy AI tools that can help mark homework and save teachers time
  • comes as new research shows parents want teachers to use AI to reduce out of hours work and boost time spent teaching children

The government is investing £4 million in the project to pool government documents, including curriculum guidance, lesson plans and anonymised pupil assessments, which will then be used by AI companies to train their tools so they generate accurate, high-quality content - like tailored, creative lesson plans and workbooks - that can be reliably used in schools.

The content store, they say, is targeted at technology companies specialising in education to build tools which will help teachers mark work, create teaching materials for use in the classroom and assist with routine school admin. 

There is not unanimous support for the announcement. UK teachers have been protesting about high workloads over a prolonged period, with substantial numbers leaving the profession. And amongst the flood of AI releases targeted at education, tools to support teachers like TeacherMatic have been relatively successful in the UK. But concerns include giving more funding, and ultimately power, to the tech industry, as well as providing it with student data, even if anonymised. Another question is whether the development of AI based on a national curriculum (and it is important to remember that Wales and Scotland have separate and different curricula) may lead towards an overly centralised curriculum, with AI providing less diverse learning materials.

Seeking the Soul of Open Education in the Era of Gen AI

An intense debate has opened up on the Creative Commons Open Education email list. This extends discussions, which have been brewing for some time, about whether Open Education practitioners should support or fight against Large Language Model developers scraping web publications as Gen AI training data without attribution or explicit permission.

This week the debate heated up following the advertisement of a webinar featuring a presentation by Dave Wiley:

The University of Regina's OEP Program invites you to a special online presentation by Dr. David Wiley. Dr. Wiley is widely recognized as one of the founders of and key thinkers surrounding the open movement in education.

Date: Thursday September 19, 2024

Abstract:

For over 25 years, the primary goal of the open education movement has been increasing access to educational opportunity. And from the beginning of the movement the primary tactic for accomplishing this goal has been creating and sharing OER. However, using generative AI is a demonstrably more powerful and effective way to increase access to educational opportunity. Consequently, if we are to remain true to our overall goal, we must begin shifting our focus from OER to generative AI.

There was near-instant pushback on the list. Heather Ross wrote:

I’m really troubled by so many in the open movement seeing GenAI as a natural fit with OER. OER aligns with several of the UN SDGs and is being used to integrate sustainability into curriculum, teaching about how all disciplines are tied to the SDGs. GenAI is an environmental nightmare. OER is being used to integrate EDI and Indigenization into curriculum. GenAI, programmed by those of dominant groups, often fails to represent or misrepresents members of marginalized communities. Taking what isn’t yours to create something new without giving credit, having permission, or considering the impact on others isn’t innovation or acting in the spirit of open. It’s colonization. OER has always called for recognition of the work’s creators and contributors and gratitude for their willingness to share it openly. Any gratitude toward GenAI-created work that was taught on copyrighted works against the copyright holder’s permission will ring hollow. During my comprehensive exam, a committee member asked me what the difference between OER and Napster was. At the time, that was easy to answer. Most OER was created by authors who willingly released their work with an open license. Napster was the sharing of music without the artist’s permission. If I were asked that question now, it would be a lot harder to answer.

And Dave Wiley came back to say:

It feels like we spent the second full decade of the OER movement, from 2008 - 2018, running non-stop workshops about copyright and the Creative Commons licenses. We had to spend ten years that way because there are certain fundamentals about copyright and licensing that a person has to understand before they can participate in the OER movement in a way that goes beyond reusing content created by others.

The same is true for generative AI. People who want to participate as something more than reusers of generative AI tools created by others will need at least some proficiency in prompt engineering, retrieval augmented generation, fine-tuning, and other topics. I agree that smaller models running locally is where this all needs to go eventually, which means additional understanding will be needed in techniques like quantizing, pruning, and distilling the knowledge of larger models into smaller ones so these models can fit (and run) on edge devices like consumer laptops and phones. 

There are strong analogs between the revise and remix potentials created by openly licensed content and the revise and remix potentials created by openly licensed model weights. And the overall educational potential is far greater for open weights than open content. But without some baseline understanding of how generative AI works it will be difficult to participate (productively) in these kinds of conversations. It looks like we might have another decade of dry, technical, arcane professional development workshops ahead of us. :)

This is some of the territory I'm going to cover in the talk in a couple of weeks.
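Some of the techniques Wiley lists - quantizing models so they fit on edge devices, for instance - may be unfamiliar to open educators. As a rough illustration (my own sketch, not drawn from either post, and far simpler than real toolchains), post-training quantization boils down to storing weights as small integers plus a shared scale factor:

```python
# A minimal sketch of post-training 8-bit quantization: store float weights
# as int8 values plus one scale factor, trading a little precision for a
# roughly 4x reduction in memory. Real tools are far more sophisticated.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to integers in [-127, 127] with a shared scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from the integer representation."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.02]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The same idea, applied per layer and combined with pruning and distillation, is what lets large open-weights models run on consumer laptops and phones.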

Stephen Downes weighed in with a post on his blog entitled 'What is the Soul of Open Education?'.

I've had my disagreements with Wiley over the years but we are in agreement on this point. Now what it means to say "increase access to educational opportunity" may be another point of contention; creating startups and making money isn't my idea of progress. But we agree on the potential of AI...

If it takes (AI) a fraction of the resources it used to take to create a useful and usable OER, even if it has to be corrected for misrepresentation, then there is far more opportunity for people in under-represented groups to create resources where they see themselves reflected in the materials being used in learning. AI-assisted transcription and translation, resource recommendation, community formation and more can also help members of marginalized groups.

There were many more contributions and I am sure we have only seen the start of this debate. But it seems a very important one for the future of Open Education and for Open Education practitioners wrestling with AI.

More to follow.

The Creative Commons Open Education Platform is a space for open education advocates and practitioners to identify, plan and coordinate multi-national open education content, practices and policy activities to foster better sharing of knowledge.

This platform is open to all interested people working in open education.

You can join the email list at cc-openedu [at] googlegroups [dot] com