A quick post following up on my article yesterday about the UK Department for Education's proposal to commission tech companies to develop an AI app to save teachers time. The Algorithm, a newsletter from MIT Technology Review, picked up on this today, saying "this year, more and more educational technology companies are pitching schools on a different use of AI. Rather than scrambling to tamp down the use of it in the classroom, these companies are coaching teachers how to use AI tools to cut down on time they spend on tasks like grading, providing feedback to students, or planning lessons. They’re positioning AI as a teacher’s ultimate time saver."
The article goes on to ask how willing teachers are to turn over some of their responsibilities to an AI model. The answer really depends on the task, according to Leon Furze, an educator and PhD candidate at Deakin University who studies the impact of generative AI on writing instruction and education.
“We know from plenty of research that teacher workload actually comes from data collection and analysis, reporting, and communications,” he says. “Those are all areas where AI can help.”
Then there are a host of not-so-menial tasks that teachers are more skeptical AI can excel at. These often come down to two core teaching responsibilities: lesson planning and grading. A number of companies offer large language models that they say can generate lesson plans conforming to different curriculum standards. Some teachers, including in some California districts, have also used AI models to grade essays and provide feedback on them. For these applications of AI, Furze says, many of the teachers he works with are less confident in its reliability.
Companies promising time savings for planning and grading “is a huge red flag, because those are core parts of the profession,” he says. “Lesson planning is—or should be—thoughtful, creative, even fun.” Automated feedback for creative skills like writing is controversial too. “Students want feedback from humans, and assessment is a way for teachers to get to know students. Some feedback can be automated, but not all.”
Last week the new UK government announced a project that it says will enhance AI's ability to assist teachers in marking work and planning lessons.
The press release says:
Teaching standards, guidelines and lesson plans will form a new optimised content store which will train generative AI to make it more reliable for teachers in England
new project will bring teachers and tech companies together to develop and use trustworthy AI tools that can help mark homework and save teachers time
comes as new research shows parents want teachers to use AI to reduce out of hours work and boost time spent teaching children
The government is investing £4 million in the project to pool government documents, including curriculum guidance, lesson plans and anonymised pupil assessments, which will then be used by AI companies to train their tools so that they generate accurate, high-quality content, like tailored, creative lesson plans and workbooks, that can be reliably used in schools.
The content store, they say, is targeted at technology companies specialising in education to build tools which will help teachers mark work, create teaching materials for use in the classroom and assist with routine school admin.
Support for the announcement is not unanimous. UK teachers have been protesting about high workloads for a prolonged period, with substantial numbers leaving the profession, and amongst the flood of AI releases targeted at education, tools like teachermatic that support teachers have been relatively successful in the UK. But concerns include giving more funding, and ultimately power, to the tech industry, as well as providing it with student data, even if anonymised. Another question is whether developing AI based on a national curriculum (and it is important to remember that Wales and Scotland have separate and different curricula) may lead towards an overly centralised curriculum, with AI providing less diverse learning materials.
An intense debate has opened up on the Creative Commons Open Education email list. It extends discussions that have been brewing for some time about whether Open Education practitioners should support or oppose Large Language Model developers scraping web publications, without attribution or explicit permission, as training data for Gen AI.
This week the debate heated up following the advertisement of a webinar featuring a presentation by Dave Wiley:
The University of Regina's OEP Program invites you to a special online presentation by Dr. David Wiley. Dr. Wiley is widely recognized as one of the founders of and key thinkers surrounding the open movement in education.
Date: Thursday September 19, 2024
Abstract:
For over 25 years, the primary goal of the open education movement has been increasing access to educational opportunity. And from the beginning of the movement the primary tactic for accomplishing this goal has been creating and sharing OER. However, using generative AI is a demonstrably more powerful and effective way to increase access to educational opportunity. Consequently, if we are to remain true to our overall goal, we must begin shifting our focus from OER to generative AI.
There was near-instant pushback on the list. Heather Ross wrote:
I’m really troubled by so many in the open movement seeing GenAI as a natural fit with OER. OER aligns with several of the UN SDGs and is being used to integrate sustainability into curriculum, teaching about how all disciplines are tied to the SDGs. GenAI is an environmental nightmare. OER is being used to integrate EDI and Indigenization into curriculum. GenAI, programmed by those of dominant groups, often fails to represent or misrepresents members of marginalized communities.

Taking what isn’t yours to create something new without giving credit, having permission, or considering the impact on others isn’t innovation or acting in the spirit of open. It’s colonization. OER has always called for recognition of the work’s creators and contributors and gratitude for their willingness to share it openly. Any gratitude toward GenAI-created work that was trained on copyrighted works without the copyright holder’s permission will ring hollow.

During my comprehensive exam, a committee member asked me what the difference between OER and Napster was. At the time, that was easy to answer. Most OER was created by authors who willingly released their work with an open license. Napster was the sharing of music without the artist’s permission. If I were asked that question now, it would be a lot harder to answer.
And Dave Wiley came back to say:
It feels like we spent the second full decade of the OER movement, from 2008 to 2018, running non-stop workshops about copyright and the Creative Commons licenses. We had to spend ten years that way because there are certain fundamentals about copyright and licensing that a person has to understand before they can participate in the OER movement in a way that goes beyond reusing content created by others.

The same is true for generative AI. People who want to participate as something more than reusers of generative AI tools created by others will need at least some proficiency in prompt engineering, retrieval augmented generation, fine-tuning, and other topics. I agree that smaller models running locally is where this all needs to go eventually, which means additional understanding will be needed in techniques like quantizing, pruning, and distilling the knowledge of larger models into smaller ones so these models can fit (and run) on edge devices like consumer laptops and phones.

There are strong analogs between the revise and remix potentials created by openly licensed content and the revise and remix potentials created by openly licensed model weights. And the overall educational potential is far greater for open weights than open content. But without some baseline understanding of how generative AI works it will be difficult to participate (productively) in these kinds of conversations. It looks like we might have another decade of dry, technical, arcane professional development workshops ahead of us. :) This is some of the territory I'm going to cover in the talk in a couple of weeks.
I've had my disagreements with Wiley over the years, but we are in agreement on this point. What it means to "increase access to educational opportunity" may be another point of contention; creating startups and making money isn't my idea of progress. But we agree on the potential of AI...
If, with AI, it takes a fraction of the resources it used to take to create a useful and usable OER, even if the result has to be corrected for misrepresentation, then there is far more opportunity for people in under-represented groups to create resources where they see themselves reflected in the materials being used in learning. AI-assisted transcription and translation, resource recommendation, community formation and more can also help members of marginalized groups.
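Wiley's point about quantization and edge devices is easier to picture with a short sketch. The snippet below is a minimal illustration, assuming the Hugging Face transformers and bitsandbytes libraries and a CUDA-capable GPU; the model identifier is a placeholder rather than a recommendation, and this is just one common way of shrinking open weights so they can run on local hardware.

```python
# Minimal sketch: load an open-weights model with its parameters quantized to
# 4 bits at load time, so it needs a fraction of the memory of full precision.
# Assumes transformers, accelerate and bitsandbytes are installed and a CUDA
# GPU is available. The model id below is a placeholder, not an endorsement.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "example-org/small-open-model"  # placeholder open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # place the quantized layers on the available GPU
)

prompt = "Suggest three discussion questions about openly licensed textbooks."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On machines without a GPU the same idea is usually applied through runtimes such as llama.cpp, but the principle Wiley describes, making open weights small enough to run on everyday hardware, is the same.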
There were many more contributions and I am sure we have only seen the start of this debate. But it seems a very important one for the future of Open Education and for Open Education practitioners wrestling with AI.
More to follow.
The Creative Commons Open Education Platform is a space for open education advocates and practitioners to identify, plan and coordinate multi-national open education content, practices and policy activities to foster better sharing of knowledge.
This platform is open to all interested people working in open education.
There is growing interest in using and developing Open Source Software approaches to Generative AI for teaching and learning in education, and there is an explosion of models claiming to be Open Source (see, for example, Hugging Face). But Gen AI is a new form of software, and there have been difficulties in agreeing on a definition. This week the Open Source Initiative released a draft definition.
In the preamble they explain why it is important.
Open Source has demonstrated that massive benefits accrue to everyone when you remove the barriers to learning, using, sharing and improving software systems. These benefits are the result of using licenses that adhere to the Open Source Definition. The benefits can be summarized as autonomy, transparency, frictionless reuse, and collaborative improvement.
Everyone needs these benefits in AI. We need essential freedoms to enable users to build and deploy AI systems that are reliable and transparent.
The following text is taken from their website.
What is Open Source AI
When we refer to a “system,” we are speaking both broadly about a fully functional structure and its discrete structural elements. To be considered Open Source, the requirements are the same, whether applied to a system, a model, weights and parameters, or other structural elements.
An Open Source AI is an AI system made available under terms and in a way that grant the freedoms[1] to:
Use the system for any purpose and without having to ask for permission.
Study how the system works and inspect its components.
Modify the system for any purpose, including to change its output.
Share the system for others to use with or without modifications, for any purpose.
These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system.
Preferred form to make modifications to machine-learning systems
The preferred form of making modifications to a machine-learning system is:
Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.
For example, if used, this would include the training methodologies and techniques, the training data sets used, information about the provenance of those data sets, their scope and characteristics, how the data was obtained and selected, the labeling procedures and data cleaning methodologies.
Code: The source code used to train and run the system, made available with OSI-approved licenses.
For example, if used, this would include code used for pre-processing data, code used for training, validation and testing, supporting libraries like tokenizers and hyperparameters search code, inference code, and model architecture.
Weights: The model weights and parameters, made available under OSI-approved terms[2].
For example, this might include checkpoints from key intermediate stages of training as well as the final optimizer state.
Open Source models and Open Source weights
For machine learning systems,
An AI model consists of the model architecture, model parameters (including weights) and inference code for running the model.
AI weights are the set of learned parameters that overlay the model architecture to produce an output from a given input.
The preferred form to make modifications to machine learning systems also applies to these individual components. “Open Source models” and “Open Source weights” must include the data information and code used to derive those parameters.
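To make the separation of components the definition describes concrete, here is a small hypothetical sketch in PyTorch: the class is the architecture (code), the saved state dict is the weights (learned parameters), and the last few lines are the inference code. The tiny network and the weights file name are invented purely for illustration and are not part of the OSI text.

```python
# Hypothetical sketch of the components the draft definition names for a model:
# architecture, weights/parameters, and inference code.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    """1. Architecture: the structure of the model, defined in source code."""

    def __init__(self, n_features: int = 16, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier()

# 2. Weights: the learned parameters, usually distributed as a separate artefact.
state = torch.load("tiny_classifier_weights.pt", map_location="cpu")  # hypothetical file
model.load_state_dict(state)

# 3. Inference code: turning an input into an output with the loaded parameters.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=-1)
print(prediction.item())
```

Under the draft definition, calling such a release "Open Source" would additionally require the data information and the training code used to derive those parameters, not just the weights themselves.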
Of course this is only a draft and there will be disagreements. A particularly tricky issue is whether Large Language Models should be allowed to be trained on data scraped from the web without permission or attribution.
Following extensive expert consultations and discussions with parliamentarians, UNESCO has released a consultation paper in English on AI governance for public comment.
UNESCO encourages stakeholders, including parliamentarians, legal experts, AI governance experts and the public, to review and provide feedback on the different regulatory approaches for AI. You can read the consultation paper here.
The Consultation Paper on AI Regulation is part of a broader effort by UNESCO, Inter-Parliamentary Union and Internet Governance Forum’s Parliamentary Track to engage parliamentarians globally and enhance their capacities in evidence-based policy making for AI.
The Paper has been developed through:
A literature review on AI regulation in different parts of the world.
A discussion on “The impact of AI on democracy, human rights and the rule of law” with parliamentarians from around the world at the IPU Assembly in Geneva, 23-27 March 2024.
A capacity-building workshop co-designed and co-facilitated by UNESCO on 25 March 2024 at the IPU in Geneva, and three webinars on the subject organized by IPU, UNESCO and the Internet Governance Forum (IGF) for parliamentarians, to inform the development of the discussion paper.
A discussion with Members of Parliament at the Regional Summit of Parliamentarians on Artificial Intelligence in Latin America, held in Buenos Aires on 13 and 14 June 2024.