TeacherMatic

The AI Pioneers project, which is researching and developing approaches to the use of AI in vocational and adult education in Europe, is currently working on a Toolkit that includes analysis of a considerable number of AI tools for education. Indeed, one problem is that so many new tools and applications are being released that it is hard for organisations to know which ones they should try out.

In the UK, Jisc has been piloting and evaluating a number of different applications and tools in vocational colleges. Its latest report is about TeacherMatic, which appears to have been adopted in many UK Further Education Colleges. TeacherMatic is a generative AI-powered platform tailored for educators. It provides an extensive toolkit featuring more than 50 tools designed to simplify the creation of educational content. These tools help in generating various teaching aids, such as lesson plans, quizzes, schemes of work and multiple-choice questions, without users needing expertise in prompt engineering. Instead, educators can issue straightforward instructions to produce or adapt existing resources, including presentations, Word documents, and PDFs. The main goal of TeacherMatic, the developers say, is to enhance teaching efficiency and lighten educators’ workloads, allowing teachers to dedicate more time to student interaction and less to repetitive tasks.

For the pilot, each participating institution received 50 licenses for 12 months, enabling around 400 participants to actively engage with and evaluate the TeacherMatic platform.

The summary of the evaluation of the pilot is as follows.

The pilot indicates that TeacherMatic can save users time and create good-quality resources. Participants commended the platform for its ease of use, efficient content generation, and benefits to workload. Feedback also highlighted areas for improvement and suggested new features, which the TeacherMatic team were very quick to take on board and, where possible, implement.

Participants found TeacherMatic to be user-friendly, particularly praising its easy-to-use interface and simple content generation process. The platform was noted for its instructional icons, videos, and features such as Bloom’s taxonomy, which assist in creating educational content efficiently. However, suggestions for enhancement included the ability to combine multiple generators into a single generator. It also remains essential for users to evaluate the generated content, ensuring it is suitable and accessible for the intended audience.

TeacherMatic was well received across institutions for its capabilities, proving especially beneficial for new teaching staff and those adapting to changing course specifications. Feedback showed that TeacherMatic is particularly valuable for those previously unfamiliar with generative AI. Pricing was generally seen as reasonable, aligning with most participants’ expectations.

TeacherMatic has been well-received, with a majority of participants recognising its benefits and expressing a willingness to continue using and recommending the tool.

AI: How do we make explainability open to practitioners and practice?

Over the last six or so weeks I have been doing a series of interviews about AI in education for the AI Pioneers project. One issue was who to ask. Just about everyone interested in the future use of technology for education has something to say about AI right now. I settled on two main criteria. Firstly, to ask people I had worked with before and whose opinions I valued. Secondly, I wanted to include people who had worked on continuing professional development for teachers and trainers. The interviews, together with a survey, form the main part of a Work Package in the AI Pioneers project looking at the competences required by teachers and trainers for working with AI, with the objective of extending the EU DigCompEdu framework.

This week I am going to publish the (edited) transcripts of four of the interviews. I will follow this up next week with comments on some of what I think are the major issues, together with a podcast where you can listen to ideas from those interviewed in their own voice.

The first interview is with Helen Beetham.

Helen Beetham is an experienced consultant, researcher and educator, based in the UK, and working mainly in the field of digital education in the university sector. Publications include: Rethinking pedagogy for a digital age (Routledge 2006, 2010 and 2019, with Rhona Sharpe), Rethinking learning for a digital age, numerous book chapters and peer reviewed academic papers, including recently an edited Special issue of Learning, Media and Technology (2022). Current research centres on critical pedagogies of technology, and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

She has advised global universities and international bodies on their digital education strategies, and led invited workshops at over 40 universities around the world as well as working on the development of DigCompEdu. Her Digital Capabilities framework is widely used in UK Education, in Health Education, and in other national education systems.

Helen went to university to study AI! She is currently writing up her research into digital criticality in education and writing a substack, 'Imperfect Offerings', focused on the challenges of Generative AI.

Digital Literacy, Capabilities and Ethics

Helen explained that her work on digital literacy was based on a Capabilities Framework intended as a “steady state” that could nevertheless be updated to address changing technologies. She said that, even more than most, teachers and trainers lack digital capabilities, and that technologies have to be available without being subject to big technology companies. They need online materials and learning routes and pathways, not just provision imposed from above. Who can build Foundation Models, she asked, given their complexity and cost? There have to be choices for colleges and collectives. Education needs its own language models, foundation models and data. AI cannot be ethical if there is no choice.

This means we need capability in the education sector to build ethical versions of foundation models. Although she has always considered pedagogy to come first in the use of technology for teaching and learning, today we need technical models within the education sector. We need to stop the brain drain from the sector and mobilise the community for development. We can reduce the risks of AI by using open development models.

Helen asked what it means to be ethical. She pointed to other developments in education, such as the campaign to decolonise education.

Conversations and Practice

The contribution of frameworks like the Capability Framework or DigCompEdu is that they lead to conversations, and those conversations lead to changing practice. Bodies like Jisc and the European Union create initiatives and spaces for those conversations.

Explainability

Can we open the black box around generative AI, she asked. Explainability is a good goal, but how do we make explainability open to practitioners and practice? She has been trying to develop explainability by looking at the labour situation and the restructuring of employment through AI.

A further need for explainability relates to the meaning of words and concepts like learning, intelligence and training. All the models claim to use these processes, but there is a need to explain just what they mean in the development of foundation models and how such processes are applied in AI.

Wikipedia, weightings and AI

In a wide-ranging interview, another issue raised was the use of Wikipedia in large language models. How were weightings derived for data from Wikipedia, and is Wikipedia in fact over-influential in the language models? What should Wikipedia do now – should it develop its own AI-powered front end?

Future of employment

Looking at the future of employment, it seems likely that fewer skills may be needed to do the same things that were undertaken by skilled workers prior to AI. Yet universities are assuming that they can train students to take up high-level roles that GPT cannot touch. It is precisely these roles that automation is reducing, with increasing impact on the professional middle classes. It seems more likely that GPT automation will have less effect on technical skilled work, especially those tasks which require unpredictable activities in less certain contexts – jobs that are based on vocational education and training.

You can subscribe to Helen Beetham's substack, 'Imperfect Offerings' at https://helenbeetham.substack.com/.