Teacher’s Digital Literacy

Nacho Kamenov & Humans in the Loop / Better Images of AI / A trainer instructing a data annotator on how to label images / CC-BY 4.0

This definition of AI literacy for teachers was posted on LinkedIn by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO.

  1. Cultivate a critical view that AI is human-led and that the corporate and individual decisions of AI creators have a profound impact on human autonomy and rights, becoming aware of the importance of human agency when evaluating and using AI tools.
  2. Develop a basic understanding of typical ethical issues related to AI and acquire basic knowledge of ethical principles for human-AI interactions, including protection of human rights and human agency, promotion of linguistic and cultural diversity, and advocacy for inclusion and environmental sustainability.
  3. Acquire basic conceptual knowledge of AI, including the definition of AI, basic knowledge of how an AI model is trained and the associated data and algorithms, and the main categories and examples of AI technologies, as well as basic skills in examining the appropriateness of specific AI tools for education and operational skills in using validated AI tools.
  4. Identify and leverage the pedagogical benefits of AI tools to support subject-specific lesson planning, teaching and assessments.
  5. Explore the use of AI tools to enhance their professional learning and reflective practices, supporting assessment of learning needs and personal learning pathways in the rapidly evolving educational landscape.

AI Pioneers Action Lab at the EDEN Digital Learning Europe Conference in Graz

Last week, at the EDEN Digital Learning Europe Conference in Graz, the AI Pioneers project organised an Action Lab. I wasn't quite sure what the idea of an Action Lab was, but we agreed that it seemed to imply some involvement and participation from those attending. Neither did we know exactly how many would attend, given that, as in other large conferences, there were a considerable number of parallel sessions.

So - in the end we decided we would deliver a very short presentation as an introduction to the AI Pioneers project and then split into groups to discuss three key questions:

  • The skills and competences required for using Generative AI for teaching and learning
  • The impact of Generative AI on education
  • Guidelines for future projects on using AI for learning in education

In the end nearly 30 people turned up - to a room which was better suited to 15! And we only had one hour for the Action Lab, so it was quick-fire stuff. Originally we had thought about using a Miro board for collecting inputs, but there was not enough time for that. Instead we went back to basics and collected the replies on post-it notes displayed on posters on the wall. We captured the posters with mobile phone photos.

The session was lively and a lot of fun (at least I thought so, and others I talked to afterwards agreed). Of course, a usual problem with a set-up like this is how to record and analyse the discussion afterwards. No problem - since this is about AI, I gave the photos to Claude 3.5 Sonnet with the following prompt:

Those stickers are responses in the question about the impact of AI in education from a workshop about AI in education. Can you extract the text from the stickers. Can you analyse and conclude ?
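For anyone who would rather script this step than upload photos by hand, a minimal sketch of the same idea using the Anthropic Python SDK might look like the code below. The file name, model identifier and call structure are my own illustrative assumptions about how one could do it, not a record of exactly what was done in the session.

```python
# Hedged sketch: send a photo of the sticky-note posters to Claude 3.5 Sonnet
# and ask it to extract and analyse the text. Assumes ANTHROPIC_API_KEY is set
# and the poster photo is saved as "stickers.jpg" (hypothetical filename).
import base64
import anthropic

client = anthropic.Anthropic()

# Encode the poster photo so it can be sent inline alongside the prompt
with open("stickers.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model ID for Claude 3.5 Sonnet
    max_tokens=1500,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data,
                    },
                },
                {
                    "type": "text",
                    "text": (
                        "Those stickers are responses in the question about the "
                        "impact of AI in education from a workshop about AI in "
                        "education. Can you extract the text from the stickers. "
                        "Can you analyse and conclude?"
                    ),
                },
            ],
        }
    ],
)

# Print the model's transcription and analysis
print(message.content[0].text)
```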

Here's what it came back with.

Skills and competences required for using Generative AI for teaching and learning:

  1. What do you want to achieve? WEIRD prompts
  2. GenAI can help build skills
  3. Critical analysis of outputs + ethics
  4. Domain understanding
  5. Basic computer skills
  6. TPAK (Technological Pedagogical Content Knowledge)
  7. Critical thinking and ethical skills
  8. Design documents and use appropriate AI skills, take notes to AI tools
  9. Critical reflection of output from AI - evaluative judgment
  10. Agility, willingness to explore and change
  11. Prompt design
  12. Level up pedagogical skills
  13. Play with AI, Explore, Experiment, Engage
  14. Learn how to prompt correctly

The impact of Generative AI on education:

  1. Consider how learning is assessed and how we define success
  2. Collaborative learning initiatives
  3. Leveling up of some skills or knowledge / language
  4. Reduction in the value/need for qualifications for jobs
  5. Possible increase of behaviorism-based learning
  6. Effective tool SaaS may no longer exist
  7. Opportunities to extend through AI
  8. Personalization of learning
  9. Amplifier for re-thinking Education - the way we teach
  10. Dropping old skills, developing new skills
  11. Disruption in trust
  12. Impact on our cognitive development
  13. Role in historical narratives + trust + misinformation
  14. Impact on staff workload/training on job
  15. Risks to assessment - cheating
  16. Danger of partial understanding AI as a crutch
  17. NO IMPACT!

Guidelines for future projects on using AI for learning in education:

  1. Research informed practice
  2. Co-design with students/teachers + partners
  3. Authentic Assessment
  4. Ethical Proofing
  5. AI taxonomies
  6. digital poverty
  7. Focus on innovation equity
  8. Preserve the values of education and its expertise
  9. Custom small LLMs for customized content exploration
  10. Include nudges to help people be better/happier/healthier
  11. AI as Co-Teacher + Blended Learning + Flippers
  12. AI Tutor for AI National Network of Hub AI Play fps, Research Financial, Pedagogy for all
  13. Accountability Buddy
  14. Project & support building custom kits / APIs for use cases

Analysis and Conclusion:

The sticky notes reveal a comprehensive exploration of AI's role in education, covering guidelines for implementation, necessary skills, and potential impacts. Here's an analysis of the key themes:

  1. Ethical Considerations: There's a strong emphasis on ethical implementation, including "Ethical Proofing" and preserving educational values. This suggests a cautious approach to AI adoption.
  2. Skill Development: Both educators and students need to develop new skills, such as prompt design, critical analysis of AI outputs, and understanding AI's capabilities and limitations.
  3. Personalization and Collaboration: AI is seen as a tool for personalizing learning experiences and fostering collaborative initiatives.
  4. Pedagogical Shifts: The notes indicate that AI may lead to changes in teaching methods, assessment practices, and the overall educational paradigm.
  5. Equity and Accessibility: There's concern about digital poverty and ensuring equitable access to AI-enhanced education.
  6. Potential Risks: Some notes highlight risks such as over-reliance on AI, impacts on trust, and potential for cheating.
  7. Workforce Implications: AI's impact extends beyond the classroom, potentially affecting job qualifications and workforce skills.

Conclusion: The overall sentiment towards AI in education is cautiously optimistic. While there's excitement about AI's potential to enhance learning experiences and personalize education, there's also a clear recognition of the challenges and risks involved. The focus is on responsible implementation, emphasizing the need for ethical guidelines, skill development, and maintaining human-centric values in education. The diverse perspectives represented in these notes suggest that the integration of AI in education will require careful planning, continuous assessment, and adaptability to ensure it benefits all stakeholders while mitigating potential drawbacks.

Are GenAI codes of ethics dangerous?

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Safety Precautions / CC-BY 4.0

Last week I was at the EDEN Digital Learning Europe Conference in Graz. Sadly, difficulties with transport limited my time and I missed the second day of the conference. But I am copying this report, posted on LinkedIn by Felix Kwihangana from the University of Manchester, on the EDEN Oxford Debate, where he supported the motion "Codes of Ethics for Generative Artificial Intelligence in Education are Useless / Toothless / A Waste of Time", chaired by Paul Prinsloo.

The debate provided an opportunity to explore and interrogate the complex issues around ethical guidelines for Generative AI in education, their merits and demerits and everything in between. I was privileged to work with Elisabeth Anna Guenther and Dr Martina Plantak in supporting the motion against an impressive team of debaters (Eamon Costello, Victoria Marín and Palitha Edirisingha).

In supporting the motion, we argued that GenAI ethical guidelines in HE are often reactive, exclusive of non-western ways of knowing, based on a limited understanding of Generative AI, becoming obsolete before they are enacted due to the speed at which Generative AI is developing, and used as virtue-signalling tools by institutions motivated by maintaining control rather than encouraging exploration and discovery. Using some historical cases (the prosecution of Alan Turing, Prohibition), we argued that the ever-changing values of society and the fast pace of Generative AI development could make Generative AI codes of ethics not only useless but also dangerous, when viewed through the historical lens of damage done in the name of "ethics", "values" and "norms" that societies end up dropping anyway. Needless to say, the opposing team had equally strong counterarguments, which made the debate worth its name!

Student perceptions of generative AI

Photo by Annie Spratt on Unsplash

As promised, this is the next in a short series of posts looking at students' perceptions and use of generative AI. Last year the UK's Jisc published a report, 'Student Perceptions of Generative AI', while recognising the need to continue the discussion with students/learners as the technology continues to evolve.
Over this past winter, they ran a series of nine in-person student discussion forums with over 200 students across colleges and universities to revisit student/learner perceptions of generative AI. Their goal, they say, was to "understand if and how views on generative AI have shifted, identify emerging usage and concerns, and explore the developing role students/learners want these tools to play in their educational experience". An updated version of the report was published in May of this year. In the introduction the report outlines the key changes since spring 2023.

The adoption of generative AI in education by students/learners is undergoing a remarkable transformation, mirroring the rapid evolution of the technology itself. Over the span of just nine months since our previous report, we have seen a distinct change in how students are utilising generative AI, and a maturing expectation of their institutions to support them in their journey into employment in an AI-enabled world.

Transition to Collaborative Learning: Students/Learners increasingly view generative AI as a collaborative tool to coach and support active learning and critical thinking, using these tools as a digital assistant rather than seeing them purely as answer providers.

Emphasis on Future Skills: Students/Learners emphasised the importance of generative AI-ready skills relevant to their future industries. There's a growing demand for education to integrate generative AI across the curriculum and reflect the AI-enabled world we all now inhabit.

Ethics, Equity, and Accessibility Concerns: Students/Learners are increasingly aware of and concerned about equity, bias, and accessibility issues related to AI, advocating for measures that address these challenges to ensure a safe, inclusive, and responsive educational experience.

Comprehensive Integration and Educator Competence: There’s a clear expectation by students/learners for comprehensive generative AI integration across education, with competent usage by educators and policies that ensure a fair and effective AI-enhanced learning environment.

The report is relatively short, well produced and easy to read. It concludes with the need for institutions to respond to evolving student/learner needs and concerns.

Students/Learners have clearly articulated the need for comprehensive support from their institutions, including access to generative AI tools that cater to a wide range of needs, the development of critical information literacy skills, and guidance on ethical use to ensure academic integrity and intellectual development.

The importance of preparing students/learners for the evolving generative AI influenced job market is also becoming increasingly clear. Incorporating relevant generative AI skills and knowledge into curricula is essential for keeping up with technological advancements and preparing them for future challenges.


Is Generative AI just hype?

Amritha R Warrier & AI4Media / Better Images of AI / tic tac toe / CC-BY 4.0

A new study on the use of generative AI has been published by the Reuters Institute and Oxford University. The study, "What does the public in six countries think of generative AI in news?", looks at if and how people use generative artificial intelligence (AI), and what they think about its application in journalism and other areas of work and life across six countries.

Researchers surveyed 12,000 people in six countries. The data were collected by YouGov using an online questionnaire fielded between 28 March and 30 April 2024 in Argentina, Denmark, France, Japan, the UK, and the USA.

The survey found that ChatGPT is by far the most widely used generative AI tool in the six countries surveyed. Use of ChatGPT is roughly two or three times more widespread than that of the next products, Google Gemini and Microsoft Copilot. But even when it comes to ChatGPT, frequent use is rare, with just 1% using it on a daily basis in Japan, rising to 2% in France and the UK, and 7% in the USA. Many of those who say they have used generative AI have only used it once or twice, and it is yet to become part of people's routine internet use. However, the researchers found young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the technology.

The research indicates that, for all the money and attention lavished on generative AI, it is yet to become part of people’s routine internet use.

"Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," the report's lead author told the BBC.

Dr Fletcher said people’s hopes and fears for generative AI vary a lot depending on the sector.

People are generally optimistic about the use of generative AI in science and healthcare, but more wary about it being used in news and journalism, and worried about the effect it might have on job security.