Image: Handmade A.I by Alina Constantin / Better Images of AI / CC-BY 4.0

I very much like this blog post, A Compassionate Approach to AI in Education, by Maha Bali of the American University in Cairo. Maha explains where she is coming from, and she addresses ethics not from the standpoint of an abstract ethical framework, of which we have many at the moment, but from the standpoint of ethical practice. What follows is a summary, but please read the whole blog post.

The article discusses the challenges and opportunities that generative artificial intelligence (AI) presents in education, from the viewpoint of a teacher and researcher who has worked closely with educators worldwide through these changes. She emphasises a feminist approach to education, centred on socially just care and compassionate learning design, which critically examines the inequalities and biases exacerbated by AI technologies. The article is structured around four key strategies for educators and learners to adapt and respond to AI's impact:

  1. Critical AI Literacy: Developing an understanding of how AI, and especially machine learning, operates is fundamental. Educators and students must grasp how AI outputs are generated, how to judge their quality, and where biases might be embedded. Training data for AI, often dominated by Western, white, and male perspectives, can reinforce existing inequalities, particularly affecting underrepresented groups. The author provides an example where an AI tool incorrectly associated an Egyptian leader with an unrelated American figure, highlighting the importance of recognising biases and inaccuracies. The global South is often underrepresented in training data, and the AI workforce is predominantly male, which can discourage women from pursuing technical skills.
  2. Appropriate AI Usage: While some AI uses have proven beneficial, such as medical diagnostics and accessibility features for visually impaired people, educators must distinguish when its application could be harmful or unethical. AI's biases and limitations mean it should not be relied upon for personalised learning or critical assessments. The EU has identified high-risk AI applications that require careful regulation, including facial recognition and recruitment systems. In educational settings, AI should not replace human judgment in crucial evaluations, and the emotional aspects of learning should not be overlooked.
  3. Inclusive Policy Development: Students should be actively involved in shaping AI policies and guidelines within classrooms and institutions. The author suggests using metaphors to help learners understand when AI is appropriate, comparing it to baking a cake. For instance, sometimes students need to bake a cake from scratch (doing all the work without AI), while at other times they can use a pre-made mix (using AI as a starting point) or buy a cake (relying fully on AI). By having these discussions, students understand the purpose of assignments and when AI can enhance or detract from learning outcomes.
  4. Preventing Unauthorised AI Use: Understanding why students might be tempted to use AI unethically is critical. Students often misuse AI because of tight deadlines, a lack of interest in or understanding of assignments, a lack of confidence in their abilities, or competitive educational environments. The author advocates empathetic listening, flexible deadlines, and creative assignments that encourage genuine engagement. Moreover, fostering a supportive classroom community can reduce competitiveness and emphasise collaboration over competition.

The article encourages a compassionate, critical approach to AI in education. By understanding the biases embedded in AI, developing critical AI literacy, and involving students in policy-making, educators can help students use AI tools ethically and effectively. This approach aims to empower learners to shape future AI platforms and educational systems that are socially just and inclusive.
