Are GenAI codes of ethics dangerous?

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Safety Precautions / CC-BY 4.0

Last week I was at the EDEN Digital Learning Europe Conference in Graz. Sadly, transport difficulties limited my time and I missed the second day of the conference. But I am copying this report, posted on LinkedIn by Felix Kwihangana from the University of Manchester, on the EDEN Oxford Debate, where he supported the motion “Codes of Ethics for Generative Artificial Intelligence in Education are Useless/Toothless/A Waste of Time”, chaired by Paul Prinsloo.

The debate provided an opportunity to explore and interrogate the complex issues around ethical guidelines for generative AI in education, their merits and demerits and everything in-between. I was privileged to work with Elisabeth Anna Guenther and Dr Martina Plantak in supporting the motion against an impressive team of debaters (Eamon Costello, Victoria Marín and Palitha Edirisingha).

In supporting the motion, we argued that GenAI ethical guidelines in HE are often reactive, exclusive of non-western ways of knowing, based on a limited understanding of generative AI, liable to become obsolete before they are enacted due to the speed at which generative AI is developing, and used as virtue-signalling tools by institutions motivated by maintaining control rather than encouraging exploration and discovery. Using some historical cases (the prosecution of Alan Turing, Prohibition), we argued that the ever-changing values of society and the fast pace of generative AI development could make generative AI codes of ethics not only useless but also dangerous, when looked at through the historical lens of damage done in the name of "ethics", "values" and "norms" that societies end up dropping anyway. Needless to say, the opposing team had equally strong counterarguments, which made the debate worth its name!

Student perceptions of generative AI

Photo by Annie Spratt on Unsplash

As promised, this is the next in a short series of posts looking at students' perceptions and use of generative AI. Last year the UK Jisc published a report, 'Student Perceptions of Generative AI', while recognising the need to continue the discussion with students/learners as the technology continues to evolve.
Over this past winter, they ran a series of nine in-person student discussion forums with over 200 students across colleges and universities to revisit student/learner perceptions of generative AI. Their goal, they say, was to "understand if and how views on generative AI have shifted, identify emerging usage and concerns, and explore the developing role students/learners want these tools to play in their educational experience". An updated version of the report was published in May of this year. In the introduction the report outlines the key changes since spring 2023.

The adoption of generative AI in education by students/learners is undergoing a remarkable transformation, mirroring the rapid evolution of the technology itself. Over the span of just nine months, since our previous report we have seen a distinct change in how students are utilising generative AI, and a maturing expectation of their institutions to support them in their journey into employment in an AI enabled world.

Transition to Collaborative Learning: Students/Learners increasingly view generative AI as a collaborative tool to coach and support active learning and critical thinking, using these tools as a digital assistant rather than seeing them purely as answer providers.

Emphasis on Future Skills: Students/Learners emphasised the importance of generative AI-ready skills relevant to their future industries. There’s a growing demand for an education to integrate generative AI across the curriculum and reflect the AI enabled world we all now inhabit.

Ethics, Equity, and Accessibility Concerns: Students/Learners are increasingly aware of and concerned about equity, bias, and accessibility issues related to AI, advocating for measures that address these challenges to ensure a safe, inclusive, and responsive educational experience.

Comprehensive Integration and Educator Competence: There’s a clear expectation by students/learners for comprehensive generative AI integration across education, with competent usage by educators and policies that ensure a fair and effective AI-enhanced learning environment.

The report is relatively short, well produced and easy to read. It concludes with the need for institutions to respond to evolving student/learner needs and concerns.

Students/Learners have clearly articulated the need for comprehensive support from their institutions, including access to generative AI tools that cater to a wide range of needs, the development of critical information literacy skills, and guidance on ethical use to ensure academic integrity and intellectual development.

The importance of preparing students/learners for the evolving generative AI influenced job market is also becoming increasingly clear. Incorporating relevant generative AI skills and knowledge into curricula is essential for keeping up with technological advancements and preparing them for future challenges.


TeacherMatic

The AI pioneers project, which is researching and developing approaches to the use of AI in vocational and adult education in Europe, is presently working on a toolkit including analysis of a considerable number of AI tools for education. Indeed, one problem is that so many new tools and applications are being released that it is hard for organisations to know which they should be trying out.

In the UK, Jisc has been piloting and evaluating a number of different applications and tools in vocational colleges. Their latest report is about TeacherMatic, which appears to have been adopted in many UK Further Education Colleges. TeacherMatic is a generative AI-powered platform tailored for educators. It provides an extensive toolkit featuring more than 50 tools designed to simplify the creation of educational content. These tools help in generating various teaching aids, such as lesson plans, quizzes, schemes of work and multiple-choice questions, without users needing expertise in prompt engineering. Instead, educators can issue straightforward instructions to produce or adapt existing resources, including presentations, Word documents, and PDFs. The main goal of TeacherMatic, the developers say, is to enhance teaching efficiency and lighten educators' workloads, allowing teachers to dedicate more time to student interaction and less to repetitive tasks.

For the pilot, each participating institution received 50 licenses for 12 months, enabling around 400 participants to actively engage with and evaluate the TeacherMatic platform.

The summary of the evaluation of the pilot is as follows.

The pilot indicates that TeacherMatic can save users time and create good quality resources. Participants commended the platform for its ease of use, efficient content generation, and benefits to workload. Feedback also highlighted areas for improvement and new feature suggestions which the TeacherMatic team were very quick to take on board and where possible implement.

Participants found TeacherMatic to be user-friendly, particularly praising its easy-to-use interface and simple content generation process. The platform was noted for its instructional icons, videos, and features such as Bloom’s taxonomy, which assists in creating educational content efficiently. However, suggestions for enhancements include the ability to integrate multiple generators into a single generator. It also remains essential for users to evaluate the generated content, ensuring it is suitable and accessible to the intended audience.

TeacherMatic was well received across institutions for its capabilities, proving especially beneficial for new teaching staff and those adapting to changing course specifications. Feedback showed that TeacherMatic is particularly valuable for those previously unfamiliar with generative AI. Pricing was generally seen as reasonable, aligning with most participants' expectations.

TeacherMatic has been well-received, with a majority of participants recognising its benefits and expressing a willingness to continue using and recommending the tool.

Generative AI, Assessment and the Future of Jobs and Careers

Ten days ago, I was invited to make an online presentation as part of a series on AI for teachers and researchers in Kazakhstan. I talked with the organisers and they asked me if I could speak about AI and assessment and AI and careers. Linking the two subjects seemed hard to me, but I prepared a presentation bringing them together and somehow it made sense. The presentation used a version of Zoom I had not seen before, which enables interpretation, and my slides were translated into Russian. This was a little stressful, as I was changing the slides in Russian online and in English on a laptop at the same time. It was even more stressful when my TP-Link connection to the internet went down after two minutes and I had to change rooms to get better connectivity!

Anyway, it seemed to go well and there were good questions from the audience of about 150. Given that the recording was in Russian, I made a new English version. We are still experimenting with the best way to record an audio track over slide decks and provide a Spanish translation, so sorry that some of these slides are not perfect. But I hope you get the message.

Delving into a chat

Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Humans Do The Heavy Data Lifting / CC-BY 4.0

GPT-4 is quite useful for some things. I have been developing four Open Educational Resources around labour market information, designed for careers professionals in different European countries. I was asked to include issues for reflection and a short multiple-choice quiz in each of the OERs. I fed GPT-4 the content of each OER and asked for six issues for reflection and six quiz questions. Fast as a flash they were done, and they are (in my view) very good. If I had had to do it without the AI, it would have taken me at least half a day.

For other things GPT-4 is less useful. And I have to say that its English, although grammatically good, is both stilted and plain. It also has a tendency to use somewhat odd English words, which I had always ascribed to it writing American English. But it seems not. In a Guardian newspaper newsletter, Alex Hern reports on work by AI influencer Jeremy Nguyen, at the Swinburne University of Technology in Melbourne, who has highlighted ChatGPT's tendency to use the word "delve" in responses.

I have to say that I don't think I have ever used "delve" in anything I have written. And talking to my Spanish English-speaking friends, none of them even knew what the word means. Anyway, Jeremy Nguyen says no individual use of the word can be definitive proof of AI involvement, but at scale it's a different story. When half a percent of all articles on the research site PubMed contain the word "delve" – 10 to 100 times more than did a few years ago – it's hard to conclude anything other than that an awful lot of medical researchers are using the technology to, at best, augment their writing.

And according to a dataset of 50,000 ChatGPT responses, it's not the only one. It seems the ten most overused words are: Explore, Captivate, Tapestry, Leverage, Embrace, Resonate, Dynamic, Testament, Delve, and Elevate.
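This kind of overuse analysis is easy to sketch. The toy Python snippet below uses a few invented sample responses (not Nguyen's actual 50,000-response dataset) to show the basic idea: count hits for a watchlist of words and measure what fraction of texts contain a given one.

```python
from collections import Counter
import re

# Invented sample responses for illustration only
responses = [
    "Let's delve into the rich tapestry of this dynamic topic.",
    "We will explore the idea and delve deeper to elevate the discussion.",
    "A short overview that avoids the usual suspects.",
]

# The ten words reported as most overused in ChatGPT output
watchlist = {"explore", "captivate", "tapestry", "leverage", "embrace",
             "resonate", "dynamic", "testament", "delve", "elevate"}

def overuse_rate(texts, word):
    """Fraction of texts containing the word (case-insensitive, prefix match)."""
    pattern = re.compile(rf"\b{re.escape(word)}\w*", re.IGNORECASE)
    return sum(1 for t in texts if pattern.search(t)) / len(texts)

# Tally watchlist words across all responses
counts = Counter()
for text in responses:
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in watchlist:
            counts[token] += 1

print(overuse_rate(responses, "delve"))
print(counts.most_common(3))
```

At scale, comparing such rates against a pre-2023 baseline corpus is what turns a single word into a statistical fingerprint of AI-assisted writing.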

Now back to my hypothesis that it's the fault of our American cousins. According to Alex Hern, an army of human testers are given access to the raw outputs from large language models like ChatGPT, and instructed to try them out: asking questions, giving instructions and providing feedback. This feedback may be as simple as approving or disapproving of the outputs, but can be "more advanced, even amounting to writing a model response for the next step of training to learn from." And, here is the rub: "large AI companies outsource the work to parts of the global south, where anglophonic knowledge workers are cheap to hire."

Now back to the word "Delve."

There’s one part of the internet where “delve” is a much more common word: the African web. In Nigeria, “delve” is much more frequently used in business English than it is in England or the US. So the workers training their systems provided examples of input and output that used the same language, eventually ending up with an AI system that writes slightly like an African.