Accountability and algorithmic systems

There seems to be a growing awareness of the use of, and problems with, algorithms – at least in the UK, where what Boris Johnson called “a rogue algorithm” caused chaos in students’ exam results. It is becoming very apparent that there needs to be far more transparency about what algorithms are being designed to do.

Writing in Social Europe, Christina Colclough says “Algorithmic systems are a new front line for unions as well as a challenge to workers’ rights to autonomy.” She draws attention to the increasing surveillance and monitoring of workers at home and in the workplace. She says strong trade union responses are immediately required to balance out the power asymmetry between bosses and workers and to safeguard workers’ privacy and human rights. She also says that improvements to collective agreements, as well as to regulatory environments, are urgently needed.

Perhaps her most important argument is about the use of algorithms:

Shop stewards must be party to the ex-ante and, importantly, the ex-post evaluations of an algorithmic system. Is it fulfilling its purpose? Is it biased? If so, how can the parties mitigate this bias? What are the negotiated trade-offs? Is the system in compliance with laws and regulations? Both the predicted and realised outcomes must be logged for future reference. This model will serve to hold management accountable for the use of algorithmic systems and the steps they will take to reduce or, better, eradicate bias and discrimination.
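The logging of predicted and realised outcomes described above can be made concrete with a small sketch. This is a hypothetical illustration only (the names `AuditLog`, `AuditEntry` and the risk labels are invented for the example), showing how recording both sides of a prediction keeps ex-post evaluation possible:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch: a minimal audit record for an algorithmic system.
# Logging both the predicted and the realised outcome is what makes
# ex-post evaluation (bias checks, compliance review) possible later.
@dataclass
class AuditEntry:
    subject_id: str
    predicted: str
    realised: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    def __init__(self):
        self.entries: list = []

    def log_prediction(self, subject_id: str, predicted: str) -> AuditEntry:
        # Ex-ante: record what the system predicted, before the outcome is known.
        entry = AuditEntry(subject_id, predicted)
        self.entries.append(entry)
        return entry

    def record_outcome(self, subject_id: str, realised: str) -> None:
        # Ex-post: attach the realised outcome to the open prediction.
        for entry in self.entries:
            if entry.subject_id == subject_id and entry.realised is None:
                entry.realised = realised
                return
        raise KeyError(f"No open prediction for {subject_id}")

    def mismatches(self) -> list:
        # Entries where prediction diverged from reality: the raw material
        # for asking "is the system fulfilling its purpose? is it biased?"
        return [e for e in self.entries
                if e.realised is not None and e.realised != e.predicted]
```

A shop steward reviewing `mismatches()` periodically – and checking whether the mismatches cluster around particular groups of workers – is one simple way the ex-post evaluation could be operationalised.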

Christina Colclough believes the governance of algorithmic systems will require new structures, union capacity-building and management transparency. I can’t disagree with that. But what is also needed is a greater understanding of the use of AI and algorithms – for good and for bad. This means an education campaign – in trade unions but also for the wider public – to ensure that developments are for the good and not just another step in the progress of Surveillance Capitalism.

Algorithmic bias explained

Yesterday, the UK Prime Minister blamed last week’s fiasco with public examinations on a “mutant algorithm”. This video by the Institute for Public Policy Research provides a more rational view of why algorithms can go wrong. Algorithms, they say, risk magnifying human bias and error on an unprecedented scale. Rachel Statham explains how they work and why we have to ensure they don’t perpetuate historic forms of discrimination.
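How an algorithm can perpetuate historic discrimination is easy to show with a deliberately simplified sketch. This is not the actual 2020 grading model – the function and weights here are invented for illustration – but it captures the core problem: moderating individual results towards a school’s historical average bakes past disadvantage into present outcomes.

```python
# Deliberately simplified illustration (NOT the actual 2020 grading model):
# an algorithm that pulls each student's teacher-assessed grade towards
# their school's historical average reproduces historic disadvantage.

def moderated_grade(teacher_grade: float, school_historic_avg: float,
                    weight: float = 0.5) -> float:
    """Blend the individual grade with the school's past results."""
    return weight * teacher_grade + (1 - weight) * school_historic_avg

# Two equally able students, assessed identically by their teachers:
high_performing_school = moderated_grade(8.0, school_historic_avg=7.5)
historically_low_school = moderated_grade(8.0, school_historic_avg=4.0)

print(high_performing_school)   # 7.75
print(historically_low_school)  # 6.0
```

The student at the historically lower-performing school is marked down for results achieved by earlier cohorts, not for anything about their own work – bias in the input data becomes bias in the output, at scale.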

Digitalisation, Artificial Intelligence and Vocational Occupations and Skills

The Taccle AI project on Artificial Intelligence and Vocational Education and Training has published a preprint version of a paper which has been submitted for publication to the VET network of the European Educational Research Association. The paper, entitled Digitalisation, Artificial Intelligence and Vocational Occupations and Skills: What are the needs for training Teachers and Trainers, seeks to explore the impact AI and automation have on vocational occupations and skills and to examine what that means for teachers and trainers in VET. It looks at how AI can be used to shape learning and teaching processes, through, for example, digital assistants which support teachers. It also focuses on the transformative power of AI that promises profound changes in employment and work tasks. The paper is based on research being undertaken through the EU Erasmus+ Taccle AI project. It presents the results of an extensive literature review and of interviews with VET managers, teachers and AI experts in five countries. It asks whether machines will complement or replace humans in the workplace before going on to look at developments in using AI for teaching and learning in VET. Finally, it proposes extensions to the EU DigiCompEdu Framework for training teachers and trainers in using technology. The paper can be downloaded here.

#AIinEd – Pontydysgu – Bridge to Learning 2020-07-22 17:43:29

As part of the Taccle AI project, on the impact of AI on vocational education and training in Europe, we have undertaken interviews with managers, teachers, trainers and developers in five European countries (the report of the interviews, and of an accompanying literature review, will be published next week). One of the interviews I conducted was with Aftab Hussein, the ILT manager at Bolton College in the north west of England. Aftab describes himself on Twitter (@Aftab_Hussein) as “exploring the use of campus digital assistants and the computer assisted assessment of open-ended questions.”

Ada, Bolton College’s campus digital assistant, has been supporting student enquiries about college services and their studies since April 2017. In September 2020, the college is launching a new crowdsourcing project which seeks to teach Ada about subject topics. They are seeking the support of teachers to teach Ada about their subjects.

According to Aftab, “Teachers will be able to set up questions that students typically ask about subject topics and they will have the opportunity to compose answers against each of these questions. No coding experience is required to set up questions and answers. Students of all ages will have access to a website where they will be able to select a subject chatbot and ask it questions. Ada will respond with answers that incorporate the use of text, images, links to resources and embedded videos.

The service will be free to use by teachers and students.”
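Ada’s internals are not public, so the following is only a rough illustration of how this kind of no-code question-and-answer chatbot can work under the hood: teachers supply question/answer pairs, and a student’s query is matched to the stored question with the greatest word overlap. All names here (`SubjectChatbot`, `teach`, `ask`) are invented for the sketch.

```python
# Hypothetical sketch of a subject FAQ chatbot. Teachers supply
# question/answer pairs; the bot answers a student's query by picking
# the stored question with the greatest word overlap. Real systems
# like Ada use far more sophisticated natural-language matching.

def tokenize(text: str) -> set:
    # Lowercase, strip question marks, split into a set of words.
    return set(text.lower().replace("?", "").split())

class SubjectChatbot:
    def __init__(self):
        self.qa_pairs: list = []

    def teach(self, question: str, answer: str) -> None:
        # "No coding required": a teacher just adds a question and answer.
        self.qa_pairs.append((question, answer))

    def ask(self, query: str) -> str:
        query_words = tokenize(query)
        best_answer, best_score = "Sorry, I don't know that yet.", 0
        for question, answer in self.qa_pairs:
            score = len(query_words & tokenize(question))
            if score > best_score:
                best_answer, best_score = answer, score
        return best_answer

bot = SubjectChatbot()
bot.teach("What is photosynthesis?",
          "Photosynthesis is how plants make food from light, water and CO2.")
print(bot.ask("Can you explain photosynthesis?"))  # prints the taught answer
```

Even this toy version shows why crowdsourcing matters: the bot is only as good as the question/answer pairs teachers feed it, which is exactly what the Bolton College project is asking teachers to contribute.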

If you are interested in supporting the project, complete the online Google form.

AI and Young People

Last December, the Youth Department of the Council of Europe organised a seminar on Artificial Intelligence and its Impact on Young People. The aim of the seminar was to explore the issues, role and possible contributions of the youth sector in an effort to ensure that AI is responsibly used in democratic societies and that young people have a say about matters that concern their present and future. The seminar looked, among other things, into three dimensions of AI:

  • AI and democratic youth participation (including young people’s trust/interest in democracy)
  • AI and young people’s access to rights (including social rights)
  • AI and youth policy and youth work

According to the report of the seminar, the programme enabled the participants to put together their experience and knowledge in proposing answers to the following questions:

  • What are the impacts of AI on young people and how can young people benefit from it?
  • How can the youth sector make use of the capacities of AI to enhance the potential of youth work and youth policy provisions for the benefit of young people?
  • How to inform and “educate” young people about the potential benefits and risks of AI, notably in relation to young people’s human rights and democratic participation and the need to involve all young people in the process?
  • How does AI influence young people’s access to rights?
  • What should the youth sector of the Council of Europe, through the use of its various instruments and partners, do about AI in the future?

Not only is there a written report of the seminar but also an excellent illustrated report. Sadly it is not in a format that can be embedded, but it is well worth going to the Council of Europe’s web page on AI and scrolling to the bottom to see the report.