Accountability and algorithmic systems

Image: geralt (CC0), Pixabay

There seems to be a growing awareness of the use of algorithms and the problems they can cause – at least in the UK, where what Boris Johnson called “a rogue algorithm” caused chaos with students’ exam results. It is becoming very apparent that there needs to be far more transparency about what algorithms are being designed to do.

Writing in Social Europe, Christina Colclough says “Algorithmic systems are a new front line for unions as well as a challenge to workers’ rights to autonomy.” She draws attention to the increasing surveillance and monitoring of workers, whether at home or in the workplace, and argues that strong trade union responses are immediately required to balance out the power asymmetry between bosses and workers and to safeguard workers’ privacy and human rights. Improvements to collective agreements as well as to regulatory environments are, she adds, urgently needed.

Perhaps her most important argument is about the use of algorithms:

Shop stewards must be party to the ex-ante and, importantly, the ex-post evaluations of an algorithmic system. Is it fulfilling its purpose? Is it biased? If so, how can the parties mitigate this bias? What are the negotiated trade-offs? Is the system in compliance with laws and regulations? Both the predicted and realised outcomes must be logged for future reference. This model will serve to hold management accountable for the use of algorithmic systems and the steps they will take to reduce or, better, eradicate bias and discrimination.
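Her point about logging both the predicted and the realised outcomes can be made concrete with a small sketch. The snippet below is purely illustrative – the log format, group labels and disparity check are my own assumptions, not anything from the Social Europe piece – but it shows one way an ex-post review might compare an algorithm’s decisions across groups of workers:

```python
from collections import defaultdict

# Hypothetical audit log: each entry records the group a worker belongs to,
# what the algorithm predicted and what actually happened.
# Field names and values are illustrative only.
audit_log = [
    {"group": "A", "predicted": 1, "realised": 1},
    {"group": "A", "predicted": 1, "realised": 0},
    {"group": "B", "predicted": 0, "realised": 1},
    {"group": "B", "predicted": 0, "realised": 0},
    {"group": "B", "predicted": 1, "realised": 1},
]

def positive_rate(entries, field):
    """Share of entries with a positive outcome in the given field."""
    return sum(e[field] for e in entries) / len(entries)

# Group the logged decisions so predicted and realised outcomes
# can be compared per group (the ex-post evaluation).
by_group = defaultdict(list)
for entry in audit_log:
    by_group[entry["group"]].append(entry)

for group, entries in by_group.items():
    print(group,
          "predicted positive rate:", round(positive_rate(entries, "predicted"), 2),
          "realised positive rate:", round(positive_rate(entries, "realised"), 2))

# A crude disparity check: ratio of the lowest to the highest predicted
# positive rate across groups. Values well below 1 would flag possible bias
# and prompt the negotiated mitigation the quote describes.
rates = [positive_rate(entries, "predicted") for entries in by_group.values()]
print("disparity ratio:", round(min(rates) / max(rates), 2))
```

In practice such checks would sit alongside, not replace, the negotiated ex-ante and ex-post evaluations she describes.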

Christina Colclough believes the governance of algorithmic systems will require new structures, union capacity-building and management transparency. I can’t disagree with that. But what is also needed is a greater understanding of the use of AI and algorithms – for good and for bad. This means an education campaign – in trade unions but also for the wider public – to ensure that developments are for the good and not just another step in the progress of Surveillance Capitalism.

Algorithmic bias explained

Yesterday, the UK Prime Minister blamed last week’s fiasco with public examinations on a “mutant algorithm”. This video by the Institute for Public Policy Research provides a more rational view of why algorithms can go wrong. Algorithms, they say, risk magnifying human bias and error on an unprecedented scale. Rachel Statham explains how they work and why we have to ensure they don’t perpetuate historic forms of discrimination.

New report on Artificial Intelligence in Vocational Education and Training

The Taccle AI project has launched its 74-page report exploring the use of AI in policy, process and practice in VET. For VET teachers and trainers, there are many possible uses of AI, including new opportunities for adapting learning content to students’ needs, new processes for assessment, analysing possible bottlenecks in learners’ domain understanding and…

Digitalisation, Artificial Intelligence and Vocational Occupations and Skills

The Taccle AI project on Artificial Intelligence and Vocational Education and Training has published a preprint version of a paper which has been submitted for publication to the VET network of the European Research Association. The paper, entitled Digitalisation, Artificial Intelligence and Vocational Occupations and Skills: What are the needs for training Teachers and Trainers, seeks to explore the impact AI and automation have on vocational occupations and skills and to examine what that means for teachers and trainers in VET. It looks at how AI can be used to shape learning and teaching processes, through, for example, digital assistants which support teachers. It also focuses on the transformative power of AI, which promises profound changes in employment and work tasks.

The paper is based on research being undertaken through the EU Erasmus+ Taccle AI project. It presents the results of an extensive literature review and of interviews with VET managers, teachers and AI experts in five countries. It asks whether machines will complement or replace humans in the workplace before going on to look at developments in using AI for teaching and learning in VET. Finally, it proposes extensions to the EU DigCompEdu Framework for training teachers and trainers in using technology. The paper can be downloaded here.