One of the major concerns about ethics and accountability, as AI adoption grows rapidly across all areas of society including education, is the transparency of algorithms. We know that they can suffer from bias: datasets used to train AI often reflect wider prejudices within society and amplify those prejudices. But perhaps even more concerning is that it is often impossible to understand how an algorithm works: it is a black box.
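To make the amplification point concrete, here is a minimal sketch using an entirely hypothetical hiring dataset: a naive model that memorises each group's majority outcome turns a 70/30 historical skew into a categorical 100/0 rule.

```python
# Minimal sketch (hypothetical data) of how a model trained on a biased
# dataset can amplify that bias rather than merely reproduce it.
from collections import Counter

# Hypothetical historical hiring decisions:
# group A was approved 70% of the time, group B only 30%.
history = ([("A", 1)] * 70 + [("A", 0)] * 30 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(data):
    """'Train' by memorising each group's majority outcome."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} -- the 70/30 skew becomes an absolute rule
```

The model is deliberately trivial, but the same dynamic appears in real systems: the training signal encodes a historical prejudice, and optimisation hardens it into policy. Worse, with a complex model the rule would be buried in millions of parameters rather than readable in a two-entry dictionary.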
The European Commission has recently announced that it is establishing the European Centre for Algorithmic Transparency (ECAT), which it says will contribute to a safer, more predictable and trusted online environment for people and business.
The Commission goes on to say:
“How algorithmic systems shape the visibility and promotion of content, and its societal and ethical impact, is an area of growing concern. Measures adopted under the Digital Services Act (DSA) call for algorithmic accountability and transparency audits.
The ECAT contributes with scientific and technical expertise to the European Commission’s exclusive supervisory and enforcement role of the systemic obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) provided for under the DSA.
Scientists and experts working at the ECAT will cooperate with industry representatives, academia, and civil society organisations to improve our understanding of how algorithms work: they will analyse transparency, assess risks, and propose new transparent approaches and best practices.”