Who owns your data?

Photo by Markus Spiske on Unsplash

Arguments over what data should be allowed for training Large Language Models rumble on. Ironically, it is LinkedIn, which hosts hundreds of discussions about AI, that is the latest villain.

The platform updated its policies to clarify data collection practices, but this led to user backlash and increased scrutiny over privacy violations. The lack of transparency regarding data usage and the automatic enrollment of users in AI training has resulted in a significant loss of trust. Users have expressed feeling blindsided by LinkedIn's practices.

In response to user concerns, LinkedIn has committed to updating its user agreements and improving its data practices, though skepticism remains among users about the effectiveness of these measures. LinkedIn now lets users opt out of AI training features through their account settings, but opting out does not remove previously collected data, leaving users uneasy about how their information is handled.

It is worth noting that accounts from Europe are not affected at present: LinkedIn would likely be breaking European law if it tried to do the same within the European Union.

More generally, the UK Open Data Institute says "there is very little transparency about the data used in AI systems - a fact that is causing growing concern as these systems are increasingly deployed with real-world consequences. Key transparency information about data sources, copyright, and inclusion of personal information and more is rarely included by systems flagged within the Partnership for AI’s AI Incidents Database.

While transparency cannot be considered a ‘silver bullet’ for addressing the ethical challenges associated with AI systems, or building trust, it is a prerequisite for informed decision-making and other forms of intervention like regulation."

AI Governance

Open consultation on regulatory approaches for AI 

Following extensive expert consultations and discussions with parliamentarians, UNESCO has released a consultation paper in English for public consultation on AI governance.

UNESCO encourages stakeholders, including parliamentarians, legal experts, AI governance experts and the public, to review and provide feedback on the different regulatory approaches for AI. You can read the consultation paper here.

The Consultation Paper on AI Regulation is part of a broader effort by UNESCO, the Inter-Parliamentary Union and the Internet Governance Forum’s Parliamentary Track to engage parliamentarians globally and enhance their capacity for evidence-based policy-making on AI.

The Paper has been developed through:

  • A literature review of AI regulation in different parts of the world.
  • A discussion on “The impact of AI on democracy, human rights and the rule of law” with parliamentarians from around the world at the IPU Assembly in Geneva, 23-27 March 2024.
  • A capacity-building workshop co-designed and co-facilitated by UNESCO on 25 March 2024 at the IPU in Geneva, and three webinars on the subject organized by the IPU, UNESCO and the Internet Governance Forum (IGF) for parliamentarians to inform the development of the discussion paper.
  • A discussion with Members of Parliament at the Regional Summit of Parliamentarians on Artificial Intelligence in Latin America, held in Buenos Aires on 13 and 14 June 2024.

The deadline for comments is 19 September 2024.

#AIinEd – Pontydysgu EU 2021-09-24 13:11:11

From the UK Open Data Institute:

This week, the UK government launched its first ‘National AI Strategy’, which aims to position the country as the ‘best place to live and work with AI’. The 10-year plan includes things like investing in access to data, using AI to benefit all sectors and regions (including using it for public benefit and towards goals like net zero), and governing data effectively.

Accountability and algorithmic systems

geralt (CC0), Pixabay

There seems to be a growing awareness of the use of algorithms and the problems they cause, at least in the UK, where what Boris Johnson called “a rogue algorithm” caused chaos in students’ exam results. It is becoming very apparent that there needs to be far more transparency about what algorithms are being designed to do.

Writing in Social Europe, Christina Colclough says “Algorithmic systems are a new front line for unions as well as a challenge to workers’ rights to autonomy.” She draws attention to the increasing surveillance and monitoring of workers at home and in the workplace. She argues that strong trade union responses are immediately required to balance out the power asymmetry between bosses and workers and to safeguard workers’ privacy and human rights, and that improvements to collective agreements, as well as to regulatory environments, are urgently needed.

Perhaps her most important argument is about the use of algorithms:

Shop stewards must be party to the ex-ante and, importantly, the ex-post evaluations of an algorithmic system. Is it fulfilling its purpose? Is it biased? If so, how can the parties mitigate this bias? What are the negotiated trade-offs? Is the system in compliance with laws and regulations? Both the predicted and realised outcomes must be logged for future reference. This model will serve to hold management accountable for the use of algorithmic systems and the steps they will take to reduce or, better, eradicate bias and discrimination.

Christina Colclough believes the governance of algorithmic systems will require new structures, union capacity-building and management transparency. I can’t disagree with that. But what is also needed is a greater understanding of the use of AI and algorithms, for good and for bad. This means an education campaign, within trade unions but also for the wider public, to ensure that developments are for the good and not just another step in the advance of Surveillance Capitalism.