
Arguments over what data should be allowed for training Large Language Models rumble on. Ironically, it is LinkedIn, which hosts hundreds of discussions on AI, that is the latest villain.

The platform updated its policies to clarify its data collection practices, but the update triggered user backlash and increased scrutiny over potential privacy violations. The lack of transparency about how data is used, combined with the automatic enrollment of users in AI training, has resulted in a significant loss of trust; many users say they felt blindsided by LinkedIn's practices.

In response to user concerns, LinkedIn has committed to updating its user agreement and improving its data practices, though many users remain skeptical that these measures will be effective. The company now lets users opt out of AI training through their account settings, but opting out does not remove data that has already been collected, leaving users uneasy about how that data is handled.

It is worth noting that accounts in Europe are not affected at present; it seems LinkedIn would be breaking European data protection law if it tried to do the same within the European Union.

More generally, the UK's Open Data Institute says "there is very little transparency about the data used in AI systems - a fact that is causing growing concern as these systems are increasingly deployed with real-world consequences. Key transparency information about data sources, copyright, and inclusion of personal information and more is rarely included by systems flagged within the Partnership on AI's AI Incident Database.

While transparency cannot be considered a ‘silver bullet’ for addressing the ethical challenges associated with AI systems, or building trust, it is a prerequisite for informed decision-making and other forms of intervention like regulation."
