European Union, AI and data strategy

Miapetra Kumpula-Natri is the rapporteur for the industry committee for the European Parliament’s own-initiative report on data strategy and a standing rapporteur on the World Trade Organization e-commerce negotiations in the European Parliament’s international trade committee. Writing in Social…

Learning about surveillance

I found this on the Social Media Collective website. The Social Media Collective is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York.

Yesterday the Wayne County Prosecutor publicly apologized to the first American known to be wrongfully arrested by a facial recognition algorithm: a black man arrested earlier this year by the Detroit Police. The statement cited the unreliability of software, especially as applied to people of color.

With this context in mind, some university and high school instructors teaching about technology may be interested in engaging with the Black Lives Matter protests by teaching about computing, race, and surveillance.

I’m delighted that thanks to the generosity of Tawana Petty and others, ESC can share a module on this topic developed for an online course. You are free to make use of it in your own teaching, or you might just find the materials interesting (or shocking).

The lesson consists of a case study of Detroit’s Project Green Light, a new city-wide police surveillance system that involves automated facial recognition, real-time police monitoring, very-high-resolution imagery, cameras indoors on private property, a paid priority response system, a public/private partnership, and other distinctive features. The system has allegedly been deployed to target peaceful Black Lives Matter protesters.

Here is the lesson:

Race, Policing, and Detroit’s Project Green Light

Ethics in AI and Education

The news that IBM is pulling out of the facial recognition market and is calling for “a national dialogue” on the technology’s use in law enforcement has highlighted the ethical concerns around AI-powered technology. But the issue is not confined to policing: it is also a growing concern in education. This post is based on a section in a forthcoming publication on the use of Artificial Intelligence in Vocational Education and Training, produced by the Taccle AI Erasmus Plus project.

Much concern has been expressed over the dangers and ethics of Artificial Intelligence both in general and specifically in education.

The European Commission (2020) has raised the following general issues (as summarised by Naughton, 2020):

  • human agency and oversight
  • privacy and governance
  • diversity
  • non-discrimination and fairness
  • societal wellbeing
  • accountability
  • transparency
  • trustworthiness

However, John Naughton (2020), a technology journalist from the UK Open University, says “the discourse is invariably three parts generalities, two parts virtue-signalling.” He points to the work of David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society, who in January 2020 published an article in the Harvard Data Science Review on the question “Should we trust algorithms?”, arguing that it is trustworthiness, rather than trust, that we should focus on. He suggests a set of seven questions one should ask about any algorithm:

  1. Is it any good when tried in new parts of the real world?
  2. Would something simpler, and more transparent and robust, be just as good?
  3. Could I explain how it works (in general) to anyone who is interested?
  4. Could I explain to an individual how it reached its conclusion in their particular case?
  5. Does it know when it is on shaky ground, and can it acknowledge uncertainty?
  6. Do people use it appropriately, with the right level of scepticism?
  7. Does it actually help in practice?

Many of the concerns around the use of AI in education have already been aired in research around Learning Analytics. These include issues of bias, transparency and data ownership. They also include problematic questions around whether it is ethical to tell students that they are falling behind (or indeed ahead) in their work, and around the surveillance of students.

The EU working group on AI in Education has identified the following issues:

  • AI can easily scale up and automate bad pedagogical practices
  • AI may generate stereotyped models of student profiles and behaviours, and automated grading
  • Need for big data on student learning (privacy, security and ownership of data are crucial)
  • Skills for AI and implications of AI for systems requirements
  • Need for policy makers to understand the basics of ethical AI.

Furthermore, it has been noted that AI for education is a spillover from other areas and not purpose-built for education. Experts tend to be concentrated in the private sector and may not be sufficiently aware of the requirements of the education sector.

A further and even more troubling concern is the increasing influence and lobbying of large, often multinational, technology companies who are attempting to ‘disrupt’ public education systems. Audrey Watters (2019), who is publishing a book on the history of “teaching machines”, says her concern “is not that ‘artificial intelligence’ will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans — intellectually, emotionally, occupationally — will be reduced to machines.” “Perhaps nothing,” she says, “has become quite as naturalized in education technology circles as stories about the inevitability of technology, about technology as salvation.” She quotes the historian Robert Gordon, who asserts that new technologies are incremental changes rather than the wholesale alterations to society we saw a century ago. Many new digital technologies, Gordon argues, are consumer technologies, and these will not — despite all the stories we hear — necessarily restructure our world.

There has been considerable debate and unease around the AI-based “Smart Classroom Behaviour Management System” in use in schools in China since 2017. The system monitors students’ facial expressions, scanning learners every 30 seconds and determining whether they are happy, confused, angry, surprised, fearful or disgusted, and it provides real-time feedback to teachers about what emotions learners are experiencing. Facial monitoring systems are also being used in the USA. Some commentators have likened these systems to digital surveillance.
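To make the mechanics being described more concrete, here is a minimal Python sketch of a periodic-scan loop of this kind. It is not the actual system’s code: the camera capture and the emotion classifier are stand-in placeholder functions, and the camera ID, scan count and interval are invented for the example.

```python
import random
import time

EMOTIONS = ["happy", "confused", "angry", "surprised", "fearful", "disgusted"]

def capture_frame(camera_id):
    """Placeholder for grabbing a frame from a classroom camera."""
    return f"frame-from-camera-{camera_id}"

def classify_emotion(frame):
    """Stand-in for a trained facial-expression model; here it just picks a label at random."""
    return random.choice(EMOTIONS)

def monitor_classroom(camera_id, scans, interval_seconds):
    """Scan the room at a fixed interval and report an emotion label each time."""
    for _ in range(scans):
        frame = capture_frame(camera_id)
        emotion = classify_emotion(frame)
        print(f"camera {camera_id}: students appear {emotion}")  # real-time feedback to the teacher
        time.sleep(interval_seconds)

# A real deployment would scan every 30 seconds; the demo uses 1 second so it finishes quickly.
monitor_classroom(camera_id=1, scans=3, interval_seconds=1)
```

The point of the sketch is simply that continuous classification and reporting of this kind is straightforward to automate, which is exactly what makes the surveillance concern pressing.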

A publication entitled “Systematic review of research on artificial intelligence applications in higher education – where are the educators?” (Zawacki-Richter, Marín, Bond & Gouverneur, 2019), which reviewed 146 of 2,656 identified publications, concluded that there was a lack of critical reflection on risks and challenges, a weak connection to pedagogical theories, and a need for an exploration of ethical and educational approaches. Martin Weller (2020) says educational technologists are increasingly questioning the impacts of technology on learner and scholarly practice, as well as the long-term implications for education in general. Neil Selwyn (2014) says “the notion of a contemporary educational landscape infused with digital data raises the need for detailed inquiry and critique.”

Martin Weller (2020) is concerned at “the invasive uses of technologies, many of which are co-opted into education, which highlights the importance of developing an understanding of how data is used.”

Audrey Watters (2018) has compiled a list of the nefarious social and political uses or connections of educational technology, whether technology designed specifically for education or co-opted for educational purposes. She draws particular attention to the use of AI to de-professionalise teachers. And Mike Caulfield (2016), while acknowledging the positive impact of the web and related technologies, argues that “to do justice to the possibilities means we must take the downsides of these environments seriously and address them.”

References

Caulfield, M. (2016). Announcing the digital polarization initiative, an open pedagogy project [Blog post]. Hapgood. Retrieved from https://hapgood.us/2016/12/07/announcing-the-digital-polarization-initiative-an-open-pedagogy-joint/

European Commission (2020). White Paper on Artificial Intelligence – A European approach to excellence and trust. Luxembourg: Publications Office of the European Union.

Gordon, R. J. (2016). The Rise and Fall of American Growth – The U.S. Standard of Living Since the Civil War. Princeton University Press.

Naughton, J. (2020). The real test of an AI machine is when it can admit to not knowing something. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/feb/22/test-of-ai-is-when-machine-can-admit-to-not-knowing-something

Spiegelhalter, D. (2020). Should We Trust Algorithms? Harvard Data Science Review. Retrieved from https://hdsr.mitpress.mit.edu/pub/56lnenzj

Watters, A. (2019). Ed-Tech Agitprop. Retrieved from http://hackeducation.com/2019/11/28/ed-tech-agitprop

Weller, M. (2020). 25 Years of Ed Tech. Athabasca University: AU Press.

Pathways to Future Jobs

Even before the COVID-19 crisis and the consequent looming economic recession, labour market researchers and employment experts were concerned about the prospects for the future of work due to automation and Artificial Intelligence.

The jury is still out concerning the overall effect of automation and AI on employment numbers. Some commentators have warned of drastic cuts in jobs, while more optimistic projections speculate that, although individual occupations may suffer, the net effect may even be an increase in employment as new occupations and tasks emerge.

There is, however, general agreement on two things: first, that there will be disruption to many occupations, in some cases leading to a drastic reduction in the numbers employed; and second, that the tasks involved in different occupations will change.

In such a situation it is necessary to provide pathways for people from jobs at risk from automation and AI to new and hopefully secure employment. In the UK, NESTA are running the CareerTech Challenge programme, aimed at using technology to support the English Government’s National Retraining Scheme. In Canada, the Brookfield Institute has produced a research report, ‘Lost and Found: Pathways from Disruption to Employment’, proposing a framework for identifying and realizing opportunities in areas of growing employment, which, they say, “could help guide the design of policies and programs aimed at supporting mid-career transitions.”

The framework is based on using Labour Market Information. But, as the authors point out, “For people experiencing job loss, the exact pathways from shrinking jobs to growing opportunities are not always readily apparent, even with access to labour market information (LMI).”

The methodology is based on the identification of origin occupations and destination occupations. Origin occupations are jobs which are already showing signs of employment decline, regardless of the source of the disruption. Destination occupations are future-oriented jobs into which individuals from an origin occupation can reasonably be expected to transition: they are growing, competitive and relatively resilient to shocks.

Both origin and destination occupations are identified by an analysis of employment data.
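As a rough illustration of the kind of analysis the report describes (the exact methodology is the Brookfield Institute’s, and the occupation names and employment figures below are invented for this example), a simple Python sketch could flag shrinking occupations as candidate origins and growing ones as candidate destinations, using two census-style snapshots of employment counts:

```python
# Toy employment counts for two census years (all figures invented for illustration).
employment = {
    # occupation: (count_2006, count_2016)
    "data entry clerk":   (52_000, 31_000),
    "machine operator":   (40_000, 33_000),
    "software developer": (60_000, 95_000),
    "care worker":        (70_000, 88_000),
}

def growth_rate(earlier, later):
    """Relative change in employment between the two snapshots."""
    return (later - earlier) / earlier

origins, destinations = [], []
for occupation, (count_2006, count_2016) in employment.items():
    rate = growth_rate(count_2006, count_2016)
    if rate <= -0.10:      # shrinking noticeably: candidate origin occupation
        origins.append((occupation, round(rate, 2)))
    elif rate >= 0.10:     # growing: candidate destination occupation
        destinations.append((occupation, round(rate, 2)))

print("Origin candidates:", origins)
print("Destination candidates:", destinations)
```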

They are matched by analysing the underlying skills, abilities, knowledge and work activities they require, based on data from the O*NET program. Essentially, the researchers were looking for a high match, in the region of 80 or 90 per cent. They were also looking for destination occupations that would offer an increase in pay, or at least no decrease.
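The matching step could be sketched along the following lines. This is a hedged illustration rather than the report’s actual model: the skill names, importance scores, pay figures and the cosine-similarity measure are assumptions made for the example, whereas the real analysis draws on the full set of O*NET descriptors.

```python
from math import sqrt

# Invented skill-importance profiles (0-1) and pay; a real analysis would use O*NET data.
occupations = {
    "data entry clerk":   {"skills": {"typing": 0.9, "attention": 0.8, "software": 0.5}, "pay": 30_000},
    "records technician": {"skills": {"typing": 0.8, "attention": 0.9, "software": 0.6}, "pay": 34_000},
    "software developer": {"skills": {"programming": 0.9, "software": 0.8, "problem solving": 0.9}, "pay": 60_000},
}

def similarity(a, b):
    """Cosine similarity between two skill-importance profiles."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def viable_destinations(origin, threshold=0.8):
    """Destinations with a high skills match (>= 80%) and no decrease in pay."""
    src = occupations[origin]
    return [
        (name, round(similarity(src["skills"], dest["skills"]), 2))
        for name, dest in occupations.items()
        if name != origin
        and similarity(src["skills"], dest["skills"]) >= threshold
        and dest["pay"] >= src["pay"]
    ]

print(viable_destinations("data entry clerk"))
```

Cosine similarity is just one plausible choice of measure; the important point is that occupations are compared as profiles of skills rather than by job title, with the pay condition applied on top.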

But even then, some qualitative analysis is needed. For instance, even with a strong skills match, a destination occupation might require certification that involves a lengthy or expensive training programme. Thus, it is not enough to rely on the numbers alone. Yet if such pathways can be identified, it should be possible to provide bespoke training programmes to support people in moving between occupations.

The report emphasises that skills are not the only issue and discusses other factors that affect a worker’s journey, thereby, they say, “grounding the model in practical realities. We demonstrate that exploring job pathways must go beyond skills requirements to reflect the realities of how people make career transitions.”

These could include personal confidence, or the willingness or ability to move for a new job. They also include the willingness of employers to look beyond formal certificates as the basis for taking on new staff.

The report emphasises the importance of local labour market information. That automation and AI are impacting very differently on different cities and regions is also shown in research from both Nesta and the Centre for Cities in the UK. Put simply, in some cities there are many jobs likely to be hard hit by automation and AI; in other cities far fewer. Of course, such analysis is going to be complicated by COVID-19. Cities such as Derby in the UK have a high percentage of jobs in the aerospace industry; these previously seemed relatively secure, but this is no longer the case.

In this respect there is a problem with freely available Labour Market Information. The Brookfield Institute researchers were forced to base their work on the Canadian 2006 and 2016 censuses, which, as they admit, was not ideal. In the UK, data on occupations and employment from the Office for National Statistics is not available at a city level, and it is very difficult to match up qualifications to employment. If similar work is to be undertaken in the UK, there will be a need for more disaggregated local Labour Market Information, some of which may already be being collected by city governments and Local Economic Partnerships.