More on ethics and AI

Image: insspirito (CC0), Pixabay

The discussion over the ethics of AI is hotting up, and Pew has produced yet another report on the issue. This commentary comes from Stephen Downes in his indispensable OLDaily newsletter.

This Pew report is essentially a collection of responses from experts on a set of questions related to ethics and AI (you can find my contribution on page 2). The question asked was, "By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?" The short answer was "no", for a variety of reasons. That doesn't mean good won't be produced by AI; rather, the salient observation is that AI won't be (and probably can't be) optimized for good. Seth Finkelstein (page 4) draws a nice analogy: "Just substitute ‘the internet’ for ‘AI’ here – ‘Was the internet mostly used in ethical or questionable ways in the last decade?’ It was/will be used in many ways, and the net result ends up with both good and bad, according to various social forces."

What do people think about Artificial Intelligence?

 

Image: mcmurryjulie (CC0), Pixabay

Pew Research Center has released a new study on public attitudes about science-related issues. One of the issues they examined was public attitudes towards Artificial Intelligence.

They report: "Public sentiment about developments in artificial intelligence (AI) is mixed; majorities in most of the Asia-Pacific publics surveyed see AI as having a positive effect on society, while views in places such as the Netherlands, the UK, Canada and the U.S. are closely divided on this issue. There are similar divides over the societal impact from workplace automation using robotics."

"Publics surveyed outside of Asia tend to be more divided over the effects of AI for society, especially in the Netherlands, the UK, Canada and the U.S. In the Netherlands, for instance, about half (48%) think AI has been a good thing, while 46% say it has been bad for society. People in France are particularly skeptical: Just 37% say the development of artificial intelligence is a good thing for society."

"Ambivalence in some European countries about the development of AI echoes findings from a November 2019 Eurobarometer survey, which found Europeans overwhelmingly want to be informed when digital services or applications use artificial intelligence. In addition, about four-in-ten Europeans said they were concerned about the potential uses of AI leading to “situations where it is unclear who is responsible,” such as traffic accidents caused by autonomous vehicles. About a third were worried that the use of artificial intelligence could lead to more discrimination or to situations where there is nobody to complain to when problems occur. On the positive side, the Eurobarometer survey found half of Europeans thought AI could be used to improve medical care."

"The Pew Research Center survey finds that publics offer mixed views about the use of robots to automate jobs. Across the 20 publics, a median of 48% say such automation has mostly been a good thing, while 42% say it has been a bad thing."

"Majorities in four Asian publics see automation as good for society – Japan (68%), Taiwan (62%), South Korea (62%) and Singapore (61%) – as do about two-thirds (66%) in Sweden. Brazilians are the least likely to see this as a positive for society (29%), with nearly two-thirds (64%) saying the use of robots to automate human jobs has mostly been a bad thing for society."

An ethical framework for Learning Technology

The Association for Learning Technology (ALT) in the UK has the strategic aim of strengthening recognition and representation for Learning Technology professionals from all sectors. One of the priorities Members identified for this year is to develop an ethical framework for Learning Technology. ALT has already developed a professional accreditation framework, CMALT, and last year extended it to include ethical considerations for professional practice and research.

They are now developing a framework that can be used as a starting point for informing the ethical use of Learning Technology by professionals, institutions and industry and, they say, "have worked to define a set of ethical principles which will form the core of the new framework alongside tools, including for example a checklist or reflective questionnaire, to help individuals, institutions and industry to see how these principles are put into action."

They have now launched a consultation, open until 5 June 2021, and are looking for feedback and input via a questionnaire to help finalise the framework ahead of its launch in September.

AI and Inequality

Image: geralt (CC0), Pixabay

I appreciate this is very short notice, but at 1800 CEST today Joseph Stiglitz is talking with Anton Korinek about AI and Inequality. The event is organised by the Centre for the Governance of AI.

The event's webpage says:

Over the next decades, AI will dramatically change the economic landscape. It may also magnify inequality, both within and across countries. Joseph E. Stiglitz, Nobel Laureate in Economics, will join us for a conversation with Anton Korinek on the economic consequences of increased AI capabilities. They will discuss the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. Korinek and Stiglitz have co-authored several papers on the economic effects of AI.

Joseph Stiglitz is University Professor at Columbia University. He is also the co-chair of the High-Level Expert Group on the Measurement of Economic Performance and Social Progress at the OECD, and the Chief Economist of the Roosevelt Institute. A recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979), he is a former senior vice president and chief economist of the World Bank and a former member and chairman of the US President's Council of Economic Advisers. Known for his pioneering work on asymmetric information, Stiglitz focuses his research on income distribution, risk, corporate governance, public policy, macroeconomics and globalization.

Anton Korinek is an Associate Professor at the University of Virginia, Department of Economics and Darden School of Business as well as a Research Associate at the NBER, a Research Fellow at the CEPR and a Research Affiliate at the Centre for the Governance of AI. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence for macroeconomic dynamics and inequality.

Hopefully a recording will be available after the event, and if so I will post it here.