Artificial Intelligence and ethics

I have written before that, despite the obvious ethical issues posed by Artificial Intelligence in general (and the particular issues it raises for education), I am not convinced by the various frameworks setting down rubrics for ethics, which are often voluntary and often developed by professionals from within the AI industry. But I am encouraged by the UK Association for Learning Technology's (ALT) Framework for Ethical Learning Technology, released at their annual conference last week. Importantly, it builds on ALT's professional accreditation framework, CMALT, which has been expanded to include ethical considerations for professional practice and research.

ALT say:

ALT’s Framework for Ethical Learning Technology (FELT) is designed to support individuals, organisations and industry in the ethical use of learning technology across sectors. It forms part of ALT’s strategic aim to strengthen recognition and representation for Learning Technology professionals from all sectors.  The need for such a framework has become increasingly urgent as Learning Technology has been adopted on a larger scale than ever before and as the leading professional body for Learning Technology in the UK, representing 3,500 Members, ALT is well placed to lead this effort. We define Learning Technology as the broad range of communication, information and related technologies that are used to support learning, teaching and assessment. We recognise the wider context of Learning Technology policy, theory and history as fundamental to its ethical, equitable and fair use.

More details and resources are available on the ALT website.

More on ethics and AI

Image: insspirito (CC0), Pixabay

The discussion over the ethics of AI is hotting up, and Pew has produced yet another report on this issue. This commentary comes from Stephen Downes in his indispensable OLDaily newsletter.

This Pew report is essentially a collection of responses from experts on a set of questions related to ethics and AI (you can find my contribution on page 2). The question asked was, "By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?" The short answer was "no", for a variety of reasons. That doesn't mean good won't be produced by AI, but rather, the salient observation that AI won't be (and probably can't be) optimized for good. Seth Finkelstein (page 4) draws a nice analogy: "Just substitute ‘the internet’ for ‘AI’ here – ‘Was the internet mostly used in ethical or questionable ways in the last decade?’ It was/will be used in many ways, and the net result ends up with both good and bad, according to various social forces."

What do people think about Artificial Intelligence?

Image: mcmurryjulie (CC0), Pixabay

Pew Research Center has released a new study on public attitudes towards science-related issues. One of the issues they examined was public attitudes towards Artificial Intelligence.

They report: "Public sentiment about developments in artificial intelligence (AI) is mixed; majorities in most of the Asia-Pacific publics surveyed see AI as having a positive effect on society, while views in places such as the Netherlands, the UK, Canada and the U.S. are closely divided on this issue. There are similar divides over the societal impact from workplace automation using robotics."

"Publics surveyed outside of Asia tend to be more divided over the effects of AI for society, especially in the Netherlands, the UK, Canada and the U.S. In the Netherlands, for instance, about half (48%) think AI has been a good thing, while 46% say it has been bad for society. People in France are particularly skeptical: Just 37% say the development of artificial intelligence is a good thing for society."

"Ambivalence in some European countries about the development of AI echoes findings from a November 2019 Eurobarometer survey, which found Europeans overwhelmingly want to be informed when digital services or applications use artificial intelligence. In addition, about four-in-ten Europeans said they were concerned about the potential uses of AI leading to “situations where it is unclear who is responsible,” such as traffic accidents caused by autonomous vehicles. About a third were worried that the use of artificial intelligence could lead to more discrimination or to situations where there is nobody to complain to when problems occur. On the positive side, the Eurobarometer survey found half of Europeans thought AI could be used to improve medical care."

"The Pew Research Center survey finds that publics offer mixed views about the use of robots to automate jobs. Across the 20 publics, a median of 48% say such automation has mostly been a good thing, while 42% say it has been a bad thing."

"Majorities in four Asian publics see automation as good for society – Japan (68%), Taiwan (62%), South Korea (62%) and Singapore (61%) – as do about two-thirds (66%) in Sweden. Brazilians are the least likely to see this as a positive for society (29%), with nearly two-thirds (64%) saying the use of robots to automate human jobs has mostly been a bad thing for society."