Data governance, management and infrastructure

Photo by Brooke Cagle on Unsplash

The big ed-tech news this week is the merger of Anthology, an educational management company, with Blackboard, which produces learning technology. But as Stephen Downes said: "It's funny, though - the more these companies grow and the wider their enterprise capabilities become, the less relevant they feel, to me at least, to educational technology and online learning."

And there is a revealing quote in an Inside Higher Ed article about the merger. It quotes Bill Ballhaus, Blackboard's chairman, CEO and president, as saying the power of the combined company will flow from its ability to bring data from across the student life cycle to bear on student and institutional performance. "We're on the cusp of breaking down the data silos that often exist between administrative and academic departments on campuses," Ballhaus said.

So is the new company really about educational technology, or is it in reality a data company? This raises many questions about who owns student data, data privacy and how institutions manage data. A new UK Open Data Institute (ODI) Fellow Report, Data governance for online learning by Janis Wong, explores the data governance considerations when working with online learning data, looking at how educational institutions should rethink how they manage, protect and govern online learning data and personal data.

In a summary of the report, the ODI say:

The Covid-19 pandemic has increased the adoption of technology in education by higher education institutions in the UK. Although students are expected to return to in-person classes, online learning and the digitisation of the academic experience are here to stay. This includes the increased gathering, use and processing of digital data.

They go on to conclude:

Within online and hybrid learning, university management needs to consider how different forms of online learning data should be governed, from research data to teaching data to administration and the data processed by external platforms.

Online and hybrid learning needs to be inclusive and institutions have to address the benefits to, and concerns of, students and staff as the largest groups of stakeholders in delivering secure and safe academic experiences. This includes deciding what education technology platforms should be used to deliver, record and store online learning content, by comparing the merits of improving user experience against potential risks to vast data collection by third parties.

Online learning data governance needs to be considered holistically, with an understanding of how different stakeholders interact with each other’s data to create innovative, digital means of learning. When innovating for better online learning practices, institutions need to balance education innovation with the protection of student and staff personal data through data governance, management and infrastructure strategies.

The full report is available from the ODI website.

Artificial Intelligence and ethics

I have written before that despite the obvious ethical issues posed by Artificial Intelligence in general - and particular issues for education - I am not convinced by the various frameworks setting down rubrics for ethics, often voluntary and often developed by professionals from within the AI industry itself. But I am encouraged by the UK Association for Learning Technology's (ALT) Framework for Ethical Learning Technology, released at their annual conference last week. Importantly, it builds on ALT's professional accreditation framework, CMALT, which has been expanded to include ethical considerations for professional practice and research.

ALT say:

ALT’s Framework for Ethical Learning Technology (FELT) is designed to support individuals, organisations and industry in the ethical use of learning technology across sectors. It forms part of ALT’s strategic aim to strengthen recognition and representation for Learning Technology professionals from all sectors.  The need for such a framework has become increasingly urgent as Learning Technology has been adopted on a larger scale than ever before and as the leading professional body for Learning Technology in the UK, representing 3,500 Members, ALT is well placed to lead this effort. We define Learning Technology as the broad range of communication, information and related technologies that are used to support learning, teaching and assessment. We recognise the wider context of Learning Technology policy, theory and history as fundamental to its ethical, equitable and fair use.

More details and resources are available on the ALT website.


More on ethics and AI


insspirito (CC0), Pixabay

The discussion over the ethics of AI is hotting up. And Pew have produced yet another report around this issue. This commentary comes from Stephen Downes in his indispensable OL Daily newsletter.

This Pew report is essentially a collection of responses from experts on a set of questions related to ethics and AI (you can find my contribution on page 2). The question asked was, "By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?" The short answer was "no", for a variety of reasons. That doesn't mean good won't be produced by AI, but rather, the salient observation that AI won't be (and probably can't be) optimized for good. Seth Finkelstein (page 4) draws a nice analogy: "Just substitute ‘the internet’ for ‘AI’ here – ‘Was the internet mostly used in ethical or questionable ways in the last decade?’ It was/will be used in many ways, and the net result ends up with both good and bad, according to various social forces."

What do people think about Artificial Intelligence?


mcmurryjulie (CC0), Pixabay

Pew Research Center has released a new study on public attitudes about science-related issues. One of the issues it examined was public attitudes towards Artificial Intelligence.

They report: "Public sentiment about developments in artificial intelligence (AI) is mixed; majorities in most of the Asia-Pacific publics surveyed see AI as having a positive effect on society, while views in places such as the Netherlands, the UK, Canada and the U.S. are closely divided on this issue. There are similar divides over the societal impact from workplace automation using robotics."

"Publics surveyed outside of Asia tend to be more divided over the effects of AI for society, especially in the Netherlands, the UK, Canada and the U.S. In the Netherlands, for instance, about half (48%) think AI has been a good thing, while 46% say it has been bad for society. People in France are particularly skeptical: Just 37% say the development of artificial intelligence is a good thing for society."

"Ambivalence in some European countries about the development of AI echoes findings from a November 2019 Eurobarometer survey, which found Europeans overwhelmingly want to be informed when digital services or applications use artificial intelligence. In addition, about four-in-ten Europeans said they were concerned about the potential uses of AI leading to “situations where it is unclear who is responsible,” such as traffic accidents caused by autonomous vehicles. About a third were worried that the use of artificial intelligence could lead to more discrimination or to situations where there is nobody to complain to when problems occur. On the positive side, the Eurobarometer survey found half of Europeans thought AI could be used to improve medical care."

"The Pew Research Center survey finds that publics offer mixed views about the use of robots to automate jobs. Across the 20 publics, a median of 48% say such automation has mostly been a good thing, while 42% say it has been a bad thing."

"Majorities in four Asian publics see automation as good for society – Japan (68%), Taiwan (62%), South Korea (62%) and Singapore (61%) – as do about two-thirds (66%) in Sweden. Brazilians are the least likely to see this as a positive for society (29%), with nearly two-thirds (64%) saying the use of robots to automate human jobs has mostly been a bad thing for society."