AI for marking and feedback

The UK National Centre for AI, hosted by Jisc, has announced the third in a series of pilot activities for AI in education. The pilot project, undertaken in partnership with Graide, an EdTech company that has built an AI-based feedback and assessment tool, is designed to help understand how universities could benefit from using AI to support the marking and feedback process.

Sue Attewell says:

AI-based marking and feedback tools promise the joint benefits of reducing educators’ workloads, whilst improving the quality, quantity, timeliness and/or consistency of feedback received by students.

After a positive initial assessment of Graide, we are launching this pilot to find out how Jisc’s members could benefit from this solution.

Universities in the UK have been invited to take part in the pilot, in which, following an initial webinar and interviews, a small number of participants will use Graide in practice, with an evaluation of their experience. Stage two of the pilot will focus on exploring the platform’s functionality; in stage three, the platform will be used ‘live’ with at least one cohort of students.

Despite increasing interest in the potential of AI, especially for providing automated feedback to students, there remain limitations. It is notable that the pilot is focused on STEM, and the UK National Centre for AI says that “The most appropriate types of assignments will be those where there is a definitive correct answer and where feedback would also be expected on the working out.”

Artificial Intelligence and Educational Inclusion

On May 6, Graham Attwell and Angela Karadog from Pontydysgu, together with our colleague George Bekiaridis from ACP in Athens, are taking part in a panel session at the CIISE International Congress on Social and Educational Inclusion at The University of the Basque Country in Bilbao, Spain.

The panel is being organised through the AI@School project, funded by the EU Erasmus+ programme, under the theme of Artificial Intelligence and Educational Inclusion. UNESCO are promoting the use of AI in education, seeing it as a key technology for attaining the UN Sustainable Development Goals in making education available to all young people. Yet there remain persistent concerns over the ethics of AI and the growing commercialisation of education through educational technology.

The panel session will be streamed and you are all welcome to attend. Better still, please ask the panel your questions around inclusion and AI. We will be taking questions on the day, but we are also gathering questions in advance on a Google page. Just add your questions to the list. And if you would like us to name-check you, add your name and where you are from.

Industry 4.0 and Vocational Education and Training

The Taccle AI and VET project has been working with the BBS 2 vocational school in Wolfsburg, Germany. The school has close links with industry, particularly Volkswagen, which has a major manufacturing plant in Wolfsburg. It is developing a series of projects around Industry 4.0, which is largely based on digitalisation, data and the use of Artificial Intelligence. The school has recently produced a video in English (see bottom of page on the Foraus website) entitled Smart factory - Industry 4.0 in Vocational Education and Training. They say:

Teaching the complex interrelationships of Industry 4.0 in vocational training places new demands on training staff and makes modern teaching concepts necessary. At BBS 2 - the "Vocational School 2" - in Wolfsburg, this has led to a conceptual change in the vocational training of automation and mechatronics technicians.

In this deductive approach (from general to specific), the training begins with a digital overall system that serves as a model for professional action. System interrelationships, structure, modes of operation, malfunctions and problem solutions can be taught, learned and discovered using the model of a smart factory as an example. Based on this, the individual components and subsystems can then be understood and comprehended within the overall system.

In the classroom, the complex technologies and processes of Industry 4.0 become tangible in the truest sense of the word. Here, trainees for automation technology and dual students have developed and built a compact smart factory filling system themselves. It works with the same technical components as a production plant in industry.
To support the young people's independent learning, the trainees have developed a learning platform, which also serves the cooperation between training, school, production and industry partners.

AI – Humans must be in Command

The European Trade Union Confederation says its aim is "to ensure that the EU is not just a single market for goods and services, but is also a Social Europe, where improving the well being of workers and their families is an equally important priority. The European social model – until the onset of the crisis – helped Europe to become a prosperous, competitive region with high living standards."

The ETUC has published a policy proposal on the development and use of AI under the headline: AI – Humans must be in Command.

"AI systems are data-driven technologies" they say. "Access to, and the ownership of, data are the core of AI technologies. Data has created a new business model for companies. However, the boundaries between private and non-private data are thin."

They continue:

Data is sensitive. AI innovations are not per se good and do not per se deliver positive outcomes for society. Access to and processing of data needs regulation for legal certainty and predictability, security and safety, and protection for all. Ethical principles are key. They should form a robust and reliable basis for business, workers and society. Ethical principles should be legally binding. Only under this condition will they provide a level playing field and fair competition. However, one AI regulation cannot fit all situations: consumer protection and worker protection need a differentiated approach.

An ambitious European AI regulatory framework should address the specificity of the workplace. Humans must be in command. Any AI technology should enable humans to remain in control. Workers must be able to opt out from human-machine interaction. The regulations must specifically address workers’ data protection and privacy and go beyond GDPR.

Digital skills are crucial. Workers need to be empowered and critically aware of what AI technology at work brings. They need to become “AI literate”. GDPR is a powerful tool that trade unions can use to exercise the “right to explanation”. Worker representatives should have a major role in ensuring this right at the workplace.

They conclude:

AI needs a legal and empowering European framework based on human rights and the public interest, at the service of society, for social and environmental wellbeing and the common good. AI technologies will only deliver fit-for-purpose innovation if they comply with the Treaty-based precautionary principle.

Open Data

Since the sad decline of RSS, we seem to have returned to newsletters as a major means of exchanging knowledge and information. And I subscribe to a lot - probably too many. I used to have a subscription to MIT's The Algorithm, a weekly artificial intelligence newsletter. It was pretty good, although perhaps a little US-centric. But then it was moved behind a paywall, costing 50 US dollars a year for online access. I don't really understand why MIT is so short of funding that it needs subscriptions to fund the production of an online newsletter.

But my weekly data and technology fix is now fulfilled by the excellent newsletter "The Week in Data", dropping into my inbox for free every Friday from the UK Open Data Institute. Not only does it cover open data, from the Institute and the wider world, but it increasingly focuses on Artificial Intelligence and ethical practice in the development and use of AI. Here are just a couple of items from last Friday's edition.

In keeping with the old adage that ‘prevention is better than cure’, NHS England and the Ada Lovelace Institute are piloting algorithmic impact assessments, with the aim of reviewing any possible societal impacts of AI systems before the systems are implemented. The trial aims to make sure that risks, including algorithmic bias, are mitigated before the systems are applied to NHS data.

In other positive tech news, The Social Science Research Council has announced the upcoming launch of the Just Tech Platform. The website, highlighting justice and tech research, will include a free-to-use citation database based on an open-source citation library, and will feature leading academics in this field. The launch event on 1 March will feature, among others, Safiya Umoja Noble – author of Algorithms of Oppression and ODI Summit keynote alumna.

To get your free subscription go to the Open Data Institute website.