Digital Humanism

Image generated by DeepAI

I used to be critical of the failure, or slowness, of sociology to develop critical accounts of the impact of digitalisation and the rise of the internet. This seems to be fast changing, especially in the growing critiques of the use of technology for learning.

I just received an email about a new book by Christian Fuchs entitled 'Digital Humanism. A Philosophy for 21st Century Digital Society.' Christian says:

Our contemporary global digital society is not always a good place to live. Authoritarianism, hatred, false news, post-truth culture, the COVID-19 anti-vaccination movement, COVID-19 conspiracy theories, and political polarisation are organised via the Internet. The public sphere is highly polarised. Today, many humans tend to think of other humans mainly in terms of friends and enemies. Robots and Artificial Intelligence-based automation have created new challenges for the world of work. Decades of neoliberalism have increased inequalities. The COVID-19 pandemic has shown the vulnerability of humanity to viruses and health crises.

Humanity and society are in a major crisis and digitalisation mediates this crisis. 'Digital Humanism' explores how Humanism can help us to critically understand how digital technologies shape society and humanity, providing an introduction to Humanism in the digital age. Fuchs introduces the approach of Digital Humanism and outlines foundations of a Radical Digital Humanism, analysing what decolonisation of academia and the study of the digital, media and communication means; what the roles are of robots, automation, and Artificial Intelligence in digital capitalism; and how the communication of death and dying has been mediated by digital technologies, capitalist necropower, and digital capitalism. In order to save humanity and society, we need Radical Digital Humanism now.

And Eva Illouz, Director of Studies at EHESS, Paris, says:

Digital Humanism is the book we have been waiting for. … Digital Humanism refuses to transform humans into machines and to think of machines as humans. This is why this book is such an important and timely intervention.

More information and sample reading can be found at https://fuchsc.uti.at/books/digital-humanism/

AI for marking and feedback

The UK National Centre for AI, hosted through Jisc, has announced the third in a series of pilot activities for AI in education. The pilot project, being undertaken in partnership with Graide, an EdTech company who have built an AI-based feedback and assessment tool, is designed to help understand how universities could benefit from using AI to support the marking and feedback process.

Sue Attewell says:

AI-based marking and feedback tools promise the joint benefits of reducing educators’ workloads, whilst improving the quality, quantity, timeliness and/or consistency of feedback received by students.

After a positive initial assessment of Graide, we are launching this pilot to find out how Jisc’s members could benefit from this solution.

Universities in the UK have been invited to take part in the pilot, in which, following an initial webinar and interviews, a small number of participants will use Graide in practice, with an evaluation of their experience. Stage two of the pilot will focus on exploring the platform's functionality; in stage three, the platform will be used 'live' with at least one cohort of students.

Despite increasing interest in the potential of AI, especially for providing automated feedback to students, there remain limitations. It is notable that the pilot is focused on STEM and the UK National Centre for AI says that "The most appropriate types of assignments will be those where there is a definitive correct answer and where feedback would also be expected on the working out."

Artificial Intelligence and Educational Inclusion

On May 6, Graham Attwell and Angela Karadog from Pontydysgu, together with our colleague George Bekiaridis from ACP in Athens, are taking part in a panel session at the CIISE International Congress on Social and Educational Inclusion at The University of the Basque Country in Bilbao, Spain.

The panel is being organised through the AI@School project, funded by the EU Erasmus+ programme with the theme of Artificial Intelligence and Educational Inclusion. UNESCO are promoting the use of AI in education, seeing it as a key technology for attaining the UN Sustainable Development Goals in making education available to all young people. Yet there remain persistent concerns over the ethics of AI and the growing commercialisation of education through educational technology.

The panel session will be streamed and you are all welcome to attend. But even more, please ask the panel your questions around inclusion and AI. We will be taking questions on the day. But we are also gathering questions in advance on a Google page. Just add your questions to the list. And if you would like us to name check you, add your name and where you are from.

Industry 4.0 and Vocational Education and Training

The Taccle AI and VET project has been working with the BBS 2 vocational school in Wolfsburg, Germany. The school has close links with industry, particularly Volkswagen, who have a major manufacturing plant in Wolfsburg. The school is developing a series of projects around Industry 4.0, which is largely based on digitalisation, data and the use of Artificial Intelligence. The school has recently produced a video in English (see bottom of page on the Foraus website) entitled Smart factory - Industry 4.0 in Vocational Education and Training. They say:

Teaching the complex interrelationships of Industry 4.0 in vocational training places new demands on training staff and makes modern teaching concepts necessary. At BBS 2 - the "Vocational School 2" - in Wolfsburg, this has led to a conceptual change in the vocational training of automation and mechatronics technicians.

In this deductive approach (from general to specific), the training begins with a digital overall system that serves as a model for professional action. System interrelationships, structure, modes of operation, malfunctions and problem solutions can be taught, learned and discovered using the model of a smart factory as an example. Based on this, the individual components and subsystems can then be understood and comprehended within the overall system.

In the classroom, the complex technologies and processes of Industry 4.0 become tangible in the truest sense of the word. Here, trainees for automation technology and dual students have developed and built a compact smart factory filling system themselves. It works with the same technical components as a production plant in industry.
To support the young people's independent learning, the trainees have developed a learning platform, which also serves the cooperation between training, school, production and industry partners.

AI – Humans must be in Command

The European Trade Union Confederation says its aim is "to ensure that the EU is not just a single market for goods and services, but is also a Social Europe, where improving the well being of workers and their families is an equally important priority. The European social model – until the onset of the crisis – helped Europe to become a prosperous, competitive region with high living standards."

The ETUC has published a policy proposal on the development and use of AI under the headline: AI – Humans must be in Command.

"AI systems are data-driven technologies" they say. "Access to, and the ownership of, data are the core of AI technologies. Data has created a new business model for companies. However, the boundaries between private and non-private data are thin."

They continue:

Data is sensitive. AI innovations are not per se good and do not per se deliver positive outcomes for society. Access to and processing of data needs regulation for legal certainty and predictability, security and safety, and protection for all. Ethical principles are key. They should form a robust and reliable basis for business, workers and society. Ethical principles should be legally binding. Only under this condition will they provide a level playing field and fair competition. However, one AI regulation cannot fit all situations: consumer protection and worker protection need a differentiated approach.

An ambitious European AI regulatory framework should address the specificity of the workplace. Humans must be in command. Any AI technology should enable humans to remain in control. Workers must be able to opt out from human-machine interaction. The regulations must specifically address workers' data protection and privacy and go beyond GDPR.

Digital skills are crucial. Workers need to be empowered and critically aware of what AI technology at work brings. They need to become “AI literate”. GDPR is a powerful tool that trade unions can use to exercise the “right to explanation”. Worker representatives should have a major role in ensuring this right at the workplace.

They conclude:

AI needs a legally binding and empowering European framework based on human rights, public interest at the service of society, for the social and environmental wellbeing and common good. AI technologies will only deliver a fit for purpose innovation, if they comply with the Treaty based precautionary principle.