Digital Pedagogies Rewilded

Ed Dingli for Fine Acts

I've written a lot about AI and education over the last year. I've not written so much about AI and learning, and I'm going to try to remedy this in the coming year. I've been writing for the AI Pioneers project, in which Pontydysgu is a partner. But of course AI Pioneers is not the only project around AI funded under the European Erasmus+ programme.

And I very much like the HIP - Hacking Innovative Pedagogies: Digital Education Rewilded - Erasmus+ project carried out by the University of Graz, Aalborg University and Dublin City University.

They quote Beskorsa et al. (2023) saying:

Hacking innovative pedagogy means using existing methods or tools, spicing them up with creativity and curiosity and then using them to find new, exciting, or out-of-the-box solutions. It fosters experimentation, exploration, collaboration, and the integration of technology to promote critical thinking, problem solving and other key 21st century skills.

The website is beautifully designed and a lot of fun.

And on February 20 and 21 they are holding a symposium in Dublin. This is the description:

A symposium for thinking otherwise about critical AI and post-AI pedagogies of higher education, as part of the Erasmus+ Hacking Innovative Pedagogies: Digital Learning Rewilded project.

This symposium aims to bring educators, learners, and interested others together to see how we might co-design futures beyond the calculative and output-obsessed forms which GenAI could funnel us into if we are not careful. It seeks to explore ways of teaching and learning that are based on mutualism, that recognise teaching as distributed activity and that honour our deep imaginative capacities for good (Czerniewicz & Cronin, 2023). We need to craft critical, creative and ethical responses in community to help address the multitude of issues now posed to educational assessment, future jobs, the environment, biases and increases in cyber-crime and deepfakes.

Come and help us think together during this event so as to rewild our pedagogical thinking and futures dreaming (Beskorsa et al., 2023; Lyngdorf et al., 2024). In the words of Dr. Ruha Benjamin, we invite you to “invoke stories and speculation as surrogates, playing and poetry as proxies, and myths, visions, and narratives all as riffs on the imagination” (Benjamin, 2024, p. ix).

The symposium is free to attend, in person or online.

Developing technology in Europe: are industrial clusters the way forward?

Look at a list of the ten tech companies with the highest market valuation as of mid-November. With the exception of the Taiwanese semiconductor giant TSMC, all are American; no European company even comes close.

In an article entitled Why Does U.S. Technology Rule? in his new (free) newsletter, Krugman Wonks Out, renowned economist Paul Krugman examines the reasons for such American domination. Krugman points out that America is a big country, “yet our tech giants all come from a small part of that big nation”. Six of the companies on the list are based in Silicon Valley, he says, and while Tesla has moved its headquarters to Austin, it was Silicon Valley-based when it made electric cars cool. The other two are based in Seattle, which is something of a secondary technology cluster.

Yet discussion, and to an extent policy direction, has focused on things like excessive regulation in Europe, a financial culture that is unwilling to take risks, and so on. These are the reasons often cited for large American companies dominating the development of AI.

Krugman goes on to say:

I’m not saying that none of this is relevant. But one way to think about technology in a global economy is that development of any particular technology tends to concentrate in a handful of geographical clusters, which have to be somewhere — and when it comes to digital technology these clusters, largely for historical reasons, are in the United States. To oversimplify, maybe we’re not really talking about American tech dominance; we’re talking about Silicon Valley dominance.

He ascribes Europe's historically lower levels of G.D.P. per capita than the United States to Europeans' shorter working hours, including mandatory holiday pay, “while America was (and is) the no-vacation nation.” Europeans had less stuff but more time, he says, “and it was certainly possible to argue that they were making the right choice.”

Indeed, he goes on to say that analysis shows that, excluding the main ICT sectors (the manufacturing of computers and electronics, and information and communication activities), EU productivity was broadly at par with the US in the period 2000-2019.

Besides technology, the US also has high productivity growth in professional services and finance and insurance, reflecting strong ICT technology diffusion effects.

Industrial clusters play a key role in developing and exchanging knowledge, as happened in the past with the cutlery industry in Sheffield: "but the same logic, especially local diffusion of knowledge, applies to tech in Silicon Valley, or finance in Manhattan."

Krugman concludes by asking two further big questions.

First, to what extent does high productivity in a few geographical clusters trickle down to the rest of the economy? Second, is there any way Europe can make a dent in these U.S. advantages?

This article caught my attention because at the end of the last century there was a big discussion about the role of industrial clusters in Europe. Cedefop published a book focusing on knowledge, education and training, and clusters for a European-US conference held in Akron.

I've finally managed to find a digital copy of the book and will summarise some of the ideas. But a big question for me is whether and how policies at national and regional level can support the development of regional industrial clusters in Europe, and what impact this might have on developing knowledge in key sectors, including technology and AI. What can we do to make such knowledge clusters happen?

Is it inevitable that AI will make us lazier?

Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0

I probably spend too much time reading newsletters, but it's a good way to start the day. One of my favourite morning newsletters is Memex 1.1 by educator and journalist John Naughton, which arrives three times a week at 7 in the morning. Today's edition drew attention to the transcript of an interesting El Pais interview with Ethan Mollick who, Naughton says, is one of the most interesting and insightful writers on ‘AI’. Mollick teaches at the Wharton business school at the University of Pennsylvania, and from the outset has viewed the technology as an augmentation of human capabilities. I've ordered a copy of his book, Co-Intelligence, and will provide a longer review when I have read it. In the interview, Mollick says the best advice from the book is to spend 10 hours with AI and apply it to everything you do: for whatever reason, very few people are actually spending the time they need to really understand these systems.

Here are a few of the questions and answers from the El Pais interview.

Q. You don’t like to call AI a crutch.

A. The crutch is a dangerous approach because if we use a crutch, we stop thinking. Students who use AI as a crutch don’t learn anything. It prevents them from thinking. Instead, using AI as co-intelligence is important because it increases your capabilities and also keeps you in the loop.

Q. Isn’t it inevitable that AI will make us lazier?

A. Calculators also made us lazier. Why aren’t we doing math by hand anymore? You should be taking notes by hand now instead of recording me. We use technology to take shortcuts, but we have to be strategic in how we take those shortcuts.

Q. Why should we approach artificial intelligence with a strategy?

A. AI does so many things that we need to set guardrails on what we don’t want to give up. It’s a very weird, general-purpose technology, which means it will affect all kinds of things, and we’ll have to adjust socially. We did a very bad job with the last major social adjustment, social media. This time we need to be more deliberate.

About the image

This image represents the interest of technology companies (including but not limited to AI) in the data produced by students. Young students at computers with retinal scanners on their screens suggest the uptake not only of data entry but also of biometric data. The representation of pixelation, binary code, and the data-wave heatmap at the top suggest the ways that student work - and bodies - are abstracted by the datafication process. Design created using public domain images and effects in Rawpixel.

How might AI support how people learn outside the classroom?

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Data Mining 3 / CC-BY 4.0

Every day hundreds of posts are written on social media about AI and education. Every day yet more papers are published about AI and education. There are webinars, seminars and conferences about AI and education. Yet nearly all of them are about formal education: education in the classroom. But as Stephen Downes says in a commentary on a blog by Alan Levine, we need more on how people actually teach and actually learn. "We get a lot in the literature about how it happens in the classroom. But the classroom is a very specialized environment, designed to deal with the need to foster a common set of knowledge and values on a large population despite constraints in staff and resources. But if we go out into homes or workplaces, we see teaching and learning happening all the time..."

And of course people learn in different ways: through being shown how to do something, through watching a video, through working, playing and talking. Sadly, in all these discussions about AI and education there is little about how people learn, and even less on how AI might support (or hinder) informal learning.

These technologies are complex…

Nadia Piet and AIxDESIGN & Archival Images of AI / Better Images of AI / Limits of Classification / CC-BY 4.0

There are some pretty fearsome discussions going on this week between so-called sceptics of Gen AI and its supporters (although much of the shouting is over the terms of the debate).

But it seems pretty incontestable that the big AI technology providers are trying to muscle in on education as a promising market.

In a series of posts on LinkedIn, Ben Williamson from Edinburgh University has looked at the different initiatives by the companies, who, not surprisingly, are offering incentives to sign up with their AI variant. Google, he said, is almost literally buying institutions, with prime ministerial endorsement, to advance its AI interests. Google last week announced the launch of the AI Campus, with UK Prime Minister Sir Keir Starmer attending “to show his support for our groundbreaking initiative to improve digital skills in the UK in our London home and his constituency.” The pilot, they said, will offer students access to cutting-edge resources on AI and machine learning, as well as mentoring and industry expertise from Google, Google DeepMind, and others.

Meanwhile, not to be outdone, Amazon's cloud computing subsidiary AWS - actually its biggest profit centre - announced a $100 million program to provide AI skills to underserved kids.

But as Williamson pointed out, that $100m is pretty restricted, coming in the form of cloud credits as part of the AWS Education Equity Initiative.

These cloud credits, they say, “essentially act like cash that organizations can use to offset the costs of using AWS's cloud services. Recipients can then take advantage of AWS's comprehensive portfolio of cloud technology and advanced AI services..."

And Microsoft, who have already locked many institutions into their Teams app with all kinds of AI add-ons, are telling education institutions they must upgrade their cloud contracts for purposes of data governance when they use generative AI, locking them in even more.

AI is increasingly seen as a vehicle to expand the cloud business in education, says Williamson, locking education institutions into hard-to-cancel cloud contracts under the guise of claims about AI efficiencies and improvements in outcomes (unproven as they are). He believes AI in education can't be separated from the cloud business model.

In an article on his Substack newsletter, Edward Ongweso Jr points out: "These technologies are complex: their origins, their development, the motivations driving their financing, the political projects they’re connected to, the products they’re integrated."

This shows a need to go beyond present understandings of AI literacy to understand the activities, intentions and impact of the big technology companies. And for education, it further suggests the need to develop our own applications, based on open source software and independent of a reliance on these companies. OpenAI, which started with a mission to develop AI to benefit society, now makes no pretence of its profit-driven motivation, and if that means privatising education, that is not a barrier.

About the image

This image shows a gradual transformation from fish to woman and vice versa - questioning the rigid boundaries of classification and emphasising the fluid, in-between states where entities cannot be neatly boxed into one category or the other. It is particularly reminiscent of early image-generation GAN models, where images could be generated at the midpoint between two concepts in latent space, outputting entertaining visuals while highlighting the complexity, ambiguity, and limits of data labelling.
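For readers curious about the technique the caption mentions, the sketch below illustrates latent-space interpolation: the "midpoint between two concepts" idea. It is a minimal illustration under stated assumptions only: `toy_generator`, `interpolate` and the latent dimensionality are hypothetical placeholders I have introduced, not the API of any particular GAN; a real model would use a trained generator network in place of the toy projection.

```python
import numpy as np

# Stand-in for a trained GAN generator network. In a real GAN (e.g. a DCGAN)
# this would be a learned neural network mapping latent vectors to images;
# here it is a fixed random projection to an 8x8 "image" so the sketch runs.
def toy_generator(z: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(42)  # fixed "weights" for repeatability
    projection = rng.standard_normal((z.size, 64))
    return np.tanh(z @ projection).reshape(8, 8)

def interpolate(z_a: np.ndarray, z_b: np.ndarray, alpha: float) -> np.ndarray:
    """Linear interpolation in latent space: alpha=0 gives z_a, alpha=1 gives
    z_b, and alpha=0.5 the midpoint between the two concepts."""
    return (1.0 - alpha) * z_a + alpha * z_b

latent_dim = 128  # placeholder latent dimensionality

# Latent vectors standing in for two concepts, e.g. "fish" and "woman".
z_fish = np.random.default_rng(0).standard_normal(latent_dim)
z_woman = np.random.default_rng(1).standard_normal(latent_dim)

# Decode images along the path between the two concepts, including the
# ambiguous midpoint that resists neat classification.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    image = toy_generator(interpolate(z_fish, z_woman, alpha))
    print(f"alpha={alpha:.2f} -> image shape {image.shape}")
```

The in-between images along this path are exactly the fluid, hard-to-classify states the artwork plays with: they belong fully to neither endpoint category.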