DeepSeek: Innovation with Open Source Software

Elise Racine / Better Images of AI / Morning View / CC-BY 4.0

As I write this post, the newspaper headlines are focused on the record decline in value of tech shares, especially those of chip manufacturer Nvidia, following the release of the open source DeepSeek Large Language Model and platform.

Most of the extensive news coverage has focused on the tech business and the likelihood that it represents a bubble, especially for the Generative AI companies: OpenAI, Anthropic, Google and the like. The other main focus has been geopolitical, with China having caught up with the USA in AI development.

For education, DeepSeek can be seen as good news. The domination of Generative AI by large tech companies has priced out public sector EdTech development. Here the big news is that DeepSeek is open source, freely available on the HuggingFace platform. The company is purely focused on research rather than commercial products – the DeepSeek assistant and underlying code can be downloaded for free, while DeepSeek’s models are also cheaper to operate than OpenAI’s o1.

As the Guardian newspaper reports, Dr Andrew Duncan, the director of science & innovation at the UK’s Alan Turing Institute, said the DeepSeek development was “really exciting” because it “democratised access” to advanced AI models by being an open source developer, meaning it makes its models freely available – a path also followed by Mark Zuckerberg’s Meta with its Llama model.
“Academia and the private sector will be able to play around and explore with it and use it as a launching,” he said.

Duncan added: “It demonstrates that you can do amazing things with relatively small models and resources. It shows that you can innovate without having the massive resources, say, of OpenAI.”
It's interesting to hear the motivation for open source put forward by DeepSeek's CEO. As Alberto Romero has written in his Algorithmic Bridge newsletter, in a rare interview for AnYong Waves, a Chinese media outlet, DeepSeek CEO Liang Wenfeng emphasized innovation as the cornerstone of his ambitious vision:

we believe the most important thing now is to participate in the global innovation wave. For many years, Chinese companies are used to others doing technological innovation, while we focused on application monetization—but this isn’t inevitable. In this wave, our starting point is not to take advantage of the opportunity to make a quick profit, but rather to reach the technical frontier and drive the development of the entire ecosystem.

DeepSeek has developed a completely different approach from the big tech companies, who have focused on ever increasing use of hardware to scale Large Language Models. In so doing they have greatly reduced both the financial and the environmental cost of developing such technologies. This may well be a vindication of the policy of prioritising educational spending in China. Romero points out that in China startups and universities can train top AI models and world-class human talent respectively. He says “DeepSeek—contrary to Google, OpenAI, and Anthropic—publishes a lot of papers on frontier research, architectures, training regimes, technical decisions, innovative approaches to AI, and even things that didn’t work.”

DeepSeek doesn’t solve all the problems associated with Generative AI. It doesn’t stop the so-called hallucinations, it doesn’t solve the problems with bias, and it is still only a predictive language model. But through innovation DeepSeek has greatly reduced the need for vast amounts of power, has shown there are alternatives to the demand for huge amounts of capital investment, and has provided a free and Open Source model for further research and innovation.

About the image

'Morning View' is part of the artist's series, 'Algorithmic Encounters': By overlaying AI-generated annotations onto everyday scenes, this series uncovers hidden layers of meaning, biases, and interpretations crafted by algorithms. It transforms the mundane into sites of dialogue, inviting reflection on how algorithms shape our understanding of the world.

The Hype and the Reality

Janet Turra & Cambridge Diversity Fund / Better Images of AI / Ground Up and Spat Out / CC-BY 4.0

It's probably fair to say that the hype around Generative AI far outstrips the reality. Perhaps that is because, in the slow process of discovering actual jobs which AI can do, copy editing and advertising are among those at the forefront. And given that the AI companies are very keen on hyping their products, there is an endless stream of articles saying how wonderful AI is for almost every occupation. These articles almost always talk about how much time AI saves and how this increases productivity.

Here's an example from a company called Screenloop:

"AI is not simply a tool—it’s a disruptor. In an era where agility and efficiency are non-negotiable, the role of AI has extended beyond merely assisting recruiters— it’s reshaping the very fabric of how we think about talent acquisition. What was once seen as an administrative task is now a strategic role in defining a company’s future.

Now that AI tools can handle resume screening, scheduling, and even initial candidate engagement, recruiters find themselves in a unique position. The question now isn't just 'how much time can we save?' but rather, 'how can we strategically reinvest it? How can we maximise the impact of the time AI gives back to us?'

Speed and Precision

AI's impact on recruitment processes, particularly in screening, is undeniable. Tools that can parse thousands of resumes in seconds have revolutionised the way talent acquisition operates. But the real advantage isn't just speed—it’s precision. By filtering out irrelevant applications and flagging the most promising candidates based on predetermined criteria, AI doesn't just make processes faster, it makes them smarter.

Enhanced Candidate Experience

A significant benefit of AI in recruitment is the potential to enhance candidate experience. In fact, companies using AI in hiring have reported a 30% increase in candidate satisfaction, not because the tools replaced human interaction, but because they created space for more meaningful engagement. When AI handles repetitive tasks, recruiters can focus on optimising the candidate experience based on insights collected by tools such as Screenloop's candidate pulse solution. And this matters—experience drives employer brand perception in a competitive market."

And so on.......

But now the reality. Charlie Ball, UK Jisc's head of labour market intelligence, has published his annual forecast of what's to come in the labour market in the year ahead. And one of his predictions is about recruitment! Here is what he says:

AI may not take your jobs but it's a headache in recruitment.

As the ISE have been telling us in detail, AI is not, so far, displacing loads of jobs as might have been feared a couple of years ago, but it's still having quite an impact. AI is good at writing covering letters and CVs, and so it makes sense for candidates to use them, and so they do. That means it's a lot easier for candidates to write a lot of relatively good job applications, quickly, and so that's exactly what they do.

This means everyone is applying for all the jobs available, so even though there are actually more jobs than there used to be, they're all getting more applicants, all using the same tools, with largely identical applications and recruiters are swamped, which means they have to spend more resources to administer a recruitment round, which ultimately makes recruitment harder and more expensive. That may start to have an effect on vacancy numbers.

What recruiters want to do is encourage applicants to use AI well - after all, it's likely to be a useful business skill - and discourage it being used badly. So they don't want to stop it entirely, but do expect a lot more talk this year about how to limit it being used in applications. And a lot of talk from online hustlers claiming they have a magic solution to make your applications foolproof using AI, of course.

About the feature image

The outputs of Large Language Models do seem uncanny, often leading people to compare the abilities of these systems to thinking, dreaming or hallucinating. This image is intended to be a tongue-in-cheek dig, suggesting that AI is at its core just a simple information ‘meat grinder,’ feeding off the words, ideas and images on the internet, chopping them up and spitting them back out. The collage also makes the point that when we train these models on our biased, inequitable world the responses we get cannot possibly differ from the biased and inequitable world that made them. Attributions - Studio of: Willem van de Velde II, Michele Tosini https://nationalgalleryimages.ie/groupitem/40/ This image was created using Canva: www.canva.com

Survey of 18,000 workers finds use of ChatGPT widespread

Reihaneh Golpayegani & Cambridge Diversity Fund / Better Images of AI / Women and AI / CC-BY 4.0

I have been moaning lately about the quality of so-called research and publications about education, learning and the use of Generative AI. Well, the hype is showing no signs of dying down, but there does seem to be some pretty good research beginning to emerge. And I understand it takes time to do research, especially if you are trying to find out about the potential impact of AI on learning.

Anyway, one publication I liked, not so much about formal education but about the use of AI in work and its potential impact on employment, is the research article 'The unequal adoption of ChatGPT exacerbates existing inequalities among workers' by Anders Humlum and Emilie Vestergaard, published on December 30 of last year.

In the abstract they say:

We study the adoption of ChatGPT, the icon of Generative AI, using a large-scale survey linked to comprehensive register data in Denmark. Surveying 18,000 workers from 11 exposed occupations, we document that ChatGPT is widespread, especially among younger and less-experienced workers. However, substantial inequalities have emerged. Women are 16 percentage points less likely to have used the tool for work. Furthermore, despite its potential to lift workers with less expertise, users of ChatGPT earned slightly more already before its arrival, even given their lower tenure. Workers see a substantial productivity potential in ChatGPT but are often hindered by employer restrictions and a perceived need for training.

Somebody - and I can't remember who - usefully got ChatGPT to do a summary and published it on LinkedIn:

  1. 41% of employees said they have used ChatGPT for work tasks.
  2. Women are 16% less likely to use ChatGPT for work than men.
  3. Marketing professionals are the most likely to use ChatGPT (at 65%). Financial professionals are the least likely to use it (at 12%).
  4. Less experienced and younger employees are more likely to use it. Every year of experience and age reduces likelihood of use by 0.6 and 0.7 percentage points respectively.
  5. More highly paid professionals are more likely to use it.
  6. Employees think ChatGPT can lead to big productivity gains in their job. They said that it could halve the time to complete about a third of their tasks. However, many employees remain very uncertain about time savings from using the tech.
  7. Despite these perceived time savings, regular use by employees remains limited. For instance, among employees who think it will save half the time in their job, only about a third intend to use it.
  8. Time saving may not lead to greater productivity. 37% of employees said they will not complete more tasks if ChatGPT can do it for them. 24% said they will devote more effort to using ChatGPT if it can save time.
  9. The use of ChatGPT is mainly driven by individual worker initiative rather than company policy and systems.
  10. Employees often face frictions in using ChatGPT. The limiting factors seem to be lack of training (42%) and company restrictions on use (32%). Restrictions on use were particularly high in the financial sector (82%). Only 8% of employees reported fear of job loss as a reason for not using ChatGPT.

I think the finding that the use of ChatGPT is mainly driven by individual worker initiative rather than company policy and systems is interesting. It is reflected in our findings from the AI Pioneers project that most use of GenAI in vocational education and training is driven by individual teacher initiative! But most research in learning, or rather more commonly in education, has focused on formal teaching and learning. Of course most people trying out GenAI are informal learners, and there has been less insight into this.

About the image

This image is inspired by Virginia Woolf's A Room of One's Own. According to this essay, which is based on her lectures at Newnham College and Girton College, Cambridge University, two things are essential for a woman to write fiction: money and a room of her own. This image adds a new layer to this concept by bringing it into the AI era. Just as Woolf explored the meaning of “women and fiction”, defining “women and AI” is quite complex. It could refer to algorithms’ responses to inquiries involving women, the influence of trending comments on machine stereotypes, or the share of women in big tech. The list can go on and involve many different experiences of women with AI as developers, users, investors, and beyond. With all its complexity, Woolf’s ideas offer us insight: allocating financial resources and providing safe spaces, in reality and online, is necessary for women to have positive interactions with AI and to be well-represented in this field.

AI and the future of jobs: An update

Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0

One feature of the ongoing debates around Generative AI is that almost everything seems to be contested. While the big tech companies are ever bullish about the prospects for their new applications, controversy continues about the wider societal impact of these tools, including on education and employment.

Despite the initial concerns about the impact of Generative AI on employment, it seemed that fears were overblown, although this may now be changing. Even so, replacement of staff by AI may depend not just on sectors and occupations but also on the organisation and size of companies. Of course the motivation of companies to invest in AI is to increase profits. And it may be that the scale of organisational and workflow change required to introduce more AI has led to smaller companies holding back, as indeed have the ongoing doubts about the reliability of Generative AI applications.

However, there are signs of increasing use of AI in the software industry, albeit mainly for boosting the speed of developing code, leading to higher productivity. More aggressive companies are going further, with Meta CEO Zuckerberg saying AI will replace mid-level engineers at Facebook, Instagram, and WhatsApp by 2025. Zuckerberg recently said that Meta and other tech companies are working on developing AI systems that are able to do complex coding with minimal human interaction. There is little doubt that creative jobs in the media, film and advertising industries are coming under pressure with the increasing adoption of AI.

The World Economic Forum (WEF) recently released its Future of Jobs Report 2025, including the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report also finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030. Of course the key word here is “could”.

There are two new developments which are worrying for future jobs. The first is AI agents, the latest products from the big tech industry. These are designed to split up work tasks and undertake them semi-autonomously. But for all the hype, it remains to be seen how effective such agents might be. The second is the increasing use of AI for training robots. Robots have previously been difficult and expensive to train. AI may substantially reduce the cost of training, leading to a new wave of automation in many industries.

But all this is speculation, and finding reliable research remains a challenge. From an education and training perspective it seems to point to the importance of AI literacy (as an extension of digital literacy) and the need to ramp up continuing training for employees whose work is changing as a result of AI. Interestingly, the WEF report found that 77 percent of surveyed firms will launch retraining programs to help current workers collaborate with AI systems between 2025 and 2030.

About the Image

'Web of Influence I' is part of the artist's series, 'The Bigger Picture': exploring themes of digital doubles, surveillance, omnipresence, ubiquity, and interconnectedness. Adobe FireFly was used in the production of this image, using consented original material as input for elements of the images. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.

Digital Pedagogies Rewilded

Ed Dingli for Fine Acts

I've written a lot about AI and education over the last year. I've not written so much about AI and learning, and I'm going to try to remedy this in the next year. I've been writing for the AI Pioneers project, in which Pontydysgu is a partner. But of course AI Pioneers is not the only project around AI funded under the European Erasmus+ programme.

And I very much like the HIP - Hacking Innovative Pedagogies: Digital Education Rewilded Erasmus+ project carried out by the University of Graz, Aalborg University and Dublin City University.

They quote Beskorsa et al. (2023) saying:

Hacking innovative pedagogy means using existing methods or tools, spicing them up with creativity and curiosity and then using them to find new, exciting, or out-of-the- box solutions. It fosters experimentation, exploration, collaboration, and the integration of technology to promote critical thinking, problem solving and other key 21st century skills.

The web site is beautifully designed and a lot of fun.

And on February 20 and 21 they are holding a symposium in Dublin. This is the description:

A symposium for thinking otherwise about critical AI and post-AI pedagogies of higher education, as part of the Erasmus+ Hacking Innovative Pedagogies: Digital Learning Rewilded project.

This symposium aims to bring educators, learners, and interested others together to see how we might co-design futures beyond the calculative and output-obsessed forms which GenAI could funnel us into if we are not careful. It seeks to explore ways of teaching and learning that are based on mutualism, that recognise teaching as distributed activity and that honour our deep imaginative capacities for good (Czerniewicz & Cronin, 2023). We need to craft critical, creative and ethical responses in community to help address the multitude of issues now posed to educational assessment, future jobs, the environment, biases and increases in cyber-crime and deepfakes.

Come and help us think together during this event so as to rewild our pedagogical thinking and futures dreaming (Beskorsa et al, 2023; Lyngdorf et al 2024). In the words of Dr. Ruha Benjamin, we invite you to “invoke stories and speculation as surrogates, playing and poetry as proxies, and myths, visions, and narratives all as riffs on the imagination” (Benjamin, 2024 p. ix).

The symposium is free to attend, in person or online.