Survey of 18,000 workers finds use of ChatGPT widespread

Reihaneh Golpayegani & Cambridge Diversity Fund / Better Images of AI / Women and AI / CC-BY 4.0

I have been moaning lately about the quality of so-called research and publications about education, learning and the use of Generative AI. Well, the hype is showing no signs of dying down, but some pretty good research is beginning to emerge. And I understand it takes time to do research, especially if you are trying to find out about the potential impact of AI on learning.

Anyway, one publication I liked, not so much about formal education but about the use of AI in work and its potential impact on employment, is the research article 'The unequal adoption of ChatGPT exacerbates existing inequalities among workers' by Anders Humlum and Emilie Vestergaard, published on December 30 of last year.

In the abstract they say:

We study the adoption of ChatGPT, the icon of Generative AI, using a large-scale survey linked to comprehensive register data in Denmark. Surveying 18,000 workers from 11 exposed occupations, we document that ChatGPT is widespread, especially among younger and less-experienced workers. However, substantial inequalities have emerged. Women are 16 percentage points less likely to have used the tool for work. Furthermore, despite its potential to lift workers with less expertise, users of ChatGPT earned slightly more already before its arrival, even given their lower tenure. Workers see a substantial productivity potential in ChatGPT but are often hindered by employer restrictions and a perceived need for training.

Somebody - and I can't remember who - usefully got ChatGPT to do a summary and published it on LinkedIn:

  1. 41% of employees said they have used ChatGPT for work tasks.
  2. Women are 16 percentage points less likely to use ChatGPT for work than men.
  3. Marketing professionals are the most likely to use ChatGPT (65%); financial professionals are the least likely (12%).
  4. Less experienced and younger employees are more likely to use it: every year of experience and of age reduces the likelihood of use by 0.6 and 0.7 percentage points respectively.
  5. More highly paid professionals are more likely to use it.
  6. Employees think ChatGPT can lead to big productivity gains in their jobs. They said it could halve the time needed to complete about a third of their tasks. However, many employees remain very uncertain about the time savings from using the technology.
  7. Despite these perceived time savings, regular use by employees remains limited. For instance, among employees who think it will halve the time for tasks in their job, only about a third intend to use it.
  8. Time savings may not lead to greater productivity. 37% of employees said they will not complete more tasks if ChatGPT can do them; 24% said they will devote more effort to using ChatGPT if it can save time.
  9. The use of ChatGPT is mainly driven by individual worker initiative rather than company policy and systems.
  10. Employees often face frictions in using ChatGPT. The limiting factors seem to be lack of training (42%) and company restrictions on use (32%). Restrictions on use were particularly high in the financial sector (82%). Only 8% of employees reported fear of job loss as a reason for not using ChatGPT.

I think the finding that the use of ChatGPT is mainly driven by individual worker initiative rather than company policy and systems is interesting. It is reflected in our findings from the AI Pioneers project that most use of GenAI in vocational education and training is driven by individual teacher initiative! But most research in learning, or more commonly education, has focused on formal teaching and learning. Of course, most people trying out GenAI are informal learners, and there has been less insight into this.

About the image

This image is inspired by Virginia Woolf's A Room of One's Own. According to this essay, which is based on her lectures at Newnham College and Girton College, Cambridge University, two things are essential for a woman to write fiction: money and a room of her own. This image adds a new layer to this concept by bringing it into the AI era. Just as Woolf explored the meaning of “women and fiction”, defining “women and AI” is quite complex. It could refer to algorithms’ responses to inquiries involving women, the influence of trending comments on machine stereotypes, or the share of women in big tech. The list can go on and involve many different experiences of women with AI as developers, users, investors, and beyond. With all its complexity, Woolf’s ideas offer us insight: allocating financial resources and providing safe spaces, in reality and online, is necessary for women to have positive interactions with AI and to be well-represented in this field.

AI and the future of jobs: An update

Elise Racine & The Bigger Picture / Better Images of AI / Web of Influence I / CC-BY 4.0

One feature of the ongoing debates around Generative AI is that almost everything seems to be contested. While the big tech companies are ever bullish about the prospects for their new applications, controversy continues about the wider societal impact of these tools, including on education and employment.

Despite initial concerns about the impact of Generative AI on employment, fears seemed overblown, although this may now be changing. Even so, the replacement of staff by AI may depend not just on sectors and occupations but also on the organisation and size of companies. Of course, the motivation of companies to invest in AI is to increase profits. And it may be that the scale of organisational and workflow change required to introduce more AI has led smaller companies to hold back, as indeed have the ongoing doubts about the reliability of Generative AI applications.

However, there are signs of increasing use of AI in the software industry, albeit for boosting the speed of developing code and so raising productivity, with more aggressive companies going further: Meta's CEO Mark Zuckerberg has said AI will replace mid-level engineers at Facebook, Instagram, and WhatsApp by 2025, and recently said that Meta and other tech companies are working on developing AI systems able to do complex coding with minimal human interaction. There is little doubt that creative jobs in the media, film and advertising industries are coming under pressure with the increasing adoption of AI. The World Economic Forum (WEF) recently released its Future of Jobs Report 2025, including the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report also finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030. Of course, the key word here is “could”.

There are two new developments which are worrying for future jobs. The first is AI agents, the latest products from the big tech industry. These are designed to split up work tasks and undertake them semi-autonomously, but for all the hype it remains to be seen how effective such agents might be. The second is the increasing use of AI for training robots. Robots have previously been difficult and expensive to train; AI may substantially reduce the cost of training, leading to a new wave of automation in many industries.

But all this is speculation, and finding reliable research remains a challenge. From an education and training perspective it seems to point to the importance of AI literacy (as an extension of digital literacy) and the need to ramp up continuing training for employees whose work is changing as a result of AI. Interestingly, the WEF report found that 77 percent of surveyed firms will launch retraining programmes between 2025 and 2030 to help current workers collaborate with AI systems.

About the Image

'Web of Influence I' is part of the artist's series, 'The Bigger Picture': exploring themes of digital doubles, surveillance, omnipresence, ubiquity, and interconnectedness. Adobe FireFly was used in the production of this image, using consented original material as input for elements of the images. Elise draws on a wide range of her own artwork from the past 20 years as references for style and composition and uses Firefly to experiment with intensity, colour/tone, lighting, camera angle, effects, and layering.

Developing technology in Europe: are industrial clusters the way forward?

Look at a list of the ten tech companies with the highest market valuation as of mid-November. With the exception of the Taiwanese semiconductor giant TSMC, all are American; no European company even comes close.

In an article entitled Why Does U.S. Technology Rule? in his new (free) newsletter, Krugman Wonks Out, the renowned economist Paul Krugman examines the reasons for this American domination. Krugman points out that America is a big country, “yet our tech giants all come from a small part of that big nation”. Six of the companies on the list are based in Silicon Valley, he says; Tesla, although it has moved its headquarters to Austin, was Silicon Valley-based when it made electric cars cool; and the other two are based in Seattle, which is sort of a secondary technology cluster.

Yet discussion, and to an extent policy direction, has focused on things like excessive regulation in Europe, a financial culture unwilling to take risks, and so on. These are the reasons often cited for large American companies dominating the development of AI.

Krugman goes on to say:

I’m not saying that none of this is relevant. But one way to think about technology in a global economy is that development of any particular technology tends to concentrate in a handful of geographical clusters, which have to be somewhere — and when it comes to digital technology these clusters, largely for historical reasons, are in the United States. To oversimplify, maybe we’re not really talking about American tech dominance; we’re talking about Silicon Valley dominance.

He ascribes Europe's historically lower levels of GDP per capita, compared with the United States, to shorter European working hours, including mandatory holiday pay, “while America was (and is) the no-vacation nation.” Europeans had less stuff but more time, he says, “and it was certainly possible to argue that they were making the right choice.”

Indeed, he goes on to say that analysis shows that, excluding the main ICT sectors (the manufacturing of computers and electronics, and information and communication activities), EU productivity was broadly on a par with the US in the period 2000-2019.

Besides technology, the US also has high productivity growth in professional services and finance and insurance, reflecting strong ICT technology diffusion effects.

Industrial clusters have a key impact on developing and exchanging knowledge, as happened in the past in the cutlery industry in Sheffield, “but the same logic, especially local diffusion of knowledge, applies to tech in Silicon Valley, or finance in Manhattan”.

Krugman concludes by asking two big further questions.

First, to what extent does high productivity in a few geographical clusters trickle down to the rest of the economy? Second, is there any way Europe can make a dent in these U.S. advantages?

This article caught my attention because at the end of the last century there was a big discussion about the role of industrial clusters in Europe. Cedefop published a book focusing on knowledge, education, training and clusters for a European-US conference held in Akron.

I've finally managed to find a digital copy of the book and will summarise some of its ideas. But a big question for me is whether and how policies at national and regional level can support the development of regional industrial clusters in Europe, and what impact this might have on developing knowledge in key sectors, including technology and AI. What can we do to make such knowledge clusters happen?

Is it inevitable that AI will make us lazier?

Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0

I probably spend too much time reading newsletters, but it's a good way to start the day. One of my favourite morning newsletters is Memex 1.1 by educator and journalist John Naughton, which arrives three times a week at 7 in the morning. Today's edition drew attention to the transcript of an interesting El Pais interview with Ethan Mollick, who, Naughton says, is one of the most interesting and insightful writers on ‘AI’. Mollick teaches at the Wharton business school at the University of Pennsylvania and has from the outset viewed the technology as an augmentation of human capabilities. I've ordered a copy of his book, Co-Intelligence, and will provide a longer review when I have read it. But in the interview Mollick says the best advice from the book is to spend 10 hours with AI and apply it to everything you do: for whatever reason, very few people are actually spending the time they need to really understand these systems.

Here are a few of the questions and answers from the El Pais interview.

Q. You don’t like to call AI a crutch.

A. The crutch is a dangerous approach because if we use a crutch, we stop thinking. Students who use AI as a crutch don’t learn anything. It prevents them from thinking. Instead, using AI as co-intelligence is important because it increases your capabilities and also keeps you in the loop.

Q. Isn’t it inevitable that AI will make us lazier?

A. Calculators also made us lazier. Why aren’t we doing math by hand anymore? You should be taking notes by hand now instead of recording me. We use technology to take shortcuts, but we have to be strategic in how we take those shortcuts.

Q. Why should we approach artificial intelligence with a strategy?

A. AI does so many things that we need to set guardrails on what we don’t want to give up. It’s a very weird, general-purpose technology, which means it will affect all kinds of things, and we’ll have to adjust socially. We did a very bad job with the last major social adjustment, social media. This time we need to be more deliberate.

About the image

This image represents the interest of technology companies (including but not limited to AI) in the data produced by students. Young students at computers with retinal scanners on their screens suggest the uptake not only of data entry but also of biometric data. The representation of pixelation, binary code, and the data-wave heatmap at the top suggest the ways that student work - and bodies - are abstracted by the datafication process. Design created using public domain images and effects in Rawpixel.

How might AI support how people learn outside the classroom?

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Data Mining 3 / CC-BY 4.0

Every day hundreds of posts are written on social media about AI and education. Every day yet more papers are published about AI and education, alongside webinars, seminars and conferences on the subject. Yet nearly all of them are about formal education, education in the classroom. But as Stephen Downes says in a commentary on a blog by Alan Levine, we need more on how people actually teach and actually learn. "We get a lot in the literature about how it happens in the classroom. But the classroom is a very specialized environment, designed to deal with the need to foster a common set of knowledge and values on a large population despite constraints in staff and resources. But if we go out into homes or workplaces, we see teaching and learning happening all the time..."

And of course people learn in different ways: through being shown how to do something, through watching a video, through working, playing and talking. Sadly, in all these discussions about AI and education there is little about how people learn, and even less on how AI might support (or hinder) informal learning.