Why does GenAI affect certain occupations more than others?

Elise Racine / Better Images of AI / Static / CC-BY 4.0

An article in the Financial Times by Carnegie Mellon University Professors Laurence Ales and Christophe Combemale argues that Generative AI is different from past automation and requires a shift in focus from what AI can do to what it should do.

They put forward four pivotal questions for organisations when contemplating automation with Generative AI. First, how complex is the task? Second, how frequent is the task? Third, how interconnected are the tasks? Fourth, when executing a task, what is the cost of failure?

These questions, they say, should guide companies considering automation and help explain why GenAI affects certain occupations more than others. They go on to say: "The four questions above highlight what makes generative AI unique as an automation technology. As it evolves, GenAI is demonstrating its ability to manage complex tasks at high speed, making it more versatile than traditional automation. By offering a seamless interface and natural language processing capabilities, GenAI progressively lowers fragmentation costs compared with traditional automation. However, the uncertainty surrounding the output of GenAI potentially increases the risk of failure in a task."

About the Image

'Static' is part of the artist's series, 'Back to Basics': A minimalist exploration of foundational digital elements like static and circuits, reflecting the raw, unembellished essence of technological systems.

The danger of lock-in

Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Analog Lecture on Computing / CC-BY 4.0

One fear among researchers in educational technology and AI is lock-in. It has happened before. Companies compete to offer good deals on applications and services, but a lack of interoperability leaves educational organisations stuck if they want to leave or change providers. This was big news at one time with Learning Management Systems (LMS), but the gradual movement towards standards largely overcame the issue. Now, with the big tech AI companies still searching for convincing real-world use cases and turning their eyes to education, it seems it may be happening again.

OpenAI has said it will roll out an education-specific version of its chatbot to about 500,000 students and faculty at California State University as it looks to expand its user base in the academic sector and counter competition from rivals like Alphabet. The rollout will cover 23 campuses of the largest public university system in the United States, enabling students to access personalised tutoring and study guides through the chatbot, while faculty will be able to use it for administrative tasks.

Rival Alphabet (that’s Google to you and me) has already been expanding into the education sector, where it has announced a $120 million investment fund for AI education programs and plans to introduce its GenAI chatbot Gemini to teen students' school-issued Google accounts.

And of course there is Microsoft, which has been offering sweetheart deals on its Office suite and email services to education providers, effectively locking them into the Microsoft world, including Microsoft’s AI.

About the Image

This surrealist collage is a visual narrative on education about AI. The juxtaposition of historical and contemporary images underscores the tension between established institutions of learning and the evolving, boundary-pushing nature of AI. The oversized keyboard, with the “A” and “I” keys highlighted in red, serves as a focal point, symbolising the dominance of AI in contemporary discourse, while the vintage image of the woman in historical attire kneeling at the outdated keyboard symbolises a reclamation of voices historically marginalised in technological innovation, drawing attention to the need for diverse perspectives in educating students about future of AI. Visually reimagining the classroom dynamic critiques the historical gatekeeping of AI knowledge and calls for an educational paradigm that values and amplifies diverse contributions.

Who uses Generative AI at work?

Shady Sharify / Better Images of AI / Who is AI Made Of / CC-BY 4.0

I picked this up from a blog by Doug Belshaw. It is from a report by Anthropic, the AI company behind Claude.ai. Doug points out that it has not been published in an academic journal and is therefore not peer-reviewed, but, he says, they have open-sourced the dataset used for the analysis. And it certainly is interesting.

Here, we present a novel empirical framework for measuring AI usage across different tasks in the economy, drawing on privacy-preserving analysis of millions of real-world conversations on Claude.ai [Tamkin et al., 2024]. By mapping these conversations to occupational categories in the U.S. Department of Labor’s O*NET Database, we can identify not just current usage patterns, but also early indicators of which parts of the economy may be most affected as these technologies continue to advance.

We use this framework to make five key contributions:

1. Provide the first large-scale empirical measurement of which tasks are seeing AI use across the economy …Our analysis reveals highest use for tasks in software engineering roles (e.g., software engineers, data scientists, bioinformatics technicians), professions requiring substantial writing capabilities (e.g., technical writers, copywriters, archivists), and analytical roles (e.g., data scientists). Conversely, tasks in occupations involving physical manipulation of the environment (e.g., anesthesiologists, construction workers) currently show minimal use.

2. Quantify the depth of AI use within occupations …Only ∼ 4% of occupations exhibit AI usage for at least 75% of their tasks, suggesting the potential for deep task-level use in some roles. More broadly, ∼ 36% of occupations show usage in at least 25% of their tasks, indicating that AI has already begun to diffuse into task portfolios across a substantial portion of the workforce.

3. Measure which occupational skills are most represented in human-AI conversations ….Cognitive skills like Reading Comprehension, Writing, and Critical Thinking show high presence, while physical skills (e.g., Installation, Equipment Maintenance) and managerial skills (e.g., Negotiation) show minimal presence—reflecting clear patterns of human complementarity with current AI capabilities.

4. Analyze how wage and barrier to entry correlates with AI usage …We find that AI use peaks in the upper quartile of wages but drops off at both extremes of the wage spectrum. Most high-usage occupations clustered in the upper quartile correspond predominantly to software industry positions, while both very high-wage occupations (e.g., physicians) and low-wage positions (e.g., restaurant workers) demonstrate relatively low usage. This pattern likely reflects either limitations in current AI capabilities, the inherent physical manipulation requirements of these roles, or both. Similar patterns emerge for barriers to entry, with peak usage in occupations requiring considerable preparation (e.g., bachelor’s degree) rather than minimal or extensive training.

5. Assess whether people use Claude to automate or augment tasks …We find that 57% of interactions show augmentative patterns (e.g., back-and-forth iteration on a task) while 43% demonstrate automation-focused usage (e.g., performing the task directly). While this ratio varies across occupations, most occupations exhibited a mix of automation and augmentation across tasks, suggesting AI serves as both an efficiency tool and collaborative partner.

Source: The Anthropic Economic Index
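The depth-of-use figures in point 2 (the share of occupations with AI usage in at least 25% or 75% of their tasks) can be sketched in a few lines of Python. This is a minimal illustration only: the record format and occupation names below are hypothetical, whereas the real open-sourced Anthropic dataset keys conversations to O*NET occupation and task codes.

```python
from collections import defaultdict

# Hypothetical records of (occupation, task, shows_ai_usage) -- the real
# Anthropic dataset maps Claude.ai conversations to O*NET task codes.
records = [
    ("Software Engineer",   "write code",          True),
    ("Software Engineer",   "debug code",          True),
    ("Software Engineer",   "attend meetings",     False),
    ("Technical Writer",    "draft documentation", True),
    ("Technical Writer",    "edit copy",           True),
    ("Construction Worker", "pour concrete",       False),
    ("Construction Worker", "operate machinery",   False),
]

def share_of_occupations(records, threshold):
    """Fraction of occupations whose tasks show AI usage at >= threshold."""
    tasks_by_occupation = defaultdict(list)
    for occupation, _task, used in records:
        tasks_by_occupation[occupation].append(used)
    qualifying = sum(
        1 for flags in tasks_by_occupation.values()
        if sum(flags) / len(flags) >= threshold
    )
    return qualifying / len(tasks_by_occupation)

# With the toy data above: only Technical Writer clears 75% of tasks,
# while Software Engineer and Technical Writer both clear 25%.
print(share_of_occupations(records, 0.75))  # 1 of 3 occupations
print(share_of_occupations(records, 0.25))  # 2 of 3 occupations
```

Run over the full dataset, the same per-occupation aggregation yields the report's headline figures of roughly 4% and 36%.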

About the image

This artwork captures humanity’s collective endeavour in building artificial intelligence, drawing inspiration from Persian Negargari (miniature painting). It emphasises that AI is not the result of sudden breakthroughs but centuries of collaboration among minds, cultures, and technologies. Inspired by Kamal-ud-Din Behzad’s late 15th-century painting “Construction of the Khawarnaq Palace” (circa 1494 CE), the piece celebrates human creativity and labor in crafting a structure that reaches toward the heavens. By blending traditional Persian Negargari with modern AI symbols—such as circuit boards, cloud storage, and digital glitches—the artwork underscores that AI is rooted in human effort, shaped over time through the fusion of data, labor, and algorithms, whilst also highlighting the interplay of heritage and technology.

Council of Europe roadmap for Responsible AI in Education

Council of Europe Consilium

This week has seen extensive press coverage of the AI summit held in Paris, with attendees from 60 countries. Despite the noise, not much seems to have happened. The summit revealed disagreements over regulation, particularly between the USA, where tech companies are lobbying for no or minimal regulation, and Europe, which is continuing to develop a regulatory framework.

The issue of regulation is important for AI in Education, and the Council of Europe has published a roadmap for Responsible AI in Education.

✅ 2025: European Year of Education for Digital Citizenship – Raising awareness & strengthening AI culture in education.
✅ 2025: Recommendation on AI in Teaching & Learning – A framework for responsibly integrating AI literacy into education.
✅ 2026: Common Repository for AI Evaluation – Assessing AI’s effectiveness and pedagogical value.
✅ 2026: Legal Instrument on AI in Education – Establishing rules to protect the integrity of education.
✅ 2027: White Paper on the future of the teaching profession in the Digital Age – Supporting and valuing educators in an AI-driven world.

They say that through collaboration with policymakers, educators, and the EdTech community, they aim to shape an ethical, inclusive, and democratic digital future.
