Developing technology in Europe: are industrial clusters the way forward?

Look at a list of the ten tech companies with the highest market valuation as of mid-November. With the exception of the Taiwanese semiconductor giant TSMC, all are American; no European company even comes close.

In an article entitled Why Does U.S. Technology Rule? in his new (free) newsletter, Krugman Wonks Out, renowned economist Paul Krugman examines the reasons for this American domination. Krugman points out that America is a big country, “yet our tech giants all come from a small part of that big nation”. Six of the companies on the list are based in Silicon Valley, he says; while Tesla has moved its headquarters to Austin, it was Silicon Valley-based when it made electric cars cool; and the other two are based in Seattle, which is something of a secondary technology cluster.

Yet discussion, and to an extent policy direction, has focused on things like excessive regulation in Europe, a financial culture unwilling to take risks, and so on. These are the reasons often cited for large American companies dominating the development of AI.

Krugman goes on to say:

I’m not saying that none of this is relevant. But one way to think about technology in a global economy is that development of any particular technology tends to concentrate in a handful of geographical clusters, which have to be somewhere — and when it comes to digital technology these clusters, largely for historical reasons, are in the United States. To oversimplify, maybe we’re not really talking about American tech dominance; we’re talking about Silicon Valley dominance.

He ascribes Europe’s historically lower levels of G.D.P. per capita than the United States to Europeans’ shorter working hours, including mandatory holiday pay, “while America was (and is) the no-vacation nation.” Europeans had less stuff but more time, he says, “and it was certainly possible to argue that they were making the right choice.”

Indeed, he goes on to say that analysis shows that, excluding the main ICT sectors (the manufacturing of computers and electronics, and information and communication activities), EU productivity was broadly on a par with the US in the period 2000-2019.

Besides technology, the US also has high productivity growth in professional services and in finance and insurance, reflecting strong ICT technology diffusion effects.

Industrial clusters have a key impact on developing and exchanging knowledge, as happened in the past in the cutlery industry in Sheffield, “but the same logic, especially local diffusion of knowledge, applies to tech in Silicon Valley, or finance in Manhattan”.

Krugman concludes by asking two big further questions.

First, to what extent does high productivity in a few geographical clusters trickle down to the rest of the economy? Second, is there any way Europe can make a dent in these U.S. advantages?

This article caught my attention because at the end of the last century there was a big discussion about the role of industrial clusters in Europe. Cedefop published a book focusing on knowledge, education and training in clusters for a European-US conference held in Akron.

I've finally managed to find a digital copy of the book and will summarise some of the ideas. But a big question for me is if and how policies at a national and regional level can support the development of regional industrial clusters in Europe and what impact this might have in developing knowledge in key sectors including technology and AI. What can we do to make such knowledge clusters happen?

Social generative AI for education

Ariyana Ahmad & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0

I am very impressed by a paper, Towards social generative AI for education: theory, practices and ethics, by Mike Sharples. Here is a quick summary, but I recommend reading the entire article.

In his paper, Mike Sharples explores the evolving landscape of generative AI in education by discussing different AI system approaches. He identifies several potential AI types that could transform learning interactions: generative AIs that act as possibility generators, argumentative opponents, design assistants, exploratory tools, and creative writing collaborators.

The research highlights that current AI systems primarily operate through individual prompt-response interactions. However, Sharples suggests the next significant advancement will be social generative AI capable of engaging in broader, more complex social interactions. This vision requires developing AI with sophisticated capabilities such as setting explicit goals, maintaining long-term memory, building persistent user models, reflecting on outputs, learning from mistakes, and explaining reasoning.

To achieve this, Sharples proposes developing hybrid AI systems that combine neural networks with symbolic AI technologies. These systems would need to integrate technical sophistication with ethical considerations, ensuring respectful engagement by giving learners control over their data and learning processes.

Importantly, the paper emphasizes that human teachers remain fundamental in this distributed system of human-AI interaction. They will continue to serve as conversation initiators, knowledge sources, and nurturing role models whose expertise and human touch cannot be replaced by technology.

The research raises critical philosophical questions about the future of learning: How can generative AI become a truly conversational learning tool? What ethical frameworks should guide these interactions? How do we design AI systems that can engage meaningfully while respecting human expertise?

Mike Sharples concludes by saying that designing new social AI systems for education requires more than fine tuning existing language models for educational purposes.

It requires building GenAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and experience with networked machine learning.

Do we need specialised AI tools for education and instructional design?

Photo by Amélie Mourichon on Unsplash

In last week’s edition of her newsletter, Philippa Hardman reported on an interesting research project she has undertaken to explore the effectiveness of Large Language Models (LLMs) like ChatGPT, Claude, and Gemini in instructional design. It seems instructional designers are increasingly using LLMs to complete learning design tasks like writing objectives, selecting instructional strategies and creating lesson plans.

The question Hardman set out to explore was: “how well do these generic, all-purpose LLMs handle the nuanced and complex tasks of instructional design? They may be fast, but are AI tools like Claude, ChatGPT, and Gemini actually any good at learning design?” To find out, she set two research questions: the first to sound out LLMs’ theoretical knowledge of instructional design, and the second to assess their practical application. She then analysed each model’s responses to assess theoretical accuracy, practical feasibility, and alignment between theory and practice.
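A multi-dimensional assessment like this can be pictured as a simple scoring rubric. The sketch below is purely illustrative: the dimension names follow her three criteria, but the model labels, the 1-5 scale and the scores are placeholder assumptions of mine, not Hardman’s actual method or data.

```python
# Toy rubric-scoring sketch: rate each model's response on three
# dimensions (1-5) and rank models by their average score.
from statistics import mean

RUBRIC = ("theoretical_accuracy", "practical_feasibility", "theory_practice_alignment")

def overall_score(scores: dict) -> float:
    """Average a response's scores across the three rubric dimensions."""
    return mean(scores[dim] for dim in RUBRIC)

# Placeholder scores for two hypothetical model responses
responses = {
    "model_a": {"theoretical_accuracy": 3, "practical_feasibility": 2, "theory_practice_alignment": 2},
    "model_b": {"theoretical_accuracy": 4, "practical_feasibility": 3, "theory_practice_alignment": 3},
}

ranking = sorted(responses, key=lambda m: overall_score(responses[m]), reverse=True)
print(ranking)  # ['model_b', 'model_a']
```

In practice each dimension would be scored by a human reviewer reading the model’s output against the rubric; the code only shows how the per-dimension judgements combine into a comparison.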

In her newsletter Hardman gives a detailed account of the outcomes of testing the different models from each of the three LLM providers, but the headline is that across all generic LLMs, AI is limited in both its theoretical understanding and its practical application of instructional design. The reasons, she says, are that they lack industry-specific knowledge and nuance, they uncritically use outdated concepts, and they display a superficial application of theory.

Hardman concludes that “While general-purpose AI models like Claude, ChatGPT, and Gemini offer a degree of assistance for instructional design, their limitations underscore the risks of relying on generic tools in a specialised field like instructional design.”

She goes on to point out that in industries like coding and medicine, similar risks have led to the emergence of fine-tuned AI copilots, such as Cursor for coders and Hippocratic AI for medics, and she sees a need for “similar specialised AI tools tailored to the nuances of instructional design principles, practices and processes.”