AI: education and learning are not the same thing

Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

As the debate rolls on about the use of AI in education, we seem stuck on previous paradigms about how technology can be used to support the existing education system, rather than thinking about AI and learning. Bill Gates said last week: "The dream that you could have a tutor who’s always available to you, and understands how to motivate you, what your level of knowledge is, this software should give us that. When you’re outside the classroom, that personal tutor is helping you out, encouraging you, and then your teacher, you know, talks to the personal tutor." This can be seen in the release of apps designed to make the system run more efficiently and to support teachers by producing lesson plans, reducing administration and so on. And for learners there is a swathe of tutor apps and agents to help them navigate their way through skills and knowledge development.

But writing about the popular educational exercise of future forecasting in the European Journal of Education in 2022, Neil Selwyn outlined five broad areas of contention that merit closer attention in future discussion and decision-making. These include, he said:

(1) "taking care to focus on issues relating to 'actually existing' AI rather than the overselling of speculative AI technologies;

(2) clearly foregrounding the limitations of AI in terms of modelling social contexts, and simulating human intelligence, reckoning, autonomy and emotions;

(3) foregrounding the social harms associated with AI use;

(4) acknowledging the value-driven nature of claims around AI; and

(5) paying closer attention to the environmental and ecological sustainability of continued AI development and implementation."

In a recent presentation, Rethinking Education, Ilkka Tuomi, rather than predicting the future of technology in education, reconsiders the purpose of AI in education. AI, he says, "changes knowledge production and use. This has implications for education, research, innovation, politics, and culture. Current educational institutions are answers to industrial-age historical needs."

EdTech, he says, has conflated education and learning, but they are not the same thing. He quotes Biesta (2015), who said: "education is not designed so that children and young people might learn – people can learn anywhere and do not really need education for it – but so that they might learn particular things, for particular reasons, supported by particular (educational) relationships."

He goes on to quote Arendt (1961), who said: “Normally the child is first introduced to the world in school. Now school is by no means the world and must not pretend to be; it is rather the institution that we interpose between the private domain of home and the world in order to make the transition from the family to the world possible at all. Attendance there is required not by the family but by the state, that is by the public world, and so, in relation to the child, school in a sense represents the world, although it is not yet actually the world.”

Education 4.0, he says, is supposedly about "preparing children for the demands of the future", so that "education becomes a skill-production machine." Yet "skills are typically reflections of existing technology that is used in productive practice" and "skills change when technology changes." Tuomi notes: "There are now 13 393 skills listed in the European Skills, Competences, and Occupations taxonomy."

Digital skills are special, he says "because the computer is a multi-purpose tool" and "AI skills are even more special, because they interact with human cognition."

Social and emotional "skills", meanwhile, rank-order people: "'21st century skills' are strongly linked to human personality, which, by definition, is stable across the life-span." People can be sorted based on, e.g., "openness to experience," "conscientiousness," "agreeableness," "verbal ability," "complex problem-solving skills," etc.

His position is that these rankings change little through education: "Instead, training and technology potentially increase existing differences."

Tuomi draws attention to the three social functions of education:

  • "Enculturation: Becoming a member of the adult world, community of practice, or thought community
  • Development of human agency: Becoming a competent participant in social and material worlds with the capability to transform them
  • Reproduction of social structures: Maintaining social continuity; social stratification through qualification and social filtering"

He maps AI in education onto each of these functions:

  • Enculturation: "AI for knowledge transfer and mastery"
  • Development of human agency: "AI for augmentation of agency"
  • Reproduction of social structures: "AI for prediction and classification (drop-out / at-risk, high-stakes assessment)"

But incentives and motives in higher education are shifting. While "students used to be proud to be on their way into becoming respected experts and professionals in the society" – something which, for many families, required sacrifice – students are now facing LLMs that seem to know everything. Why, he asks, "should you waste your time in becoming an expert in a world, where the answers and explanations become near zero-cost commodities?" What happens to HE, he asks, "when AI erodes the epistemic function of education? The traditional focus of AI&ED in accelerating learning and achieving mastery of specific knowledge topics is not sustainable."

His proposal is that "The only sustainable advantage for primary and secondary education, will be a focus on the development of human agency. Agency is augmented by technology. Agency is culturally embedded and relies on social collaboration and coordination. Affect and emotion are important and the epistemic function will be increasingly seen from the point of view of cognitive development (not knowledge acquisition). Qualification has already lost importance as the network makes history visible. It still is important for social stratification (in many countries)."

He concludes by reiterating that "Education is a social institution. It should not be conflated with 'learning'. AI vendors typically reinterpret education as learning. Education becomes “personalized” and “individualized,” and the objective changes to fast acquisition of economically useful skills and knowledge. The vendors are looking for education under the lamp-post, but this lamp-post is something they themselves have set up. Very little to do with education."

AI in ED: Equity, XAI and learner agency

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

The AI in education theme continues to gather momentum, resulting in a non-stop stream of journal articles, reports, newsletters, blogs and videos. However, while the volume is not diminishing, there seem to be some subtle changes of direction in the messages.

Firstly, despite many schools being wary of Generative AI, there is a growing realisation that students are going to use it anyway, and that the various apps claiming to check student work for AI use simply don't work.

At the same time, there is an increasing focus on AI and pedagogy (perhaps linked to the increasing sophistication of the frontier generative AI models, but also to the realisation that gimmicks like talking to an AI pretending to be someone famous from the past are just lame). This increased focus on pedagogy is also leading to pressure to involve students in the application of generative AI for teaching and learning. And in recent discussions two ethical questions have emerged. The first is unequal access to AI applications and tools. Inside Higher Ed reports on recent research from the Public Policy Institute of California on disparate access to digital devices and the internet for K-12 students in the nation’s largest state public-school system. Put simply, they say, students who are already at an educational and digital disadvantage because of family income and first-generation constraints are becoming even more so every day, as their peers embrace AI at high rates as a productivity tool and they do not.

And while some tools will remain free, it appears that the most powerful and modern tools will increasingly come at a cost. The UK's Jisc recently reported that access to a full suite of the most popular generative AI tools and education plug-ins currently available could cost about £1,000 (about $1,275) per year. For many students already accumulating student debt and managing the rising cost of living, paying more than $100 per month for competitive AI tools is simply not viable.

A second issue is motivation and agency for students in using AI tools. It may be that the rush to gamification, inspired by apps like Duolingo, is wearing thin. Perhaps a more subtle and sustained approach is needed to motivate learners. That may increase the focus on learner agency, which in turn is being seen as linked to Explainable AI (or XAI for short). Research around Learning Analytics has pointed to the importance of students understanding the use and purpose of LA, but also being able to understand why the analytics turn out as they do. And research into Personal Learning Environments has long shown the importance of learner agency in developing meta-cognition in learning. With the development of many applications for personalized learning programmes, it becomes important that learners are able to understand the reasons for their own individual learning pathways and, if necessary, challenge them.

While earlier debates about the ethics of AI in education largely focused on technologies, the new debates are more focused on practices in teaching and learning using AI.

GenAI and Assessment

As a recent publication from the Universitat Oberta de Catalunya points out, Artificial Intelligence remains an opportunity (or an excuse) to transform assessment, curriculum, teaching, personalization and teaching competencies. This is especially so in relation to assessment with widespread concern in the academic world about the near impossibility of detecting whether or not a student has used generative AI in an assignment.

The Universitat Oberta de Catalunya article explores the potential of continuous assessment aimed at self-regulation of learning. It suggests changing the assessment approach, moving from criteria focused on the assessment of the result to criteria focused on the process of development of the activity by the students.

    Furthermore, it advocates designing continuous assessment activities as part of the same learning sequence, with relationships of dependency and complementarity, instead of discrete tests, and focusing the activities on the development of competencies and the assessment of progress and reflection on the learning process of each student.

    Leon Furze is a prolific contributor to LinkedIn and describes his work as "Guiding educators through the practical and ethical implications of GenAI. Consultant & Author | PhD Candidate."

    Writing from the perspective of education in Australia, he says:

    When it comes to GenAI, much of the conversation in education has been focused on academic achievement, perceived threats to academic integrity, and the risk that this technology poses to written assessments. I think that vocational education actually offers some fantastic alternative forms of assessment which are less vulnerable to generative artificial intelligence. If you’re not familiar with vocational education, assessments are often incredibly rigorous, sometimes to the point where the paperwork on evaluation and assessment is significantly longer than the assessment itself.

    Vocational training, by nature, is practical and geared around skills which are needed for the particular job role or discipline being studied. Mainstream education, by contrast, is focused predominately on subjects and content.

    Furze provides examples of different types of assessment in vocational education and training:

    • Observation checklists
    • Role plays
    • Scenarios
    • Workplace activities
    • Reports from employers

    He has published a free 60-page ebook, Rethinking Assessment for GenAI, which he says covers "everything from ways to update assessments, to the reasons I advise against AI detection tools."

    Designing new social AI systems for education

    UNESCO-UNEVOC/Ludi Yana under CC BY-NC-SA 4.0 IGO

    This is very much in line with the conclusion to Mike Sharples' paper, 'Towards social generative AI for education: theory, practices and ethics':

    Designing new social AI systems for education requires more than fine tuning existing language models for educational purposes. It requires building GAI to follow fundamental human rights, respect the expertise of teachers and care for the diversity and development of students. This work should be a partnership of experts in neural and symbolic AI working alongside experts in pedagogy and the science of learning, to design models founded on best principles of collaborative and conversational learning, engaging with teachers and education practitioners to test, critique and deploy them. The result could be a new online space for educational dialogue and exploration that merges human empathy and
    experience with networked machine learning.

    Is AI just another tool, or does it redefine the essence of competence itself?

    This is the second of our interviews with experts on AI in education for the AI Pioneers project. The interview is with Ilkka Tuomi. Ilkka Tuomi is the Founder and Chief Scientist at Meaning Processing Ltd, an independent public research organization located in Helsinki, Finland. He previously worked at the European Commission's Joint Research Centre (JRC), Institute for Prospective Technological Studies, Seville, Spain. In 2020 he produced a background report for the European Parliament on 'The use of Artificial Intelligence (AI) in education' and has recently produced a study, 'On the Futures of Technology in Education: Emerging Trends and Policy Implications', published as a JRC Science for Policy Report. He writes and comments regularly on AI on LinkedIn.

    [Q1] Can you tell us about the motivation behind your recent publication for the EC Joint Research Centre and the future of technologies in learning?

    [A1] My recent publication for the JRC was motivated by my curiosity about the future of learning and the rapidly changing technology landscape. I began by asking which technologies would be essential for policy considerations over the next decade. From this, I compiled a list of technologies that seemed promising for initial discussions. In the process, it became clear that a fundamentally new infrastructure for knowing and learning is emerging. We call this “the Next Internet” in the report. My goal was to both initiate a conversation and delve into the connections between these emerging technologies and new educational models. More broadly, I was interested in how these advancements might transform the education system itself. An essential part of my research also revolved around the evolving dynamics of knowledge production, the importance of innovation in the knowledge society, and the implications this has for education. For instance, the emerging sixth-generation networks offer intriguing sociological and cognitive perspectives, even on the impact of AI on learning.

    [Q2] How do new cognitive tools influence our understanding of learning?

    [A2] These cognitive tools aren't just emerging as solutions to automate current practices. They delve much deeper, challenging our very understanding of what learning means and how it occurs. My perspective on this is shaped by my background in both AI and learning theory. I approach this topic from both a sociological viewpoint and in terms of how digital transformations impact society as a whole.

    [Q3] Could you share some of your background and experiences in the field of AI?

    [A3] When I was younger, I was deeply involved in neural networks research and even co-authored a book on the philosophy of AI back in 1989. Around this time, I joined the Nokia Research Center. Initially, I worked with knowledge-based systems and expert systems, in other words the good-old-fashioned AI. Over time, I transitioned towards human-computer mediated interaction and knowledge management. The latter is, of course, very much about learning and knowledge creation. While the buzz around AI is louder than ever today, I find a dearth of profound discussions on the topic. There's a pressing need for a deeper, more thoughtful debate.

    [Q4] What impact do you foresee AI having on vocational education?

    [A4] AI's impact on vocational education is twofold. Firstly, we're still uncertain about how AI will reshape vocations and the job market. However, it's evident that the essence of vocational training is undergoing change. Technologies, especially generative AI and other machine learning methodologies, will dramatically influence occupational structures and content. This will inevitably change what people learn. Much of what's taught in vocational schools today might become obsolete or require significant modifications. Many educators are concerned that the skills and knowledge they impart today may become irrelevant in just five years. On the other hand, AI will also change how we learn.

    [Q5] How can these technologies be integrated into the educational process?

    [A5] These technologies offer immense potential for educational applications. Already, there are tools that enable a generative AI system to process, for instance, technical handbooks and repair manuals. With this knowledge, the AI can then answer domain-specific queries, providing up-to-date information about tools and technologies on demand. Consider a trainee in the construction industry; they could access building schematics through AI without having to study them exhaustively. Multimodal AI interfaces could allow them to photograph an unfamiliar object and get guidance on its use. Such an application can be used in fields like automotive repair, where a mechanic can photograph a fault and receive advice on necessary parts and repair procedures. These tools not only aid in teaching but can also be directly implemented in professional settings. Such applications particularly resonate with vocational education, transforming the very core of professional knowledge and identity.

    In today's rapidly evolving digital age, vocational education stands at a unique crossroads. At its core, vocational education is profoundly hands-on and concrete, focusing not on abstract knowledge but on tangible skills and real-world applications. It's about doing, making, and creating. And this is where multimodal Generative AI now comes into play.

    Generative AI has the potential to integrate the concrete world with the abstract realm of digital information. Real-world objects and practical training exercises can be complemented by augmented and virtual reality environments powered by AI. We're on the brink of a transformative shift where AI will not just assist but redefine vocational training.

    Furthermore, the economic implications of AI in this sphere are revolutionary. In the past, creating detailed digital representations of complex machinery, like airplanes, was a costly and time-consuming endeavor. Now, with Generative AI, these models can be produced with increased efficiency and reduced costs. Whether it's for pilot training or for a mechanic understanding an engine's intricate details, AI radically simplifies and economizes the process.

    [Q6] Do we need to redefine what we mean by competence?

    [A6] Traditionally, competence has been perceived as an individual's capability to perform tasks and achieve goals. It's often broken down into knowledge, skills, and attitudes. Education has historically focused on what I have called the epistemic competence components. The move towards “21st century skills and competences” is fundamentally about a shift towards behavioral competence components that include aptitudes, motives, and personality traits ranging from creativity to social capabilities.

    However, an essential nuance often overlooked in our understanding of competence is the external environment. For instance, a highly skilled brain surgeon is only as competent as the tools and infrastructure available to them. It's not just about what resides in the individual's mind but also about the societal structures, technological tools, and the overarching environment in which they operate.

    Reflecting on education and technology, the narrative becomes even more intricate. An educator's competence cannot be solely gauged by their ability to use digital tools. The broader context—whether a school has the required digital infrastructure or the societal norms and regulations around technology use—plays a pivotal role. Emphasizing technology for technology's sake can sometimes be counterproductive. The question arises: is AI just another tool, or does it redefine the essence of competence itself?

    [Q7] What are the major challenges of AI?

    [A7] Looking back, one can find parallels in the challenges faced by earlier technological innovations. My experience in the 1990s at Nokia serves as a poignant example. While AI was once viewed as a magic bullet solution, it soon became evident that the challenges in organizations were as much social as they were technological.

    Communication is the heart of learning and innovation. It's not merely about making the right decisions or processing vast amounts of data. Instead, it's about the rich tapestry of human interactions that shape ideas, beliefs, and knowledge. The introduction of new technologies often disrupts existing knowledge structures and requires substantial social adaptation. The process, thus, becomes more about managing change and facilitating communication.

    [Q8] What are the implications of AI for agency?

    [A8] Humans have always externalized specific cognitive tasks to tools and technologies around them. In this light, AI doesn't stand as a looming threat but a natural progression, a tool that could enhance human cognition beyond our current boundaries. But AI is also different. Its increasing human-like interactivity and capabilities challenge our traditional, anthropocentric views on agency. In fact, one key message in our JRC report was that we need to understand better how agency is distributed in learning processes when AI is used.

    Innovations like AI don't just supplement our existing reality—they redefine it. Grasping this intricate dance between societal evolution and our shifting reality is essential to fathom AI's transformative potential.

    [Q9] How will AI shape the future of education?

    [A9] AI's purpose in education should be to enhance human capabilities. This enhancement isn't limited to just individual's cognitive functions; it spans the social and behavioral realms too. In contrast to the post-industrial era, when computers were increasingly used to automate manual and knowledge work, AI and the emerging next Internet are now fusing the material world and its digital representations into an actionable reality. This is something we have not seen before. The material basis of social and cultural production is changing. As a result, the nature of knowing is changing as well. My claim has been that, in such a world, education must reconceptualize its social objectives and functions. The development of human agency might well be the fundamental objective of education in this emerging world. We need to learn, not only how to do things, but also what to do and why. This may, of course, also require rethinking the futures of vocational education and training.