The Digital Native Myth: A Story of Evolution

Remember when people started talking about "digital natives" back in 2001? It was a catchy term for kids growing up surrounded by tech and the internet. The terms "digital native" and "digital immigrant" were popularized by education consultant Marc Prensky in his 2001 article Digital Natives, Digital Immigrants, in which he related the contemporary decline in American education to educators' failure to understand the needs of modern students. The article posited that "the arrival and rapid dissemination of digital technology in the last decade of the 20th century" had changed the way students think and process information, making it difficult for them to excel academically using the outdated teaching methods of the day. Prensky's article was not scientific, and there was no research or evidence to back up his idea. Despite this, the idea caught on fast, influencing how we approached education and technology.

Researchers dug deeper and found no real evidence that an entire generation was thinking differently. You'd think that would be the end of it, right? Yet the digital natives narrative persists in popular media and educational discourse. A new study set out to investigate why the myth endures. It analyzed the metadata of 1,886 articles related to the term, published between 2001 and 2022, using bibliometric methods and structural topic modeling. The results show that the concept of “digital native” is still both warmly embraced and fiercely criticized by scholars, mostly from Western, high-income countries, and the volume of research on the topic is growing. Interestingly, however, the results suggest that what appears to be persistence is actually evolution and complete reinvention: the way the “digital native” concept is operationalized has shifted over time through a series of (metaphorical) mutations. The concept is one (albeit highly successful) mutation of the generational-gap discourse dating back to the early 1900s. While the initial digital native literature relied on Prensky's unvalidated claims and waned when faced with empirical challenges, subsequent versions have sought more nuanced interpretations.

Across the 1,886 articles, published between 2001 and 2022, the study found some interesting patterns. The authors say that what we mean by "digital native" has shifted over time, and that the idea is just one chapter in a long history of talking about generational gaps. It's not going to be long before the idea mutates again for those growing up in the age of AI!

Want to find out more? Listen to the podcast above or, if you prefer your learning in written form, download the paper below.

Mertala, P., López-Pernas, S., Vartiainen, H., Saqr, M., & Tedre, M. (2024). Digital natives in the scientific literature: A topic modelling approach. Computers in Human Behavior, 152, 108076.

AI Procurement: key questions

Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

In the AI Pioneers project we are frequently asked by teachers and trainers in Vocational Education and Training and Adult Education what they should be looking for if they intend to license or buy AI-based applications. The UK's Jisc has developed and published an AI maturity model. "As institutions move to the 'embedded' stage," they say, "we expect appropriate processes to be in place for the entire lifecycle of AI products, including procurement."

They continue to explain that: "This detailed scrutiny aims to facilitate a better understanding and mitigation of potential risks associated with AI deployment. Additionally, it is crucial to ensure that the use of AI in educational and research settings does not infringe on IP rights and that the data used in AI models is appropriately managed to maintain research integrity and protect proprietary information."

The model includes comprehensive due diligence processes for areas such as supplier information, financial stability, insurance coverage, modern slavery compliance, information security, and data protection. By thoroughly vetting these aspects, Jisc says, it aims to ensure that any solutions are not only innovative and effective but also ethical and compliant with all relevant regulations and standards. The questions are intended to be dynamic and will be reviewed to reflect advances in technology or legislation.

1. Outline which AI features of your system use third-party AI models, and which use your own proprietary or in-house AI models. Please provide details of any third-party technologies used, including the name of the provider and an outline of the features used. Note that for major suppliers in the LLM supply chain, such as OpenAI, Google DeepMind, Anthropic, etc., due diligence should be conducted separately; there is no need to request information about them from all third-party providers built on these large language models.
2. Where you are either creating your own model or fine-tuning a third-party model, how is performance defined and measured? Include details of initial training and monitoring over time. (UK AI Principle: Safety, security and robustness)
3. What data do your AI models require for initial training or fine-tuning? If you are using third-party models, you should only describe data that is unique to your application. (UK AI Principle: Safety, security and robustness)
4a/4b. Is data from user interactions with the system used to enhance model performance, and if so, what are the mechanisms involved? Please also clarify whether institutional data is integrated into external models. (UK AI Principle: Safety, security and robustness)
5. What features does your solution have to make it clear when the user is interacting with an AI tool or AI features? (UK AI Principle: Safety, security and robustness)
6. Please provide comprehensive information about the safety features and protections integrated into your solution to ensure safe and accessible use by all users, including those with accessibility needs and special education requirements. (UK AI Principle: Safety, security and robustness)
7. Can you specify any special considerations or features tailored for users under the legal age of majority? (UK AI Principle: Safety, security and robustness)
8. What explainability features does your AI system provide for its decisions or recommendations? (UK AI Principle: Safety, security and robustness)
9. What steps are taken to minimize bias within models you either create or fine-tune? (UK AI Principle: Fairness)
10. Does your company have a public statement on Trustworthy AI or Responsible AI? Please link to it here. (UK AI Principle: Accountability and governance)
11/11a/11b/11c. Does your solution promote research, organizational or educational use by: a) not restricting the use of parts of your solution within AI tools and services; b) not preventing institutions from making licensed solutions fully accessible to all authorized users in any legal manner; c) not introducing new liability on institutions, or requiring an institution to indemnify you, especially in relation to the actions of authorized users? (Gartner, Inc., ICOLC statement and legal advice obtained by Jisc)
12. Does your solution adequately protect against institutional intellectual property (IP) infringement, including scenarios where third parties are given access to and may harvest institutional IP? (Gartner, Inc. and ICOLC statement)

Teacher’s Digital Literacy

Nacho Kamenov & Humans in the Loop / Better Images of AI / A trainer instructing a data annotator on how to label images / CC-BY 4.0

This definition of AI literacy for teachers was posted on LinkedIn by Fengchun Miao, Chief of the Unit for Technology and AI in Education at UNESCO.

  1. Cultivate a critical view that AI is human-led and that the corporate and individual decisions of AI creators have a profound impact on human autonomy and rights, becoming aware of the importance of human agency when evaluating and using AI tools.
  2. Develop a basic understanding of typical ethical issues related to AI and acquire basic knowledge of ethical principles for human/AI interactions, including protection of human rights and human agency, promotion of linguistic and cultural diversity, and advocacy for inclusion and environmental sustainability.
  3. Acquire basic conceptual knowledge of AI, including the definition of AI, basic knowledge of how an AI model is trained and the associated knowledge of data and algorithms, the main categories and examples of AI technologies, as well as basic skills in examining the appropriateness of specific AI tools for education and operational skills in validated AI tools.
  4. Identify and leverage the pedagogical benefits of AI tools to support subject-specific lesson planning, teaching and assessment.
  5. Explore the use of AI tools to enhance their professional learning and reflective practices, supporting assessment of learning needs and personal learning pathways in the rapidly evolving educational landscape.

AI Pioneers Action Lab at the EDEN Digital Learning Europe Conference in Graz

Last week at the EDEN Digital Learning Europe Conference in Graz, the AI Pioneers project organised an Action Lab. I wasn't quite sure what the idea of an Action Lab was, but we agreed that it seemed to imply some involvement and participation from those attending. Nor did we know exactly how many would attend, given that, as at other large conferences, there were a considerable number of parallel sessions.

So - in the end we decided we would deliver a very short presentation as an introduction to the AI Pioneers project and then split into groups to discuss three key questions:

  • The skills and competences required for using Generative AI for teaching and learning
  • The impact of Generative AI on education
  • Guidelines for future projects on using AI for learning in education

In the end nearly 30 people turned up - to a room which was better suited to 15! And we only had one hour for the Action Lab, so it was quick-fire stuff. Originally we had thought about using a Miro board for collecting inputs, but there was not enough time for that. Instead we went back to basics and collected the replies on post-it notes displayed on posters on the wall, which we captured with mobile phone pictures.

The session was lively and a lot of fun (at least I thought so, and others I talked to afterwards agreed). Of course, a usual problem with a set-up like this is how to record and analyse the discussion afterwards. No problem - since this is about AI, I gave the photos to Claude 3.5 Sonnet with the following prompt:

Those stickers are responses in the question about the impact of AI in education from a workshop about AI in education. Can you extract the text from the stickers. Can you analyse and conclude ?
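For anyone who wants to automate this kind of photo-to-text step, here is a rough sketch of how a request to Claude's vision capability can be assembled with the Anthropic Messages API. The helper function and the specific model name are illustrative assumptions, not part of the original workflow, which simply used the chat interface:

```python
import base64

# Illustrative model name; check Anthropic's docs for current model IDs.
MODEL = "claude-3-5-sonnet-20240620"

PROMPT = ("Those stickers are responses to a question about the impact of AI "
          "in education from a workshop. Can you extract the text from the "
          "stickers, then analyse and conclude?")


def build_sticker_request(image_bytes: bytes, question: str) -> dict:
    """Build a Messages API payload pairing one photo with a text prompt.

    The content list holds an image block (base64-encoded JPEG) followed
    by the question, which is the format the API expects for vision input.
    """
    return {
        "model": MODEL,
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": base64.b64encode(image_bytes).decode()}},
                {"type": "text", "text": question},
            ],
        }],
    }

# Actually sending the request needs the `anthropic` package and an API key:
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_sticker_request(photo, PROMPT))
```

One payload per poster photo keeps each group's answers separate, so the model's extraction can be checked against the physical poster afterwards.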

Here's what it came back with.

Skills and competences required for using Generative AI for teaching and learning:

  1. What do you want to achieve? WEIRD prompts
  2. GenAI can help build skills
  3. Critical analysis of outputs + ethics
  4. Domain understanding
  5. Basic computer skills
  6. TPACK (Technological Pedagogical Content Knowledge)
  7. Critical thinking and ethical skills
  8. Design documents and use appropriate AI skills, take notes to AI tools
  9. Critical reflection of output from AI - evaluative judgment
  10. Agility, willingness to explore and change
  11. Prompt design
  12. Level up pedagogical skills
  13. Play with AI, Explore, Experiment, Engage
  14. Learn how to prompt correctly

The impact of Generative AI on education:

  1. Consider how learning is assessed and how we define success
  2. Collaborative learning initiatives
  3. Leveling up of some skills or knowledge / language
  4. Reduction in the value/need for qualifications for jobs
  5. Possible increase of behaviorism-based learning
  6. Effective tool SaaS may no longer exist
  7. Opportunities to extend through AI
  8. Personalization of learning
  9. Amplifier for re-thinking Education - the way we teach
  10. Dropping old skills, developing new skills
  11. Disruption in trust
  12. Impact on our cognitive development
  13. Role in historical narratives + trust + misinformation
  14. Impact on staff workload/training on job
  15. Risks to assessment - cheating
  16. Danger of partial understanding AI as a crutch
  17. NO IMPACT!

Guidelines for future projects on using AI for learning in education:

  1. Research informed practice
  2. Co-design with students/teachers + partners
  3. Authentic Assessment
  4. Ethical Proofing
  5. AI taxonomies
  6. digital poverty
  7. Focus on innovation equity
  8. Preserve the values of education and its expertise
  9. Custom small LLMs for customized content exploration
  10. Include nudges to help people be better/happier/healthier
  11. AI as Co-Teacher + Blended Learning + Flippers
  12. AI Tutor for AI National Network of Hub AI Play fps, Research Financial, Pedagogy for all
  13. Accountability Buddy
  14. Project & support building custom kits / APIs for use cases

Analysis and Conclusion:

The sticky notes reveal a comprehensive exploration of AI's role in education, covering guidelines for implementation, necessary skills, and potential impacts. Here's an analysis of the key themes:

  1. Ethical Considerations: There's a strong emphasis on ethical implementation, including "Ethical Proofing" and preserving educational values. This suggests a cautious approach to AI adoption.
  2. Skill Development: Both educators and students need to develop new skills, such as prompt design, critical analysis of AI outputs, and understanding AI's capabilities and limitations.
  3. Personalization and Collaboration: AI is seen as a tool for personalizing learning experiences and fostering collaborative initiatives.
  4. Pedagogical Shifts: The notes indicate that AI may lead to changes in teaching methods, assessment practices, and the overall educational paradigm.
  5. Equity and Accessibility: There's concern about digital poverty and ensuring equitable access to AI-enhanced education.
  6. Potential Risks: Some notes highlight risks such as over-reliance on AI, impacts on trust, and potential for cheating.
  7. Workforce Implications: AI's impact extends beyond the classroom, potentially affecting job qualifications and workforce skills.

Conclusion: The overall sentiment towards AI in education is cautiously optimistic. While there's excitement about AI's potential to enhance learning experiences and personalize education, there's also a clear recognition of the challenges and risks involved. The focus is on responsible implementation, emphasizing the need for ethical guidelines, skill development, and maintaining human-centric values in education. The diverse perspectives represented in these notes suggest that the integration of AI in education will require careful planning, continuous assessment, and adaptability to ensure it benefits all stakeholders while mitigating potential drawbacks.

Are GenAI codes of ethics dangerous?

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Safety Precautions / CC-BY 4.0

Last week I was at the EDEN Digital Learning Europe Conference in Graz. Sadly, transport difficulties limited my time and I missed the second day of the conference. But I am copying this report, posted on LinkedIn by Felix Kwihangana from the University of Manchester, of the EDEN Oxford Debate, where he supported the motion “Codes of Ethics for Generative Artificial Intelligence in Education are Useless/Toothless/A Waste of Time”, chaired by Paul Prinsloo.

The debate provided an opportunity to explore and interrogate the complex issues around ethical guidelines for Generative AI in education, their merits and demerits and everything in between. I was privileged to work with Elisabeth Anna Guenther and Dr Martina Plantak in supporting the motion against an impressive team of debaters (Eamon Costello, Victoria Marín and Palitha Edirisingha).

In supporting the motion, we argued that GenAI ethical guidelines in HE are often reactive, exclusive of non-Western ways of knowing, based on a limited understanding of Generative AI, rendered obsolete before they are enacted by the speed at which Generative AI is developing, and used as virtue-signalling tools by institutions motivated by maintaining control rather than by encouraging exploration and discovery. Using some historical cases (the prosecution of Alan Turing, Prohibition), we argued that the ever-changing values of society and the fast pace of Generative AI development could make Generative AI codes of ethics not only useless but also dangerous, when viewed through the historical lens of damage done in the name of "ethics", "values" and "norms" that societies end up dropping anyway. Needless to say, the opposing team had equally strong counterarguments, which made the debate worth its name!