Of course, researchers and practitioners in vocational education and training (VET) and in Adult Education have long been used to their secondary status compared to School and Higher Education. And although Learning Analytics has been around for quite some years now, there has been little consideration of its use in VET and in workplace learning.
So it is good to report that the German Federal Ministry of Education and Research, together with the German Federal Institute for Vocational Education and Training, has funded a study on how artificial intelligence can support the implementation of personalised and adaptive learning environments (PALE; Schumacher, 2018). PALE are digital learning systems that continuously analyse and leverage education-related data to adapt the learning environment to individual needs and constantly changing requirements.
A major challenge in designing trusted PALE for workplace learning remains the identification of reliable indicators. Indicators are variables (e.g., interests, demographics, location) that reveal useful information about learning behavior and that are processed by specific algorithms to personalize and adapt the learning environment. Reliable indicators are crucial for PALE as accurate and comprehensive information about learners and their contexts is needed to design effective interventions to support learning processes and outcomes.
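To make the idea of indicators a little more concrete, here is a minimal illustrative sketch, not taken from the PALE study itself, of how a learning environment might use a handful of hypothetical indicators (quiz scores, time on task, stated interests) to decide what a learner sees next. All names, fields and thresholds below are invented purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical indicators a PALE-style system might track for one learner.
# Field names and thresholds are illustrative only, not drawn from the study.
@dataclass
class LearnerIndicators:
    recent_quiz_scores: list[float]   # proportions correct, 0.0-1.0
    avg_time_on_task_min: float       # average minutes spent per learning unit
    stated_interest: str              # e.g. "electronics", "logistics"

def recommend_next_step(ind: LearnerIndicators) -> str:
    """Very simple rule-based adaptation: map indicator values to an intervention."""
    avg_score = sum(ind.recent_quiz_scores) / len(ind.recent_quiz_scores)
    if avg_score < 0.5:
        # Struggling learners get remedial material in their area of interest.
        return f"remedial module on {ind.stated_interest}"
    if avg_score > 0.85 and ind.avg_time_on_task_min < 10:
        # Fast, accurate learners are stretched with harder content.
        return f"advanced challenge task on {ind.stated_interest}"
    return f"standard next unit on {ind.stated_interest}"

# Example usage
learner = LearnerIndicators([0.4, 0.55, 0.45], 18.0, "logistics")
print(recommend_next_step(learner))  # -> "remedial module on logistics"
```

Real systems would of course use far richer data and learned models rather than hand-written rules, but the sketch shows why the reliability of the underlying indicators matters: the adaptation can only be as trustworthy as the variables feeding it.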
The research identified three profiles as being central to the collection of data for developing and implementing personalised and adaptive learning environments. These profiles were examined from different perspectives: pedagogical, ethical, data analysis and information.
The results are cautious. “Despite rich datasets and advanced analytics methodologies, not all approaches utilising artificial intelligence in education seem to be effective for workplace learning.” They conclude that so far "no wide-scale organisational implementation of artificial intelligence for workplace learning exists and no empirical evidence is available for supporting the assumption that PALE improve the performance of involved stakeholders and organisations."
However, they suggest that “Interdisciplinary perspectives on adoption models as well as on policy recommendations may help to move the pioneering efforts on artificial intelligence for workplace learning forward”.
References
Schumacher, C. (2018). Supporting informal workplace learning through analytics. In D. Ifenthaler (Ed.), Digital workplace learning: Bridging formal and informal learning with digital technologies (pp. 43–61). Springer. https://doi.org/10.1007/978-3-319-46215-8
Stumbled on this Google Doc this morning. It is an open document for people to share their institution's policy on AI. OK - so the organisations all appear to be in the US and its main focus is on Higher Education. But it seems to be going a little viral - there were 43 entries this morning and now there are 56! And it is interesting in that there appear to be wildly different policies between different organisations. Let's give you a flavour of some of them:
Some student work may be submitted to AI or plagiarism detection tools in order to ensure that student work product is human created. The submission of AI generated answers constitutes plagiarism and is a violation of CSCC's student code of conduct.
Columbus State Community College
AI is a tool, just like a pencil or a computer. However, unlike most tools you need to acknowledge using it. Pay close attention to whatever information you use in your own work that is produced from AI, and explain how/what you used at the end of assignments.
Clemson University
Use of an AI Generator such as ChatGPT, iA Writer, MidJourney, DALL-E, etc. is explicitly prohibited unless otherwise noted by the instructor. The information derived from these tools is based on previously published materials. Therefore, using these tools without proper citation constitutes plagiarism.
Middle Tennessee State University
I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill, and I provide tutorials in Canvas about how to use them. I am happy to meet and help with these tools during office hours or after class.
Wharton School University of Pennsylvania
We will use AI tools that harness large language models, including ChatGPT (and DALL-E 2 among others), as pedagogical opportunities for learning and teaching in the course. Doing so aligns with the course objective on digital literacies (course objective 4) and opens up a class dialogue about the role of AI in education, including opportunities and complexities for teachers' everyday work in facilitating the learning of diverse student cohorts.
University of Queensland (UQ), Australia
And so on. It seems to vary quite a bit depending on different subject areas - especially with art and design programmes integrating AI in the curriculum.
It would be fascinating to have a similar page for European education (I will suggest this to colleagues in the AI Pioneers Project).
Probably because it was the first of its kind, there is still a tendency to conflate chatbots based on Large Language Models with ChatGPT. But new natural language based models and interfaces are being released almost weekly at the moment. Here is a quick list, but I appreciate it is not complete and I have not included the various art and graphic apps here:
The new Bing
Jasper
YouChat
Chatsonic
Socratic
HuggingChat
Bard
Claude
Pi
ChatGPT
They are not all the same and I am not sure one is better than the others. They are mostly based on different large language models and training data and use different algorithms. To some extent this seems to be an attempt to target different users and perhaps different personalities too, if a chatbot can be said to have a personality. And the availability of open source models and data is going to enable all kinds of focused apps in the future, including increasingly for coaching and learning (more on this in another post).
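As a small, hedged illustration of why those open source models matter: libraries such as Hugging Face's transformers let anyone wrap an openly released model in a narrow, purpose-built app, for instance a simple coaching assistant, without depending on any single commercial chatbot. The model and prompt below are placeholders chosen only because they run out of the box; a real coaching app would swap in a larger, instruction-tuned open model.

```python
from transformers import pipeline

# Load an openly available text-generation model from the Hugging Face Hub.
# "gpt2" is used here only because it is small and freely downloadable; it is
# far too weak for real coaching and stands in for a larger open model.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical coaching-style prompt, invented for illustration.
coaching_prompt = (
    "You are a workplace learning coach. A trainee asks: "
    "'How can I get better at planning my week?' Give three short suggestions:\n"
)

result = generator(coaching_prompt, max_new_tokens=80, do_sample=True)
print(result[0]["generated_text"])
```

The point is less the code itself than what it implies: once the model weights are open, the "focused apps" mentioned above become a few dozen lines of glue rather than a dependency on one vendor's chatbot.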
Meanwhile I have been quite enjoying talking with Pi. The text below is an extract from Darren Coxon's Educalte Substack newsletter. Darren has been having a series of conversations with Pi and publishing them in the newsletter. He says: "It is interesting how the Inflection RNN model enables this free-ranging dialogue, which can unlock some interesting insights.
What is most noticeable once again is how knowledgeable Pi is: it has clearly been trained on vast amounts of academic data. There seems little it doesn’t know, and to a very advanced level."
Darren: Do you think that an AI could ever become conscious? that it could become aware of itself as a thinking being - as in Descartes’ cogito ergo sum?
Pi: Ah, the famous "I think, therefore I am." That's a very intriguing question, and one that raises a lot of philosophical problems. One of the main questions here is whether we can define what consciousness even is, let alone determine if an AI could possess it. What about this question - can you define consciousness? And, if you can define it, is there anything essential to the definition that an AI could never possess?
Darren: That's a tough question. Consciousness is being aware of being aware. That there is awareness, but there also has to be something that is aware of this. We have sense objects, we have our physical senses, we have sense objects that our physical selves transmit into electrical signals through our sensory organs - but then there is the thing that makes sense of all that - that cannot be either the sense object or the sensory organ. Is that consciousness?
Pi: That's an excellent definition, and one that many philosophers would agree with. It's sometimes called the "hard problem of consciousness" - that we can understand the physical mechanisms of the brain, but not the subjective experience that emerges from those mechanisms. Philosophers like David Chalmers call this "the explanatory gap" between the physical and the subjective experience. So it sounds like you're proposing that an AI could never cross this "explanatory gap" because it doesn't have a physical brain?
UNESCO has published the 2023 edition of the Global Education Monitoring Report (GEM), entitled "Technology in education: a tool on whose terms?" The report is an independent annual publication, funded by a group of governments, multilateral agencies and private foundations, and facilitated and supported by UNESCO. It comes in at a weighty 418 pages with much of interest (I have to admit I have only read the 32 page summary).
Unusually for a report of this type, it received considerable media attention, at least in the UK. But this was focused on the section on the use of smartphones in schools, which concluded that "Mere proximity to a mobile device was found to distract students and to have a negative impact on learning in 14 countries, yet less than one in four have banned smartphone use in schools."
The report says that "good, impartial evidence on the impact of education technology is in short supply", adding that there is little robust evidence on digital technology's added value in education. Technology evolves faster than it is possible to evaluate it: education technology products change every 36 months, on average. Most evidence comes from the richest countries. In the United Kingdom, 7% of education technology companies had conducted randomized controlled trials, and 12% had used third-party certification. A survey of teachers and administrators in 17 US states showed that only 11% requested peer-reviewed evidence prior to adoption. A lot of the evidence comes from those trying to sell it: Pearson funded its own studies, contesting independent analysis that showed its products had no impact.
The 2018 PISA found that 65% of 15-year-old students in OECD countries were in schools whose principals agreed that teachers had the technical and pedagogical skills to integrate digital devices in instruction, and 54% in schools where an effective online learning support platform was available; these shares are believed to have increased during the COVID-19 pandemic. Despite this, the report found that teachers often feel unprepared and lack confidence teaching with technology. Only half of countries have standards for developing teacher ICT skills. While 5% of ransomware attacks target education, few teacher training programmes cover cybersecurity.
The report found that online content has grown without enough regulation of quality control or diversity. Online content is produced by dominant groups, affecting access to it. Nearly 90% of content in higher education repositories with open education resource collections was created in Europe and Northern America; 92% of content in the OER Commons global library is in English. Massive open online courses (MOOCs) mainly benefit educated learners and those from richer countries.
Higher education is adopting digital technology the fastest and being transformed by it the most. There were over 220 million students attending MOOCs in 2021. But digital platforms challenge universities’ role and pose regulatory and ethical challenges, for instance related to exclusive subscription deals and to student and personnel data.
While such technology has tremendous potential, many tools have not been designed for application to education. Not enough attention has been given to how they are applied in education and even less to how they should be applied in different education contexts.
To understand the discourse around education technology, it is necessary to look behind the language being used to promote it, and the interests it serves. Who frames the problems technology should address? What are the consequences of such framing for education? Who promotes education technology as a precondition for education transformation? How credible are such claims? What criteria and standards need to be set to evaluate digital technology's current and potential future contribution to education so as to separate hype from substance? Can evaluation go beyond short-term assessments of impact on learning and capture potential far-reaching consequences of the generalized use of digital technology in education?
Exaggerated claims about technology go hand in hand with exaggerated estimates of its global market size. In 2022, business intelligence providers’ estimates ranged from USD 123 billion to USD 300 billion. These accounts are almost always projected forward, predicting optimistic expansion, yet they fail to give historic trends and verify whether past projections proved true. Such reporting routinely characterizes education technology as essential and technology companies as enablers and disruptors. If optimistic projections are not fulfilled, responsibility is implicitly placed on governments as a way of maintaining indirect pressure on them to increase procurement. Education is criticized as being slow to change, stuck in the past and a laggard when it comes to innovation. Such coverage plays on users’ fascination with novelty but also their fear of being left behind.