Is Emergence a Mirage?

Nobel Prize-winning physicist P.W. Anderson’s “More Is Different” argues that as the complexity of a system increases, new properties may materialize that cannot (easily or at all) be predicted, even from a precise quantitative understanding of the system’s microscopic details. As Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo from Stanford University explain in a recently published paper: "Emergence has recently gained significant attention in machine learning due to observations that large language models, e.g., GPT, PaLM, LaMDA can exhibit so-called 'emergent abilities' across diverse tasks." It has been argued that large language models display emergent abilities not present in smaller-scale models, justifying the huge financial and environmental cost of developing these models.

Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo "present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale."
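The authors' argument can be illustrated with a small numerical sketch (this is not their code; the parameter counts and accuracies below are hypothetical, and the answer length is an assumed task property). If per-token accuracy improves smoothly with model scale, a nonlinear, all-or-nothing metric such as exact match on a multi-token answer can still produce a curve that looks like a sudden leap in ability:

```python
# Illustrative sketch (not the authors' code): how metric choice can
# manufacture apparent "emergence" from smoothly improving models.

scales = [1e7, 1e8, 1e9, 1e10, 1e11]            # hypothetical parameter counts
per_token_acc = [0.50, 0.65, 0.80, 0.90, 0.97]  # smooth, gradual improvement

ANSWER_LEN = 10  # assume the task requires 10 correct tokens in a row

for n, p in zip(scales, per_token_acc):
    exact_match = p ** ANSWER_LEN  # nonlinear, all-or-nothing metric
    print(f"params={n:.0e}  per-token={p:.2f}  exact-match={exact_match:.4f}")
```

Under the linear per-token metric, ability grows steadily with scale; under exact match, scores sit near zero until the largest models, then shoot upwards. The same fixed model outputs yield either a smooth curve or an apparent "emergent ability" depending solely on the metric chosen.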

Their paper, "Are Emergent Abilities of Large Language Models a Mirage?", is quite technical but very well written and important for understanding the debate around AI.

#AIinEd – Pontydysgu EU 2023-05-08 12:45:32

As the trailer says: "In this video, you will witness a fascinating discussion between Socrates, the Greek philosopher considered one of the greatest thinkers in history, and Bill Gates, the American entrepreneur and founder of Microsoft, one of the most important companies in the world of technology. Despite belonging to different eras, Socrates and Gates have a lot in common. Both are considered pioneers in their respective fields and have had a significant impact on society." It is interesting that the 'conversation' centres on the benefits (or not) of AI for education and learning.

Good critical and sceptical work on AI in education

I've commented before on the depth of division in commentary and research on the use of AI in education since the release of ChatGPT and subsequent applications based on Large Language Models. As the MIT Technology Review has reported, "Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network" and "by January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia." But the article continued: "many teachers now believe, ChatGPT could actually help make education better. Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more."

But rather than take sides in a polarised debate, Ben Williamson, who researches and writes about education, digital tech, data and policy at the University of Edinburgh, believes we need to develop "good critical and sceptical work on AI in education." In a series of toots (the Mastodon nomenclature for Tweets) on the Mastodon social network, he put forward the following ideas for research into AI in education.

  1. Is AI in education really doing what it claims? Do LLM-enabled chatbots improve learning? Do personalized learning algorithms actually personalize, or just cluster by historical patterns? Is it even "AI" or just some shitty stats?
  2. What's the political economy of AI in education? Even if LLM chatbots in EdTech are great, how does that link with wider digital economy developments? What policy enablers are in place to facilitate AI in education? What policy-influencing networks are forming around AIED? Why does it get so much funding, in which geographical regions, and from which sources?
  3. What's the science behind AI in education? AI and education have a 60-year history, taking in cybernetics, cognitivism and computing, then learning science, learning analytics, and education data science, with doses of behaviourism and nudge theory along the way, and now machine learning and neural networks - this is a hefty accumulation demanding much better understanding.
  4. What kind of infrastructuring of education does AI in education require? If you put LLMs into EdTech via APIs then you are building on an infrastructure stack to run your platform. That puts schools on the stack too. What are the implications, long-term, of these Big Tech lock-ins? Will schools be governed not just by EdTech but by Big Tech AI vendors and their APIs?
  5. What are the rights, justice, ethics and regulatory implications of AI in education? Can EdTech be designed for justice? Could algorithms be repurposed for reparative projects rather than discriminatory outcomes? Have AIED ethics frameworks been compromised? Is there scope for more democratic participation in building AI for education products? Can we be hopeful of better things from this technically remarkable but socially troubling tech?

"Just some thoughts to work on…", he concluded. These seem a pretty good starting point, not just for Higher Education, but for those of us working on AI and Vocational Education and Training and in Adult Education, as we are doing in the European AI Pioneers Project.