I seem to be subscribing to (and reading) more and more newsletters. For AI, by far the best I have found is The Algorithmic Bridge by Alberto Romero. In the latest edition, where he discusses ChatGPT's apparent avoidance of any degree of implausibility, especially in people's CVs, he recounts his own implausible background: an aerospace engineer who went on to work for an AI startup, then studied cognitive neuroscience, only to end up writing on the internet.
Earlier in the newsletter he quotes the work of Shannon Vallor, a philosopher at the University of Edinburgh whose research is focused on “the philosophy and ethics of emerging science and technologies,” particularly AI.
"I vividly recall reading Vallor’s insights," he says. "They influenced my later perspectives on AI and language models. Here’s, in my opinion, the most illuminating excerpt from her essay ‘GPT-3 and the Missing Labor of Understanding’":
“Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory; a tool that endlessly simulates meaning anew from a pool of data untethered to its previous efforts. In contrast, understanding is a lifelong social labor. It’s a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world.”
I love this framing, and the way it emphasizes the social and cultural dimensions of human understanding. It departs from the typical “AI models can’t understand because they don’t have a world model” or “because they can’t access the meaning behind the form of the words.” Those are true, too, but this one—understanding as a labor we perform actively and daily—was refreshing.