
Many Minds

By: Kensy Cooperrider – Diverse Intelligences Summer Institute

Summary

Our world is brimming with beings—human, animal, and artificial. We explore how they think, sense, feel, and learn. Conversations and more, every two weeks.
Diverse Intelligences Summer Institute 2020-2025
Science
Episodes
  • What can AI teach us about the mind?
    2026/03/26
    Everyone is talking about AI these days. Often these conversations are about how AI might upend education, or work, or social life, or maybe civilization itself. But among cognitive scientists and psychologists the conversation inevitably drifts toward other questions. What does this latest generation of AI tell us about the human mind? Is it putting old ideas and theories to rest? Is it ushering in new ones? Will AI—in other words—also upend cognitive science? My guests today are Dr. Mike Frank and Dr. Gary Lupyan. Mike is a Professor of Psychology at Stanford University, where his lab focuses on language learning and cognition in children. Gary is a Professor of Psychology at the University of Wisconsin Madison, where his lab studies language and its role in augmenting human cognition. Both Gary and Mike have more recently been thinking a lot about AI and how it is challenging and deepening our understanding of the human mind. In this conversation, we talk about being interested in AI as cognitive scientists—while also being concerned about the technology as people. We discuss the linguistic abilities of frontier LLMs compared to the linguistic abilities of adult humans. We talk about a glaring "data gap" here—the fact that, even though LLMs often rival human abilities, they require orders of magnitude more data to do so. We contrast the capabilities of large language models with so-called BabyLMs. We consider the fact that, as LLMs master language, they also master other abilities—capacities for mathematical reasoning, causal understanding, possibly theory of mind, and more. And we talk about why language might be an especially potent form of input for an AI. 
    Along the way, we touch on reference and the symbol grounding problem, the Platonic Representation Hypothesis, stimulus computability, confabulated citations, pattern matching and jabberwocky, the poverty of the stimulus argument, congenital blindness, Quine's topiary, the limits of in-principle demonstrations, the WEIRD problem, and what the astonishing sophistication of disembodied AIs might suggest about the role of bodily experience in human cognition. Before we get to it, one small request: we're currently running a short survey of our listeners. You can find the link in our show notes. If you have a few minutes, we'd really love your input! Alright friends, here's my conversation with Mike Frank and Gary Lupyan. I think you'll enjoy it!
    Notes
    5:00 – For more discussion of "stochastic parrots" and other ways of framing AI systems, see our recent episode with Melanie Mitchell. For the "octopus test," see here.
    8:00 – "BabyLMs" are—in contrast to large LMs (aka LLMs)—models that are trained on a more human-scale amount of linguistic input. For more on the BabyLM community, see here.
    12:00 – For broad discussion of the use of AIs as "cognitive models," see this paper by Dr. Frank and a colleague. The same paper discusses the idea of "stimulus computability."
    18:00 – For Dr. Frank's "baby steps" paper, see here.
    20:00 – For more on how Claude understands line breaks, see Anthropic's analysis of the issue here.
    23:00 – For work on human-like grammaticality judgments in LLMs, see this paper by a team including Dr. Lupyan.
    24:00 – See here for an influential paper on, among other things, how LLMs refute the idea that syntax is unlearnable. The article titled 'How linguistics learned to stop worrying and love the language models' is here; Dr. Lupyan's commentary—'Large language models have learned to use language'—here.
    29:00 – For some of Dr. Lupyan's work on the "abstractness" of even concrete concepts, see here.
    35:00 – For a classic paper on the so-called symbol grounding problem, see here.
    37:00 – For the preprint putting forth the "Platonic Representation Hypothesis," see here.
    40:30 – For more on the data gap between children and LLMs—and what accounts for it—see Dr. Frank's paper here.
    45:00 – For a sampling of Dr. Frank and colleagues' work comparing language models to children, see here, here, and here. For more on the LEVANTE project, a collaborative effort spearheaded by Dr. Frank, see here.
    48:00 – For the preprint—'The Unreasonable Effectiveness of Pattern Matching,' by Dr. Lupyan and a colleague—see here.
    55:00 – For more on Dr. Lupyan's perspective on the centrality of language in human cognition, see here. See also this more recent paper, considering the question in light of LLMs.
    58:00 – For our earlier episode with Dr. Marina Bedny, see here. For the recent paper by Dr. Bedny and colleagues considering their research on congenital blindness in light of LLMs, see here.
    1:01:00 – For classic work on language learning in blind children, see here.
    1:02:00 – For a paper by Dr. Lupyan and colleagues on "hidden" individual differences, see here.
    1:03:00 – For more on "multiple realizability," see here. For our earlier episode with ...
    1 hr 21 min
  • Mutualisms all the way down
    2026/03/11
    No one is an island. We all depend on each other in critical, often tangled ways. And when I say "we" and "each other" I don't just mean humans. Yes, we humans rely on other humans. But we also rely on bees, yeasts, dogs, bacteria, and countless other creatures big and small. These interspecies dependencies—or mutualisms, as biologists call them—have deflected and inflected our history. And there's no doubt they will also inflect our future. My guest today is Dr. Rob Dunn. Rob is Professor of Applied Ecology at North Carolina State University, where he studies the creatures and ecologies all around us—in our homes, in our foods, in our belly buttons. He's the author of eight books, including, most recently, The Call of the Honeyguide: What Science Tells Us about How to Live Well with the Rest of Life. This book is the focus of our conversation today. Rob and I talk about the idea of mutualism—in which two or more species benefit each other—and how human life is sustained by mutualisms all the way down. We consider how the benefits of mutualism are measured—whether in terms of biological fitness, or longevity, or pleasure. We talk about the best-documented cases of humans collaborating with other species to find honey or hunt fish. We consider how our liaisons with yeasts have shaped human history—and how we might even say that yeasts domesticated us. We linger on our relationships with dogs and cats and the benefits we get from them, some obvious and some less so. Finally, we talk about what it would mean to more fully embrace our mutualisms, what it would mean to create what Rob calls "a less lonely future." Along the way, Rob and I talk about cheese, worms, and maggots; bread, beer, and honey; face mites and armpits; parasites, inquilines, and commensals; what sauerkraut does to our immune systems; honeyguides and dolphins, leopards and house cats; morbid curiosity; and how dogs might give us a kind of access to our subconscious. This is a fun one, folks.
    But, before we get to it, a couple of announcements. First: Applications are now open for the 2026 Diverse Intelligences Summer Institute. This is a three-week intensive, transdisciplinary exploration of the different forms of mind and intelligence that animate our world. If you like the themes we talk about on this show, you would almost certainly get a kick out of DISI. More info at www.disi.org. That's d-i-s-i dot org. Review of applications begins pretty soon, so don't dither! Second: We have just put out our first-ever Many Minds audience survey! Whether you're a longtime superfan or just an occasional listener, we would love to hear from you. Your input will help guide the show as we consider our next chapter. Alright, friends—without further ado, on to my conversation with Rob Dunn. Enjoy!
    Notes
    4:00 – For the fuller story of Menocchio, see The Cheese and the Worms, by Carlo Ginzburg.
    7:00 – Dr. Dunn's lab has been involved in public-facing projects about fermented foods—see here for a series of webinars.
    10:00 – The Sardinian cheese we discuss is called casu martzu.
    14:00 – A study by Dr. Dunn and colleagues about human face mites. This is not the only aspect of bodily geography he and colleagues have examined: see also this study of the organisms in our belly buttons.
    18:30 – For a primer on honeyguide birds, see here.
    21:30 – For more on the calls humans use to communicate with honeyguides, see here.
    24:30 – For more on human-dolphin collaborative hunting, see this recent study.
    27:30 – For more about the theologian Aminah Al-Attas Bradford, a researcher in Dr. Dunn's lab, see here.
    33:00 – We also discussed fermentation at length in an earlier episode here.
    35:00 – A study by Dr. Dunn and colleagues on the microbial composition of sourdough starters.
    37:00 – For more on our—and other animals'—relationships with alcohol, see our earlier episode.
    40:00 – A study by Dr. Dunn and colleagues on the evolution of sour taste in humans.
    42:00 – For more on the domestication of chickens, see here.
    49:00 – For more on the concept of "morbid curiosity," see here.
    55:00 – For more on our armpits—and the bacterial communities we harbor therein—see this study by Dr. Dunn and colleagues.
    1:04:00 – The study by Dr. Dunn and colleagues about the spiders in people's homes. The spider poem by Kobayashi Issa.
    Recommendations
    An Immense World and I Contain Multitudes, by Ed Yong (former guest!)
    Stories of Anton Chekhov
    Poems by Kobayashi Issa
    Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Subscribe to Many Minds ...
    1 hr 9 min
  • Seven metaphors for AI
    2026/02/26
    If you wanted a petri dish for understanding metaphors—how they emerge and evolve and jostle with each other—it would be hard to do better than the world of AI. We talk about AI systems variously as coaches or co-pilots, little genies or alien intelligences. Some researchers claim that AIs "grow," that they're entering their phase of "adolescence." Critics deride AI products as slop and dismiss LLMs as a kind of autocomplete on steroids. What's behind these different characterizations? Which ones are accurate and which are unfair? And are our metaphors mostly colorful rhetoric or do they matter? Are they shaping how we understand, adopt, and ultimately regulate these new technologies? My guest today is Dr. Melanie Mitchell. Melanie is a computer scientist and Professor at the Santa Fe Institute. She is the author of the book, AI: A Guide for Thinking Humans, and she writes a Substack by the same name. This episode is a bit of a companion to our recent episode with Steve Flusberg. In that episode, Steve and I attempted a kind of crash course on metaphor and the human mind. Here, Melanie and I sit down for more of an extended case study: how metaphors are guiding, galvanizing, and maybe deceiving us in the contested realm of AI discourse. We unpack seven of the most widely used metaphors in this space. We consider how these metaphors are shaping not only our everyday understandings of AI, but also law and policy. We also talk about the metaphor and analogy capabilities of AI itself. Can these systems reason abstractly in the way that humans can? Along the way, Melanie and I touch on: AI-generated poetry, anthropomorphism, the original sin of AI research, the myth of Narcissus, psychometric testing and its pitfalls, metaphors for AI that are a bit hard to spot, and the question of whether an AI has ever come up with a decent analogy for itself. Longtime fans of the show will know that we've had Melanie on the show once before.
    We invited her back, not only because she's thought about metaphor and analogy in AI discourse for decades, but because she's a voice of calm insight in an area that's increasingly awash in hype and polemic. Longtime fans of the show may also note that we are now celebrating our 6th birthday at Many Minds. That's right, the show launched in February 2020. If you'd like to support us as we recognize this milestone, you can leave us a rating or a review, recommend us to a friend, or give us a shout-out on social media. Your support is always appreciated. Without further ado, on to my conversation with Dr. Melanie Mitchell. Enjoy!
    Notes
    3:30 – For an overview of Douglas Hofstadter's work on analogy, see here.
    8:00 – Much of our discussion in this interview draws on Dr. Mitchell's piece on the metaphors for AI in Science magazine.
    13:30 – For earlier discussions of anthropomorphism on the show, see our earlier episodes here and here.
    16:00 – See here for the original discussion of LLMs as "stochastic parrots."
    17:00 – See here for the original discussion of ChatGPT as a "blurry jpeg."
    18:30 – See here for the original discussion of LLMs as role players.
    22:00 – See here for one use of the "LLMs as crowds" metaphor. See also a discussion of this metaphor (and other metaphors for AI) here.
    25:00 – For one discussion of AI as a "cultural technology" by Alison Gopnik and colleagues, see here. For a more recent discussion of the same metaphor by Henry Farrell, Alison Gopnik, and others, see here.
    27:00 – For the podcast series on intelligence that Dr. Mitchell co-hosted for the Santa Fe Institute, see here.
    28:00 – See here for an influential formulation of the idea that AI is an "alien intelligence."
    29:00 – For philosopher Shannon Vallor's book about AI as "mirror," see here.
    31:00 – For the recent study on users' metaphors for AI systems, see here.
    33:00 – For more on the rise of social AI, see our earlier episode here.
    38:00 – For more on what AI researchers might learn from developmental and comparative psychologists, see Dr. Mitchell's recent post (summarizing her keynote at NeurIPS).
    42:00 – For more on the ARC (Abstraction and Reasoning Corpus) and the research that Dr. Mitchell and colleagues have been doing with it, see here and here.
    48:30 – For the study on humans' preference for AI-generated poetry, see here.
    50:30 – For Brigitte Nerlich's documentation and discussion of various metaphors for AI (including AI's metaphors for itself), see here.
    Recommendations
    The AI Mirror, by Shannon Vallor
    'Role play with large language models,' by Murray Shanahan (former guest!) et al.
    'Large AI models are cultural and social technologies,' by Henry Farrell et al.
    Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with ...
    56 min
No reviews yet