
What can AI teach us about the mind?



Overview

Everyone is talking about AI these days. Often these conversations are about how AI might upend education, or work, or social life, or maybe civilization itself. But among cognitive scientists and psychologists the conversation inevitably drifts toward other questions. What does this latest generation of AI tell us about the human mind? Is it putting old ideas and theories to rest? Is it ushering in new ones? Will AI—in other words—also upend cognitive science? My guests today are Dr. Mike Frank and Dr. Gary Lupyan. Mike is a Professor of Psychology at Stanford University, where his lab focuses on language learning and cognition in children. Gary is a Professor of Psychology at the University of Wisconsin-Madison, where his lab studies language and its role in augmenting human cognition. Both Gary and Mike have more recently been thinking a lot about AI and how it is challenging and deepening our understanding of the human mind. In this conversation, we talk about being interested in AI as cognitive scientists—while also being concerned about the technology as people. We discuss the linguistic abilities of frontier LLMs compared to the linguistic abilities of adult humans. We talk about a glaring "data gap" here—the fact that, even though LLMs often rival human abilities, they require orders of magnitude more data to do so. We contrast the capabilities of large language models with so-called BabyLMs. We consider the fact that, as LLMs master language, they also master other abilities—capacities for mathematical reasoning, causal understanding, possibly theory of mind, and more. And we talk about why language might be an especially potent form of input for an AI.
Along the way, we touch on reference and the symbol grounding problem, the Platonic Representation Hypothesis, stimulus computability, confabulated citations, pattern matching and jabberwocky, the poverty of the stimulus argument, congenital blindness, Quine's topiary, the limits of in-principle demonstrations, the WEIRD problem, and what the astonishing sophistication of disembodied AIs might suggest about the role of bodily experience in human cognition.

Before we get to it, one small request: we're currently running a short survey of our listeners. You can find the link in our show notes. If you have a few minutes, we'd really love your input! Alright friends, here's my conversation with Mike Frank and Gary Lupyan. I think you'll enjoy it!

Notes

5:00 – For more discussion of "stochastic parrots" and other ways of framing AI systems, see our recent episode with Melanie Mitchell. For the "octopus test," see here.

8:00 – "BabyLMs" are—in contrast to large LMs (aka LLMs)—models that are trained on a more human-scale amount of linguistic input. For more on the BabyLM community, see here.

12:00 – For broad discussion of the use of AIs as "cognitive models," see this paper by Dr. Frank and a colleague. The same paper discusses the idea of "stimulus computability."

18:00 – For Dr. Frank's "baby steps" paper, see here.

20:00 – For more on how Claude understands line breaks, see Anthropic's analysis of the issue here.

23:00 – For work on human-like grammaticality judgments in LLMs, see this paper by a team including Dr. Lupyan.

24:00 – See here for an influential paper on, among other things, how LLMs refute the idea that syntax is unlearnable. The article titled 'How linguistics learned to stop worrying and love the language models' is here; Dr. Lupyan's commentary—'Large language models have learned to use language'—here.

29:00 – For some of Dr. Lupyan's work on the "abstractness" of even concrete concepts, see here.
35:00 – For a classic paper on the so-called symbol grounding problem, see here.

37:00 – For the preprint putting forth the "Platonic Representation Hypothesis," see here.

40:30 – For more on the data gap between children and LLMs—and what accounts for it—see Dr. Frank's paper here.

45:00 – For a sampling of Dr. Frank and colleagues' work comparing language models to children, see here, here, and here. For more on the LEVANTE project, a collaborative effort spearheaded by Dr. Frank, see here.

48:00 – For the preprint—'The Unreasonable Effectiveness of Pattern Matching,' by Dr. Lupyan and a colleague—see here.

55:00 – For more on Dr. Lupyan's perspective on the centrality of language in human cognition, see here. See also this more recent paper, considering the question in light of LLMs.

58:00 – For our earlier episode with Dr. Marina Bedny, see here. For the recent paper by Dr. Bedny and colleagues considering their research on congenital blindness in light of LLMs, see here.

1:01:00 – For classic work on language learning in blind children, see here.

1:02:00 – For a paper by Dr. Lupyan and colleagues on "hidden" individual differences, see here.

1:03:00 – For more on "multiple realizability," see here. For our earlier episode with ...