If you understand the sentence that you are reading right now, we invite you to pause and ask yourself: how is that even possible? Your eyes are just staring at squiggly lines on a screen. But your mind somehow recognizes these lines as letters, combines them into words, and figures out how these words fit together to create phrases and sentences that are well-formed and have complex meanings. The fact that you are similarly able to make sense of air vibrations (when you listen to someone speak), or of hands and faces making dynamic shapes (in sign language), is no less remarkable. For most of us, understanding languages that we are fluent in feels automatic and effortless most of the time. But underneath the surface, our minds have to overcome many challenges to make comprehension possible: they carry out a variety of highly sophisticated computations and combine information from many sources. In our research, we are discovering how our minds comprehend language, and how they do it so quickly, reliably, and flexibly.
Studying language comprehension is important for several reasons. First, language is a defining characteristic of our species (every human culture has language, but no other species does); we want to understand what is so special about the human mind that allows us to use it. Second, knowing how language "works" in our minds could help us treat individuals whose language comprehension mechanisms are impaired, for instance, due to brain damage or developmental disorders. Third, efforts to design artificial intelligence systems that understand language (like Siri, Alexa, Google Translate, and ChatGPT) can take inspiration from the best language comprehension system that currently exists: the human brain.
What are the questions?
The mission of our research is to carve the process of “comprehension” into its components: to determine (1) the distinct mechanisms involved in language processing; (2) the division of “mental labor” across them during comprehension; and (3) their place within the broader architecture of the human mind. This mission addresses questions at three levels:
First, which components of comprehension get their own, dedicated cognitive machinery? Which are inseparable from one another and are supported by a joint mechanism? And which rely on general cognitive mechanisms that serve many domains beyond language?
Second, when we “know the meaning” of a word, phrase, or sentence, what kind of knowledge do we have? What is the format of the mental structures that our minds construct during comprehension? Which distinctions in meaning do these structures make more—or less—salient? And what algorithms are used to manipulate these structures?
Third, how is comprehension implemented in the neural circuits of the brain? What constraints do the characteristics of the biological tissue inside our skulls place on the processes of comprehension? How do the neural mechanisms of language processing mature from infancy to adulthood? How does the brain deal with comprehension challenges, such as learning a new language, conversing in a loud bar, listening to speech in an unfamiliar accent, deteriorating with age, or recovering from injury? What determines whether the brain succeeds in facing these challenges?
Our lab studies how comprehension unfolds in our minds using a combination of neuroimaging, behavioral, and machine learning approaches.
Our neuroimaging research uses a unique approach. We are one of only a handful of labs around the world that study language by combining four tools, each with its own strengths that complement the others: first, we use precision mapping to identify functional regions separately in each individual brain; second, we measure neural activity in these networks as participants passively listen to stories—this is a naturalistic paradigm that mimics comprehension "in the wild", rather than relying on artificial experiments that study comprehension "in the lab"; third, we analyze our data using state-of-the-art computational analyses; and fourth, we use methods from network neuroscience to characterize brain regions as integrated systems rather than in isolation.
In a secondary line of work, we use behavioral methods from psycholinguistics (such as measuring reading times) to study (1) how the human mind combines different sources of information to understand the meaning of a word, phrase, or sentence; and (2) how different cognitive systems work together to achieve that goal.
Finally, some of our projects use computational methods to evaluate representations of meaning that are generated by machine learning systems that process language (such as artificial neural networks trained with deep learning). We examine what knowledge—about words, their combinations, and the underlying concepts—is implicitly captured by these representations, and compare it against human knowledge. We are uncovering ways in which those machines are similar to the human mind, ways in which they are different, and ways in which they can inform cognitive theories about the nature of language and meaning.
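One common way to carry out this kind of comparison (sketched here as a toy illustration, not as our actual pipeline) is to compute similarity scores between word pairs from a model's vector representations and then ask how well those scores track human similarity judgments. The embeddings, word pairs, and human ratings below are all invented for demonstration purposes; a real study would use representations extracted from a trained model and ratings collected from human participants.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: how aligned they are."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def spearman(x, y):
    """Spearman rank correlation (assumes no ties, true of these toy data)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical 3-dimensional "embeddings" for four words.
emb = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.2, 0.3],
    "car":   [0.1, 0.9, 0.4],
    "truck": [0.2, 0.8, 0.5],
}

# Hypothetical human similarity ratings (0 to 1) for the same word pairs.
pairs = [("cat", "dog"), ("cat", "car"), ("dog", "truck"), ("car", "truck")]
human = [0.85, 0.10, 0.15, 0.80]

# Model-derived similarities for the same pairs.
model = [cosine(emb[a], emb[b]) for a, b in pairs]

# A high rank correlation means the model orders word pairs by similarity
# roughly the way people do.
rho = spearman(model, human)
print(round(rho, 2))  # prints 0.8 for these toy data
```

A correlation near 1 would suggest the model's representational space captures distinctions that matter to human judgments; systematic mismatches on particular word pairs are often the more informative result, because they point to aspects of meaning the model fails to encode.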
In our main line of work, we are determining how language engages our brains. Using functional MRI, we characterize functional networks—sets of regions showing coordinated activity—that are recruited when adult native speakers understand language. What is the internal organization of these networks? Which networks are distinct vs. overlapping? How do they each contribute to comprehension? And how is information integrated across them?
How do semantic and syntactic processes interact?
How does non-linguistic (e.g., conceptual) information influence linguistic processing?
What can computational models trained on large collections of text learn about language, and about the world?
What is the role of domain-general cognitive resources in language comprehension?
What is the relationship between the brain's language processing network and other functional networks (e.g., those supporting fluid intelligence, episodic cognition, or social cognition)?