Using Stories to Understand How the Brain Represents Words
Human beings have the unique ability to extract meaning, or semantic content, from spoken language. Yet little is known about how the semantic content of everyday narrative speech is represented in the brain. UC Berkeley neuroscience postdoc Alex Huth used a new fMRI-based approach to show that semantic information is represented in complex cortical maps that are highly consistent across subjects. Using BOLD data collected while subjects listened to several hours of natural narrative stories, Huth constructed voxel-wise semantic regression models that accurately predict BOLD responses from semantic features extracted from the stories; these features were defined using a statistical word co-occurrence model. He then used a novel Bayesian generative model of cortical maps to discover how the representations revealed by voxel-wise modeling are organized across the cortical sheet. These analyses show that the semantic content of narrative speech is represented across parietal, prefrontal, and temporal cortex in complex maps comprising dozens of semantically selective brain areas.
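The core idea of voxel-wise modeling can be sketched in a few lines: each time point of the stimulus is described by a vector of semantic features, and a separate regularized linear model is fit for every voxel to predict its BOLD response from those features. The sketch below uses simulated data and ridge regression; all dimensions, the noise level, and the regularization value are illustrative assumptions, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1000 time points, 50 semantic features, 200 voxels.
n_time, n_feat, n_vox = 1000, 50, 200

# Stimulus matrix: each row holds the semantic features of the speech heard
# at that time point (in the study these come from a word co-occurrence model;
# here they are simulated).
X = rng.standard_normal((n_time, n_feat))

# Simulate BOLD responses as a linear function of the features plus noise.
true_w = rng.standard_normal((n_feat, n_vox))
Y = X @ true_w + 0.5 * rng.standard_normal((n_time, n_vox))

# Voxel-wise ridge regression: one weight vector per voxel, fit jointly via
# the closed-form solution (X^T X + alpha I)^{-1} X^T Y.
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Evaluate on held-out simulated data, scoring each voxel by the correlation
# between predicted and actual responses.
X_test = rng.standard_normal((200, n_feat))
Y_test = X_test @ true_w + 0.5 * rng.standard_normal((200, n_vox))
Y_pred = X_test @ W

r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_vox)])
print(f"mean held-out correlation: {r.mean():.2f}")
```

The per-voxel weight vectors (columns of `W`) are what make this approach interpretable: each voxel's weights describe which semantic features drive its response, and it is these weight patterns that can then be compared and mapped across the cortical sheet.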
Alex Huth is currently a neuroscience postdoc in Dr. Jack Gallant's laboratory at UC Berkeley, where he works in computational and experimental neuroscience using fMRI.
The Health Humanities Lab, run by the Franklin Humanities Institute, is an initiative that bridges the humanities, Duke Health, and the Duke Global Health Institute.