Nearly 30 years ago, scientists demonstrated that visually recognizing an object, such as a cup, and performing a visually guided action, such as picking the cup up, involve distinct neural processes located in different areas of the brain. A new study shows that the same is true for how the brain perceives our environment: it has two distinct systems, one for recognizing a place and another for navigating through it.
The Journal of Neuroscience published the finding by researchers at Emory University, based on experiments using functional magnetic resonance imaging (fMRI). The results showed that the brain’s parahippocampal place area responded more strongly to a scene recognition task while the occipital place area responded more to a navigation task.
The work could have important implications for helping people to recover from brain injuries and for the design of computer vision systems, such as self-driving cars.
“It’s thrilling to learn what different regions of the brain are doing,” says Daniel Dilks, senior author of the study and an assistant professor of psychology at Emory. “Learning how the mind makes sense of all the information that we’re bombarded with every day is one of the greatest of intellectual quests. It’s about understanding what makes us human.”
Entering a place and recognizing where you are, whether it’s a kitchen, a bedroom or a garden, feels instantaneous, and almost as quickly you can begin making your way around it.
“People assumed that these two brain functions were jumbled up together — that recognizing a place was always navigationally relevant,” says first author Andrew Persichetti, who worked on the study as an Emory graduate student. “We showed that’s not true, that our brain has dedicated and dissociable systems for each of these tasks. It’s remarkable that the closer we look at the brain, the more specialized systems we find — our brains have evolved to be super efficient.”
Persichetti, who has since received his PhD from Emory and now works at the National Institute of Mental Health, explains that an interest in philosophy led him to neuroscience. “Immanuel Kant made it clear that if we can’t understand the structure of our mind, the structure of knowledge, we’re not going to fully understand ourselves, or even a lot about the outside world, because that gets filtered through our perceptual and cognitive processes,” he says.
The Dilks lab focuses on mapping how the visual cortex is functionally organized. “We are visual creatures and the majority of the brain is related to processing visual information, one way or another,” Dilks says.
Researchers have wondered since the late 1800s why people with brain damage sometimes experience strikingly selective visual deficits. For example, someone might have normal visual function in every way except the ability to recognize faces.
It was not until 1992, however, that David Milner and Melvyn Goodale published an influential paper delineating two distinct visual systems in the brain: the ventral stream, running through the temporal lobe, which underlies object recognition, and the dorsal stream, running through the parietal lobe, which guides actions directed at an object.
In 1997, MIT’s Nancy Kanwisher and colleagues demonstrated that a region of the brain, the fusiform face area (FFA), is specialized for face perception. Just a year later, Kanwisher’s lab delineated a neural region specialized for processing places, the parahippocampal place area (PPA), located in the ventral stream.
While working as a post-doctoral fellow in the Kanwisher lab, Dilks led the discovery of a second region specialized for processing places, the occipital place area (OPA), located in the dorsal stream.
Dilks set up his own lab at Emory in 2013, the same year that discovery was published. Among the first questions he wanted to tackle was why the brain has two regions dedicated to processing places.
Persichetti designed an experiment to test the hypothesis that place processing was divided in the brain in a manner similar to object processing. Using software from The Sims life-simulation game, he created digital images of three places: a bedroom, a kitchen and a living room. Each room had a path leading through it and out one of three doors.
Study participants in the fMRI scanner were asked to fixate their gaze on a tiny white cross. On each trial, an image of one of the rooms appeared, centered behind the cross. Participants were asked to imagine they were standing in the room and to indicate with a button press whether it was a bedroom, a kitchen or a living room. On separate trials, the same participants were asked to imagine walking along the path through the exact same room and to indicate whether they could leave through the door on the left, in the center or on the right.
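To make the two-task design concrete, here is a minimal sketch of the trial logic in Python. Everything in it is illustrative: the room labels, door mappings and print statements stand in for real stimulus-presentation and response-collection code, which the article does not describe.

```python
import random

# Hypothetical stimulus set: three rooms (rendered in The Sims in the
# study), each with a path leading out one of three doors.
ROOMS = {
    "bedroom": "left",
    "kitchen": "center",
    "living room": "right",
}

def run_trial(room, task):
    """Simulate one trial; the same image serves both tasks."""
    print("fixation cross shown")           # participant fixates a central cross
    print(f"room image appears: {room}")    # image centered behind the cross
    if task == "recognize":
        response = room                      # recognition: name the room category
    else:
        response = ROOMS[room]               # navigation: name the exit door
    print(f"  {task} response: {response}")

# Identical images appear in both tasks, so any difference in the fMRI
# response reflects the task being performed, not the visual input.
for task in ("recognize", "navigate"):
    for room in random.sample(list(ROOMS), k=len(ROOMS)):
        run_trial(room, task)
```

The key design choice is that the stimuli never change across tasks; only the participant’s goal does.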
The resulting data showed that the two brain regions were selectively activated depending on the task: the PPA responded more strongly during the recognition task, while the OPA responded more strongly during the navigation task.
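The dissociation amounts to a task-by-region crossover. The small sketch below, with made-up numbers, shows the shape of that comparison; the study’s actual analysis used measured fMRI response magnitudes, not these values.

```python
import numpy as np

# Made-up per-trial response magnitudes (e.g., percent signal change)
# for each region of interest under each task; illustrative only.
rng = np.random.default_rng(0)
responses = {
    ("PPA", "recognition"): rng.normal(1.2, 0.2, 20),
    ("PPA", "navigation"):  rng.normal(0.8, 0.2, 20),
    ("OPA", "recognition"): rng.normal(0.7, 0.2, 20),
    ("OPA", "navigation"):  rng.normal(1.1, 0.2, 20),
}

# A crossover dissociation: each region responds more strongly to its
# preferred task, ruling out one region simply being "more active."
for roi in ("PPA", "OPA"):
    rec = responses[(roi, "recognition")].mean()
    nav = responses[(roi, "navigation")].mean()
    preferred = "recognition" if rec > nav else "navigation"
    print(f"{roi}: recognition={rec:.2f}, navigation={nav:.2f} -> prefers {preferred}")
```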
“While it’s incredible that we can show that different parts of the cortex are responsible for different functions, it’s only the tip of the iceberg,” Dilks says. “Now that we understand what these areas of the brain are doing we want to know precisely how they’re doing it and why they’re organized this way.”
Dilks plans to run causal tests on the two scene-processing areas. Repetitive transcranial magnetic stimulation, or rTMS, is a non-invasive technique that uses a magnetic coil placed against the scalp to temporarily disrupt activity in a brain region. Applying it over the OPA in healthy participants will allow the lab to test whether someone can navigate without it.
The same technique cannot be used to deactivate the PPA, which lies too deep in the temporal lobe for rTMS to reach. Instead, the Dilks lab plans to recruit participants with brain injuries affecting the PPA to test for any effects on their ability to recognize scenes.
Clinical applications for the research include more precise guidance for surgeons who operate on the brain and better brain rehabilitation methods.
“My ultimate goal is to reverse-engineer the human brain’s visual processes and replicate them in a computer vision system,” Dilks says. “In addition to improving robotic systems, a computer model could help us to more fully understand the human mind and brain.”