The Brain. David Eagleman
The lesson that surfaces from Mike’s experience is that the visual system is not like a camera. It’s not as though seeing is simply about removing the lens cap. For vision, you need more than functioning eyes.
In Mike’s case, forty years of blindness meant that the territory of his visual system (what we would normally call the visual cortex) had been largely taken over by his remaining senses, such as hearing and touch. That impacted his brain’s ability to weave together all the signals it needed to have sight. As we will see, vision emerges from the coordination of billions of neurons working together in a particular, complex symphony.
Today, fifteen years after his surgery, Mike still has a difficult time reading words on paper and the expressions on people’s faces. When he needs to make better sense of his imperfect visual perception, he uses his other senses to cross-check the information: he touches, he lifts, he listens. This comparison across the senses is something we all did at a much younger age, when our brains were first making sense of the world.
Seeing requires more than the eyes
When babies reach out to touch what’s in front of them, it’s not only to learn about texture and shape. These actions are also necessary for learning how to see. While it sounds strange to imagine that the movement of our bodies is required for vision, this concept was elegantly demonstrated with two kittens in 1963.
Richard Held and Alan Hein, two researchers at MIT, placed two kittens into a cylinder ringed in vertical stripes. Both kittens got visual input from moving around inside the cylinder. But there was a critical difference in their experiences: the first kitten was walking of its own accord, while the second kitten was riding in a gondola attached to a central axis. Because of this setup, both kittens saw exactly the same thing: the stripes moved at the same time and at the same speed for both. If vision were just about the photons hitting the eyes, their visual systems should develop identically. But here was the surprising result: only the kitten that was using its body to do the moving developed normal vision. The kitten riding in the gondola never learned to see properly; its visual system never reached normal development.
Inside a cylinder with vertical stripes, one kitten walked while the other was carried. Both received exactly the same visual input, but only the one who walked itself – the one able to match its own movements to changes in visual input – learned to see properly.
Vision isn’t simply about photons that can be readily interpreted by the visual cortex. Instead, it’s a whole-body experience. The signals coming into the brain can be made sense of only through training, which requires cross-referencing them against information from our actions and their sensory consequences. That is the only way our brains can come to interpret what the visual data actually means.
If from birth you were unable to interact with the world in any way, unable to work out through feedback what the sensory information meant, in theory you would never be able to see. When babies hit the bars of their cribs and chew their toes and play with their blocks, they’re not simply exploring – they’re training up their visual systems. Entombed in darkness, their brains are learning how the actions sent out into the world (turn the head, push this, let go of that) change the sensory input that returns. As a result of extensive experimentation, vision becomes trained up.
Vision feels effortless, but it’s not
Seeing feels so effortless that it’s hard to appreciate the effort the brain exerts to construct it. To lift the lid a little on the process, I flew to Irvine, California, to see what happens when my visual system doesn’t receive the signals it expects.
Dr. Alyssa Brewer at the University of California, Irvine, is interested in understanding how adaptable the brain is. To that end, she outfits participants with prism goggles that flip the left and right sides of the world, and she studies how the visual system copes with the reversal.
On a beautiful spring day, I strapped on the prism goggles. The world flipped – objects on the right now appeared on my left, and vice versa. When trying to figure out where Alyssa was standing, my visual system told me one thing, while my hearing told me another. My senses weren’t matching up. When I reached out to grab an object, the sight of my own hand didn’t match the position claimed by my muscles. Two minutes into wearing the goggles, I was sweating and nauseated.
Prism goggles flip the visual world, making it inordinately difficult to perform simple tasks, such as pouring a drink, grabbing an object, or getting through a doorway without bumping into the frame.
Although my eyes were functioning and taking in the world, the visual data stream wasn’t consistent with my other data streams. This spelled hard work for my brain. It was as if I were learning to see all over again.
I knew that life with the goggles wouldn’t stay that difficult forever. Another participant, Brian Barton, had been wearing his own pair of prism goggles for a full week, and he didn’t seem to be on the brink of vomiting, as I was. To compare our levels of adaptation, I challenged him to a baking competition. The contest would require us to break eggs into a bowl, stir in cupcake mix, pour the batter into cupcake trays, and put the trays in the oven.
It was no contest: Brian’s cupcakes came out of the oven looking normal, while most of my batter ended up dried onto the counter or baked in smears across the baking tray. Brian could navigate his world without much trouble, while I had been rendered inept. I had to struggle consciously through every move.
Wearing the goggles allowed me to experience the normally hidden effort behind visual processing. Earlier that morning, just before putting on the goggles, my brain could exploit its years of experience with the world. But after a simple reversal of one sensory input, it couldn’t any longer.
To progress to Brian’s level of proficiency, I knew I would need to keep interacting with the world for many days: reaching out to grab objects, following the direction of sounds, attending to the positions of my limbs. With enough practice, my brain would get trained up by continual cross-referencing between the senses, just as Brian’s brain had been over the past seven days. With training, my neural networks would figure out how the various data streams entering the brain matched up with one another.
Brewer reports that after a few days of wearing the goggles, people develop an internal sense of a new left and an old left, and a new right and an old right. After a week, they can move around normally, the way Brian could, and they lose the concept of which right and left were the old ones and new ones. Their spatial map of the world alters. By two weeks into the task, they can write and read well, and they walk and reach with the proficiency of someone without goggles. In that short time span, they master the flipped input.
The brain doesn’t really care about the details of the input; it simply cares about figuring out how to most efficiently move around in the world and get what it needs. All the hard work of dealing