The Behavior of Animals. Group of authors
There are promising studies concerning modulatory functions of diencephalic pretectal/thalamic and hypothalamic nuclei on the stimulus-response pathways that mediate prey-catching and threat-avoiding (e.g., see Ewert & Schwippert 2006; Islam et al. 2019; Prater et al. 2020).
Sensorimotor codes
The concept of a command releasing system (CRS) interprets Tinbergen’s concept of the (innate) releasing mechanism in a neurophysiological context.
A CRS considers combinatorial aspects of stimulus perception as a sensorimotor code in a sensorimotor interface. A coded command involves different types of neurons, each type monitoring or analyzing a certain stimulus aspect, e.g., prey-selective T5.2 neurons. The idea is that a certain combination of such command elements cooperatively activates a certain motor pattern–generating system in the presence of adequate motivational and attentional inputs. It is suggested that certain command elements can be shared by different sensorimotor codes.
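The combinatorial logic of a CRS can be sketched in a few lines of Python. This is a toy illustration, not a published model: the element names (T5_2 standing in for prey-selective T5.2 neurons, TH3 for a thalamic/pretectal element), the weights, and the threshold are all hypothetical.

```python
# Toy sketch of a command releasing system (CRS); all names, weights,
# and thresholds here are hypothetical.

def crs_output(elements, code, motivation, threshold=0.5):
    """Release a motor command when the weighted combination of
    command-element activities matches the sensorimotor code and
    motivational input is present."""
    if not motivation:          # motivational gating: no drive, no command
        return False
    drive = sum(elements.get(name, 0.0) * w for name, w in code.items())
    return drive >= threshold

# Hypothetical activities; "T5_2" stands in for prey-selective T5.2 neurons.
elements = {"T5_2": 0.8, "T5_1": 0.3, "TH3": 0.1}
# A command element (here TH3) may be shared by several codes, entering
# each with a different weight.
prey_code = {"T5_2": 1.0, "TH3": -0.5}

print(crs_output(elements, prey_code, motivation=True))   # command released
print(crs_output(elements, prey_code, motivation=False))  # gated off
```

The point of the sketch is that no single element releases the behavior; only the right combination, in the presence of motivational input, does.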
Modeling toad’s visual pattern recognition
Building on the neuroethological results on the toad’s visual system (e.g., Figure 2.10), artificial pattern recognizers were developed along several lines: systems-theoretical models (Ewert & v. Seelen 1974; cit. Ewert 1984), computer models taking advantage of the relevant cytological brain structures (Lara et al. 1982), and artificial neuronal nets (ANNs) trained by backpropagation algorithms (Ewert 2004). ANNs applying algorithms for reinforcement learning, classical conditioning, and genetic operations are described by Reddipogu et al. (2002) and Yoshida (2016). Hence, there are various ways of modeling brain/behavior functions: global models are heuristic; ANNs subserve approximation and optimization, e.g., by implementing an algorithm.
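To give a flavor of the ANN approach, the following toy sketch (deliberately minimal, not one of the published toad models) trains a single logistic unit by gradient descent to prefer stimuli extended along the direction of movement ("worm") over those extended across it ("antiworm"); the features and training data are invented for illustration.

```python
# Toy single-unit network trained by gradient descent (invented data,
# not the published models): it learns to prefer worm-like stimuli,
# i.e., those extended along the movement axis rather than across it.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            err = y - t            # gradient of the cross-entropy loss
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Features: (extent along movement axis, extent across it); target 1 = prey.
data = [((4, 1), 1), ((6, 1), 1), ((1, 4), 0), ((1, 6), 0), ((2, 2), 0)]
w1, w2, b = train(data)

def is_prey(x1, x2):
    return sigmoid(w1 * x1 + w2 * x2 + b) > 0.5

print(is_prey(5, 1), is_prey(1, 5))  # worm accepted, antiworm rejected
```

The learned weights end up positive for extension along the movement axis and negative for extension across it, mirroring the worm/antiworm discrimination described above in the simplest possible form.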
Why modeling? 1) A model offers a representation of the processes within the modeled system; hence, models have an explanatory function. 2) Models are predictive; predictions can be tested by adequate experiments, and the results, in turn, may improve the model. 3) Models are, in a sense, creative, since they may exhibit unexpected properties. 4) Models provide tools toward artificial intelligence, such as in the growing field of neuroengineering.
For example, the German Federal Ministry for Research and Technology (BMFT) supported a joint project called “Sensori-Motor Coordination of Robotic Movements with Neuronal Nets” (SEKON), established 1991–1994 by scholars from neurobiology, neuroinformatics, and robotics. To study interfaces between perception and action, in one experimental platform a modularly structured ANN simulating toad vision (Fingerling et al. 1993)—in connection with a CCD camera—instructed a robot to select and pick out differently shaped workpieces from a conveyor belt (see also Further Reading, Movie A1).
Visual Perception in Primate Cortex: Dedicated, Modifiable, Crossmodal, and Multifunctional Properties in Concert
Roughly comparable to toads and other vertebrates, sensory information processing in primates proceeds in a parallel-distributed and interactive fashion. Tremendous complexity arises from cortical neuronal circuits with regard to feature analysis, plasticity, multisensory integration, and sensory substitution (Kaas 1991). Combining and binding of features is a common task (Singer 1995). There is no one single place for perception at the “top” of a sensory system. All processing levels contribute to the resulting picture (Damasio 1990).
In primate vision, Ungerleider and Mishkin (1982) showed that two neural processing streams are involved to answer the questions “what kind of object?” and “where is the object?” (see also Hubel & Livingstone 1987).
Ventral processing stream answering “what”
This processing stream originates in the small-celled system of the retina, passes through the related structures of the diencephalic lateral geniculate nucleus, LGN, and reaches—via the corresponding cortical areas V1–V4—the integration and association fields of the inferior temporal cortex, ITC (Figure 2.11).
Figure 2.11 Visual processing streams in the primate cortex. Simplified diagram; bidirectional pathways not shown. LGN, lateral geniculate nucleus; V1-V5, visual cortical areas; ITC, inferior temporal cortex; PPC, posterior parietal cortex. (Compiled after data by Ungerleider & Mishkin 1982; Hubel & Livingstone 1987).
In area V1 the different orientations of contrast borders in terms of lines (|/\—) are determined by certain neurons arranged in columns, as a prerequisite for the analysis of combinations of lines and the angles between them (L V ∧ T) explored in areas V2 and V3 (Hubel & Wiesel 1977). Shape and color are analyzed separately in layers of area V2 and are combined in V4, thus allowing assignments like “yellow banana.” Such associations depend on connections with the hippocampus.
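The idea of orientation-selective units can be sketched as template matching over a small image patch. The 3x3 templates and winner-take-all readout below are a deliberate simplification for illustration, not a model of actual V1 circuitry.

```python
# Sketch of orientation selectivity: one hypothetical "unit" per preferred
# orientation, each correlating the input patch with an oriented line
# template; the best-matching unit wins (winner-take-all readout).

TEMPLATES = {
    "|": [(0, 1), (1, 1), (2, 1)],    # vertical line through a 3x3 patch
    "-": [(1, 0), (1, 1), (1, 2)],    # horizontal line
    "/": [(2, 0), (1, 1), (0, 2)],    # rising diagonal
    "\\": [(0, 0), (1, 1), (2, 2)],   # falling diagonal
}

def preferred_orientation(patch):
    """Return the orientation whose template overlaps the patch most."""
    scores = {o: sum(patch[r][c] for r, c in cells)
              for o, cells in TEMPLATES.items()}
    return max(scores, key=scores.get)

vertical = [[0, 1, 0],
            [0, 1, 0],
            [0, 1, 0]]
print(preferred_orientation(vertical))  # "|"
```

Detecting combinations of such line responses (e.g., an L or T junction) would then amount to checking the joint activity of two oriented units over adjacent patches, which is the kind of analysis attributed above to V2 and V3.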
The ITC is involved in the recognition of gestures and postures, e.g., suitable for social communication. Neuronal responses selective to faces were discovered by Perrett and Rolls (1983) (Chapter 5). Comparable face-selective neurons were recorded in the temporal cortex of Dalesbred sheep: some neuron types preferring a conspecific’s face, others responding selectively to a German shepherd dog’s face (Kendrick 1994). It is suggested that an assembly of differently face-tuned neurons codes for the recognition of an individual face (cf. Cohen & Tong 2001).
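Such assembly coding can be illustrated with a toy population code. All activity values below are invented, and recognition is reduced to a nearest-pattern match across the assembly; this is a caricature of the idea, not of any recorded data.

```python
# Illustrative population code (hypothetical tuning values): each unit
# is tuned to a different face aspect; an individual is recognized by
# the stored assembly pattern closest to the observed one.

def recognize(observed, stored):
    """Return the identity whose stored activity pattern is closest
    (smallest summed absolute difference) to the observed pattern."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(stored, key=lambda name: dist(observed, stored[name]))

stored = {  # activity of three face-tuned units per known individual
    "sheep_A": (0.9, 0.2, 0.1),
    "sheep_B": (0.3, 0.8, 0.2),
    "dog": (0.1, 0.1, 0.9),
}
print(recognize((0.8, 0.3, 0.1), stored))  # "sheep_A"
```

The point is that no single unit identifies the face; identity is carried by the pattern of activity distributed over the whole assembly.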
Dorsal processing stream answering “where”
The question “where is the object?” is closely tied to “how should it be responded to?” This requires spatial vision in connection with analyses of object motion and depth. The stream starts in the large-celled system of the retina, continues in the related structures of the LGN, and proceeds—via the corresponding areas V1–V3 and V5—to the posterior parietal cortex, PPC (Figure 2.11).
The PPC contains neurons responsible for target-oriented reaching or grasping involving arm, hand, and fingers. Such neurons fulfill integrative tasks, and motivation plays an essential role. When a satiated monkey was offered a banana, its visual fixation neurons failed to respond or discharged sluggishly, and the animal ignored the banana (Mountcastle et al. 1975)—reminiscent of a comparable situation observed in toads.
The “what” and “where/how” processing streams are not completely segregated. A patient with damage to the “what” stream was able to reach for an object; however, if the object’s shape required an appropriate grasping pattern, the patient failed to grasp it.
Selective attention: what an individual does not like to see, it may not see
Animals, including humans, may guide their perception toward interesting parts of a scene and suppress uninteresting ones. In a behavioral experiment, monkeys were trained to direct their attention to either a red or a green stripe (Barinaga 1997). In the neurophysiological experiment, both stripes were presented within the excitatory visual receptive field of a red-sensitive neuron of area V4. When the monkey was prompted to focus on the red stripe, the neuron fired as expected. However, when the monkey was requested to focus on the green stripe, the red-sensitive neuron remained silent even though the red stripe, too, was present in its excitatory receptive field.
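The gating effect can be caricatured in a few lines. In this hypothetical model, attention acts as a filter on the stimuli competing within the receptive field, so even a preferred (red) stimulus drives no response when attention selects the green one; the function and values are invented for illustration.

```python
# Toy model of attentional gating in V4 (illustrative only): a
# color-selective unit responds to stimuli in its receptive field (RF)
# only when attention selects a stimulus matching its preference.

def v4_response(stimuli_in_rf, attended, preferred="red"):
    """Response of a color-selective unit with attention as a filter:
    an unattended preferred stimulus is suppressed."""
    if attended not in stimuli_in_rf:
        return 0.0                       # attended stimulus outside the RF
    return 1.0 if attended == preferred else 0.0

print(v4_response({"red"}, attended="red"))             # fires
print(v4_response({"red", "green"}, attended="red"))    # fires
print(v4_response({"red", "green"}, attended="green"))  # silent despite red in RF
```

The last call captures the experimental finding: the red stripe sits inside the excitatory receptive field, yet the red-sensitive unit stays silent because attention is directed at the green one.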
Studies applying functional neuroimaging technologies in humans offer a look at the activity pattern in cortical visual areas. The regional