Engineering Acoustics. Malcolm J. Crocker
4.2.3 Theories of Hearing
Pythagoras in the sixth century BCE was perhaps the first to recognize that sound is an airborne vibration [10]. Hippocrates in the fourth century BCE recognized that the air vibrations are picked up by the eardrum but thought that the vibrations were transmitted directly to the brain by bones. In 175 CE, Galen of Pergamum, a Greek physician, realized that it was nerves that transmitted the sound sensations to the brain. However, Galen and most other early scientists and philosophers mistakenly proposed that somewhere deep in the head was a sealed pocket of implanted air which was the "seat" of hearing. This view was popularly held until 1760, when Domenico Cotugno declared that the inner ear (cochlea) was completely filled with fluid [10].
In 1543, Andreas Vesalius published his treatise on anatomy giving a description of the middle ear and in 1561 Gabriello Fallopio described the cochlea itself.
In 1605 Gaspard Bauhin put forward a resonance theory for the ear. In his model, different air cavities were excited by different frequencies. However, he knew little of the construction of the inner ear. Du Verney, in 1683, developed a more advanced theory by postulating that different parts of the ridge of bone which twists up the inside of the cochlea resonated at different frequencies, depending upon the width of the ridge. Du Verney's theory was held until 1851, when Alfonso Corti, using a microscope, discovered that the thousands of hair cells on the basilar membrane were attached to the ridge of bone in the cochlea.
A few years later, Hermann von Helmholtz used Corti's findings to suggest a new theory of hearing. In Helmholtz's theory, as it became refined, different parts of the basilar membrane resonated at different frequencies. Later workers showed that Helmholtz was not exactly right (the basilar membrane is not under tension). However, in 1928 Georg von Békésy showed that waves do travel along the basilar membrane and that different sections of the membrane respond more strongly than others to a given sound. The region of maximum response is frequency-dependent: as Helmholtz had predicted, von Békésy found that high-frequency sound is detected nearer to the oval window and low-frequency sound nearer to the apex (Figures 4.3 and 4.4).
4.3 Subjective Response
So far we have traced the sound signal down the ear canal to the eardrum, through the auditory ossicles, through the oval window to the cochlear fluid, to the basilar membrane and the hair cells, and finally to the neural impulses sent to the brain. How does the brain interpret these signals? Our study now enters the realm of psychology. While the physicist or engineer talks about sound pressure level and frequency, the psychologist talks about loudness and pitch, respectively. The study of the human subjective response to sound is known as psychoacoustics. In Section 4.3 we shall discuss the relationships between some of the engineering descriptions of sound and the psychological or subjective descriptions of psychoacoustics.
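The engineering quantity mentioned above, sound pressure level, is defined as 20 log₁₀(p/p₀) dB, where p is the RMS sound pressure and p₀ = 20 μPa is the standard reference pressure in air. A minimal sketch of this standard conversion:

```python
import math

REF_PRESSURE = 20e-6  # standard reference RMS pressure in air: 20 micropascals


def sound_pressure_level(p_rms):
    """Sound pressure level in dB re 20 uPa for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms / REF_PRESSURE)


# The reference pressure itself corresponds to 0 dB (roughly the 1 kHz
# hearing threshold), while an RMS pressure of 1 Pa is about 94 dB.
print(round(sound_pressure_level(20e-6)))  # 0
print(round(sound_pressure_level(1.0)))    # 94
```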
4.3.1 Hearing Envelope
Figure 4.5 presents the auditory field for an average, normal young person who has not suffered any hearing loss or damage. The lower curve represents the hearing threshold, that is, the quietest audible sound at any frequency. The upper curve represents the discomfort threshold, that is, the sound pressure level at any frequency at which there is a sensation of discomfort and even pain in the ears. Speech is mainly in the frequency range of about 250–6000 Hz and at sound pressure levels between about 30 and 80 dB at 1–2 m (depending upon frequency). Of course, the sound pressure level of speech can approach 90 dB at about 0.2–0.3 m from someone who is shouting loudly. The sound of vowels is mostly in the low-frequency range from about 250 to 1000 Hz, while the sound of consonants is mainly in the higher frequency range of about 1000–6000 Hz. Music is spread over a somewhat greater frequency range and a greater dynamic range than speech. (The dynamic range represents the difference in levels between the lowest and highest sound pressure levels experienced.)
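The dynamic range defined in the parenthetical remark above is simply a difference of levels in dB. A short sketch, using hypothetical sample levels (illustrative numbers only, not measured data):

```python
def dynamic_range_db(levels_db):
    """Dynamic range in dB: highest minus lowest sound pressure level."""
    return max(levels_db) - min(levels_db)


# Hypothetical SPLs (dB) sampled during conversational speech and during
# an orchestral piece -- illustrative values, not measurements.
speech_levels = [35, 50, 65, 78]
music_levels = [25, 40, 70, 100]

print(dynamic_range_db(speech_levels))  # 43
print(dynamic_range_db(music_levels))   # 75
```

Consistent with the text, the music example spans both a wider frequency range and a wider dynamic range than the speech example.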
Figure 4.5 Human auditory field envelope.
4.3.2 Loudness Measurement
The way in which the brain interprets the neural pulses is still a matter for research. However, various experiments have been conducted on groups of people to determine the average sensation of loudness and other subjective quantities. We should stress that no one's hearing is exactly the same as anyone else's, and hence we must rely on statistical responses.
Figure 4.6 shows equal loudness contours for pure tone sounds. Note that the lowest curve in Figure 4.6 is labeled MAF (minimum audible field). This is the hearing threshold: the quietest sound at each frequency that young people with normal hearing can, on average, detect.
Figure 4.6 Equal loudness contours. The contours join the sound pressure levels of different frequency pure tones that are judged to be equally loud. The numbers on each contour are the loudness levels in phons.
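By definition, the phon scale is anchored at 1000 Hz: a tone's loudness level in phons equals the sound pressure level in dB of the equally loud 1000 Hz tone, so at 1000 Hz the phon and dB values coincide. Reading the required SPL at another frequency off a contour can be sketched with simple linear interpolation. The contour values below are hypothetical, chosen only to illustrate the procedure; they are not measured equal-loudness data:

```python
def interp(x, xs, ys):
    """Linear interpolation of y at x over sorted sample points (xs, ys)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("frequency outside tabulated range")


# Hypothetical SPLs (dB) along a single equal-loudness contour --
# illustrative numbers only, not actual contour measurements.
freqs_hz = [125, 250, 500, 1000, 2000, 4000]
contour_60_phon = [72, 66, 62, 60, 59, 57]

# On the 60-phon contour, the SPL at 1000 Hz is 60 dB by definition.
print(interp(1000, freqs_hz, contour_60_phon))  # 60.0
# In this sketch, a 250 Hz tone needs about 66 dB to sound equally loud.
print(interp(250, freqs_hz, contour_60_phon))   # 66.0
```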
We should note that there are two ways that we can measure the hearing threshold and the equal loudness contours. The first way is to present the listener with a free progressive wave field at a discrete frequency. This is the method that was used to obtain the results in Figure 4.6. Such measurements are normally made with the listener facing the source in an anechoic room, where there are no reflections. The second way (which is used more frequently) is to present the listener with sounds played through earphones. There are some small differences in the results obtained by the two methods. The equal loudness contours are determined as follows. For the 60‐phon curve, the listener is first presented with a pure tone at 1000 Hz at a sound