American Music Documentary. Benjamin J. Harbert
Edit Encourages Reductive Listening through Gesture
Why gestures? Gesture draws attention to moments in time in a way that language cannot. As David McNeill argues, a fundamental difference between words and gestures is that the latter “are themselves multidimensional and present meaning complexes without undergoing segmentation or linearization. Gestures are global and synthetic and never hierarchical” (1992: 19). Language requires time to combine words into a whole structure. (It took you real time to assemble these very words into an idea.) McNeill explains: “In language, parts (the words) are combined to create a whole (a sentence); the direction thus is from part to whole. In gestures, in contrast, the direction is from whole to part. The whole determines the meanings of the parts” (1992: 19). Anyone who has tried to talk about music as it plays knows that syntax gets in the way of the music. The phrases that work are short and tend to point: “Here it comes!” “Listen!” “Wait for it …” McNeill suggests that, when paired with language, a variety of gestures can combine to create meaningful idea units: “synchronized speech and gestures where the meanings complement one another” (27). Ethnomusicologist Matt Rahaim extends McNeill’s concept to music, suggesting that gesture can similarly combine with music, forming interdependent pairs in North Indian Hindustani singing (2012: 7). His examples include a grabbing-and-pulling gesture accompanying an abrupt increase in loudness and a downward series of loops accompanying a terraced descending melody. Rahaim suggests that these gestures are powerful in musical performance because other bodies offer sympathetic ways of knowing (10). We feel the motion as we hear the sound. And corroborating McNeill’s suggestion that gesture is less systematized than language, that it is only semicultural, Rahaim finds that gesture is both idiosyncratic and inherited through learning and practice (134).
There is no one-to-one mapping of gesture to musical idea, though it is possible to make connections between kinesthesis and musical expression.
Zwerin cuts Jagger’s motion to create idea units that are both heard and felt. Chion develops the useful term “synchresis” to describe the forging together of an aural and visual event (1994: 63). In his example, the image of a human head being smashed and the sound of a watermelon being smashed form one inseparable syncretic event. Musicological synchresis, then, can be a useful way of understanding how visual elements combine with musical events to create a series of audiovisual experiences of music.
While McNeill argues that gestures are noncombinatoric (1992: 21), film can recombine gesture into parts, visual instances that align with an integral whole of sound. Here, Jagger’s motions combine with musical events in a slow twelve-bar blues in twelve-eight time. Zwerin is directing our attention with plenty of musicological synchresis. Figure 1.2 shows connections between the music and the images of Jagger. The first shot of Jagger synchronizes his head movement with an alternation of V and I chords. As illustrated in the figure, a sagittal shot accompanies the V chord and a frontal shot accompanies the I. The direction of Jagger’s face mirrors the feeling of leaving and returning to the tonic. There are three occasions in which Zwerin places an image of Jagger raising either his shoulders or his body on a IV chord and lowering them on the I chord. In these cases, the shoulders iconically match the plagal cadence of a C major chord resolving to a G major chord.
In the twelfth bar, Jagger raises his hands to prepare a stroke for the next beat. Analyzing gesture and speech, McNeill identifies three phases: preparation, stroke, and retraction. “The stroke of the gesture precedes or ends at, but at least does not follow, the phonological peak syllable of speech” (1992: 26). In a similar fashion, Zwerin places Jagger’s gestures so that the stroke lands on a musical event. On the first beat of the next chorus, Jagger’s hands drop along with a superimposition of him in close-up while the sound of the band joins. A prominent electric guitar slide coincides with the abrupt beginning of image superimposition. The sideways motion of the two overlaid images of Jagger also matches the semitone slide of the guitar. (Note the amount of time it took to read this textual description of about two seconds of film. Gesture can keep up with the temporality of music.) The superimpositions often occur with instruments becoming prominent in the mix, drawing attention to the arrangement. Slides of the guitar often accompany Jagger’s horizontal motion across the screen. Repetition syncs with Jagger’s spinning body. Another plagal cadence accompanies the rising and falling of a slow-motion jump. Claps sync perfectly with the snare backbeat. Watching, you may simply relish Jagger’s moves, but Zwerin has offered us an opportunity for a more precise reduced listening to harmony, arrangement, repetition, and musical form.
We watch carefully because of the slowed motion. We feel the musical idea through sympathetic proprioception. We sense the feelings of other bodies, in this case, the musical events through Jagger’s body. What’s more, Zwerin uses superimposition and crossfades to deemphasize visual cuts that might compete with the musical events she shows us. Chion argues that while visual cuts are generally clear—we can easily count the number of cuts in a film—aural cuts are generally masked. We rely on sound for temporal continuity when viewing a discontinuous series of shots (1994: 40–41). Chion suggests that we have a better understanding of sound in film when looking for sound events, markers of significance within a representation of time: a dog barks, a door slams, thunder claps. Carrying that analytic method to music on film, we may see music as a series of musical events that compete with visual cuts. Superimposition and crossfades draw less attention to visual disturbances and let us watch (and feel) the musical events.
Figure 1.2. Head position matching chord changes and shoulder motion matching plagal cadence.
Description of the Significant Transdiegetic Shift
The most radical moment in the “Love in Vain” sequence is a cut to Jagger looking right, motionless. The lyrics sound over the image, “The blue light was my baby …” Then, a red filter is taken off the image as the guitar shifts to a vi chord (the E minor substituting for the C major) in bar ten of the twelve-bar blues. It somehow fits the music. The image is the same, but the color is different. The E minor differs from the C major by only one note—the B, which moves to a C for the second half of the bar. The color gesture precedes the harmonic shift. The lyrics continue, “The red light was my mind.” At this point, the camera zooms out and pans around what appears to be the control room of a recording studio. We experience the shift of place after it happens. Our attentiveness to the music, heightened by the slow-motion musicological synchresis, holds although we are now in real space, in real time, in real light. Why isn’t anyone moving? As the camera continues to pan, Richards is revealed to be lying on the floor. A studio monitor beside him, he taps in time with the music. As you look at Richards and Jagger, you might suddenly think: “They’re actually hearing what I hear!” While