Design and the Digital Divide. Alan F. Newell


Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies


How can you subtitle live programmes?

      The key questions were whether or not subtitles should be an exact copy of the dialogue, and whether exactly the same techniques could be used as those for foreign-language films. The cost of preparing subtitles was also an issue at that time: with the technology of the day, it took 30 hours to subtitle a one-hour programme, at a cost estimated at one third of one percent of the programme budget. On this basis, the broadcasters took the view that it was too costly to provide subtitles on all but specialized programmes [Newell, A., 1979a, 1982]. Also, with speaking rates (140 to over 200 words per minute) being much faster than typing rates (of the order of 60 wpm), verbatim subtitling of live programmes would be impossible. Even “ergonomic” keyboards cannot be operated at verbatim speeds [Newell and Hutt, 1979d]. At the Clarke School for the Deaf in the U.S., a typewriter keyboard was being used to subtitle live programmes, but only the briefest of synopses was given.
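The arithmetic behind this conclusion can be sketched directly (the rates are those quoted above; the calculation itself is illustrative, not from the original text):

```python
# Why verbatim live subtitling was infeasible with 1970s keyboard input:
# a typist simply cannot keep up with speech.
speech_wpm_low, speech_wpm_high = 140, 200   # typical speaking rates (wpm)
typing_wpm = 60                              # skilled typist (wpm)

# Fraction of the dialogue a typist could capture verbatim:
coverage_worst = typing_wpm / speech_wpm_high  # fastest speech
coverage_best = typing_wpm / speech_wpm_low    # slowest speech
print(f"verbatim coverage: {coverage_worst:.0%} to {coverage_best:.0%}")
# roughly 30% to 43% of the words -- so heavy editing, or a faster input
# method such as a chord keyboard, is unavoidable for live programmes.
```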

      There was some experience of subtitling for deaf people. TV guidelines had been produced in the U.S., but these were based on the needs of pre-lingually deaf children [Shulman, J., 1979]. In the UK, the British Broadcasting Corporation had significant experience with their subtitled Sunday evening “News Review” programme, and other programmes such as “Life on Earth”. More research was needed, and the Independent Broadcasting Authority (IBA) commissioned my group to research the questions listed in Section 3.1. To this end we employed Rob Baker—a psycholinguist with experience of working with deaf people. This was my first experience of an interdisciplinary project, and having a subject expert as a full-time member of the team proved to be crucial to the research.

      The range of linguistic abilities of the viewers of subtitles for the deaf is much greater than for foreign language films, and thus we believed that, in contrast to this type of subtitling, the text would need to be an edited version of the sound track. Our early tests had shown that verbatim subtitles could put too high a reading load on pre-lingually deaf people, who tend to have poor literacy. (The average reading age of a pre-lingually profoundly deaf school leaver had been established as approximately eight years, and, for most sign language users, English is a second language.) The hearing viewer of foreign language films also receives a great deal of information from the sound track (including knowledge of who is speaking, the emotional content of speech, noises off, and background music).

      Editing, however, is not easy—language does not have a homogeneous structure, its rules changing dramatically with the talker, the assumed listener, the message, and the environment. Thus, in certain circumstances, a single word can make an important difference—the word ‘not’ obviously changes a sentence, but some adjectives also have great importance (e.g., “the Prime Minister said that there was (little) (real) truth in the statement….”). It is also clear that the soundtrack has different roles in different programmes. In sports, it is mainly there to communicate excitement, and a most bizarre form of English can be found in football commentaries, which often have very low information rates and strange syntax. Music also makes an important contribution to mood. The question arises of how such information should be transmitted in subtitles.

      We experimented with showing hearing-impaired audiences verbatim subtitles, and compared these with various edited subtitles and with ones where a commentary, rather than a version of the words spoken, was provided. The commentaries were not liked, and the edited subtitles were found to be easier to read. The mismatch between lips and words on screen was not found to be a significant problem, and we found a general preference for edited subtitles, but only edited to produce a manageable reading rate [Baker et al., 1981].

      Different views were expressed for different programme types: e.g., viewers preferred edited subtitles for news and serious drama, whereas verbatim subtitles were preferred for chat shows and comedy. A further interesting finding was the difference between the impact of the spoken and the written word, the most obvious example being the use of swear words. These have a greater impact when read as subtitles than when heard. There were also cases where the subtitler had to decide whether to retain the words used, with the danger of changing the impact, or retain the impact and change the words [Baker and Newell, 1980]. On the basis of these results Rob Baker [1981] produced a comprehensive set of guidelines for the IBA.

      In the UK there is a tendency to provide short, uncomplicated sentences, but in the U.S., fully verbatim subtitling is preferred. This reflects a different compromise between making the subtitles easy to read and being “faithful” to the original spoken words.

      Reading rapidly changing subtitles while watching the picture is not a trivial task, and viewers need all the help they can get. It is very difficult to watch the picture and read rolling subtitles, and our experiments showed that splitting the subtitles into meaningful units was helpful. There is also a question, particularly with live subtitling, of whether the subtitles should follow the speech word by word or be presented as meaningful units. We also found that a rectangular box with a black or misted background, closely fitting round the subtitles, increased readability. The changing shape of this box had the advantage of giving a clue that the subtitle had changed. On the other hand, retaining a subtitle over a video cut had the effect of causing the viewer to read the subtitle again.

      In the 1970s a number of groups were investigating subtitling live programmes. At Leicester Polytechnic, Booth and Barnden [1979] worked with the BBC. They used a Palantype speech transcription system they had developed, which ran on a large computer with an 80,000-word dictionary, and inserted single-line subtitles into the television signal. Independent Television used the Palantype transcription system that we had developed for Jack Ashley. This had a microprocessor with a 1,000-word dictionary plus transliteration software, and produced multiline subtitles. In 1979, the U.S. government set up a National Captioning Institute, with a staff of 40, captioning 16 hours of programmes per week [McCoy and Shumway, 1979]. They produced a set of guidelines in 1980, and were investigating live subtitling using the American Stenograph system. Somewhat later, the Dutch investigated the use of the Velotype keyboard (1983) for live subtitling.
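The dictionary-plus-transliteration approach these systems shared can be sketched as follows. This is a hypothetical toy, not the actual Palantype software: the chord codes, dictionary entries, and transliteration rules here are invented for illustration. The principle is that each machine-shorthand chord is first looked up in a word dictionary; on a miss, rule-based transliteration produces a readable (if imperfect) fallback rather than nothing.

```python
# Toy stand-in for the 1,000- or 80,000-word lexicons mentioned above.
# Chord spellings here are invented, purely for illustration.
DICTIONARY = {
    "SU+B": "subtitle",
    "PRO+G": "programme",
}

# Toy symbol-to-letter transliteration rules for dictionary misses.
TRANSLIT = {"+": ""}  # chord separator produces no output letter

def transcribe(chord: str) -> str:
    """Return the dictionary word for a chord, else a transliteration."""
    if chord in DICTIONARY:
        return DICTIONARY[chord]
    # Fall back to letter-by-letter transliteration in lower case.
    return "".join(TRANSLIT.get(ch, ch.lower()) for ch in chord)

print(transcribe("SU+B"))   # dictionary hit: "subtitle"
print(transcribe("NE+WS"))  # miss -> transliterated as "news"
```

The trade-off between the two systems described above is visible in this structure: a larger dictionary means fewer (often odd-looking) transliterated fallbacks, but in the 1970s it also meant needing a large computer rather than a microprocessor.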

      All live subtitling systems have the disadvantage that the subtitles are a few seconds behind the words spoken (this could be solved by delaying the programme by a few seconds, but this is unlikely to be acceptable to the broadcasters). The potential mismatch between the picture and the subtitle can be confusing, and potentially embarrassing. ORACLE (Independent Television’s teletext system) was used to subtitle the 1981 Royal Wedding, using a QWERTY keyboard and the NEWFOR system (see below). A significant part of the proceedings could be prepared in advance, but not all. A video clip of a country home of Lord Mountbatten was shown, with the commentary that “this is where the Royal couple will spend the next three days”. Unfortunately, this caption appeared above the next video clip, which was of a four-poster bed.

      The Southampton group did not think that the current systems for preparing subtitles for television were particularly efficient, and suggested a research programme to develop a specially designed system. The broadcasters, however, were not convinced of the value of such work. Thus, rather than a funded research programme, this research was begun by a research student. It started with a detailed study of the tasks involved in subtitle preparation. These were:

      • programme preview;

      • text input and formatting;

      • synchronization with video/soundtrack, and timing; and

      • review and modification where necessary.
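The tasks above imply the data a preparation system must manage: edited text, cue-in and cue-out times against the soundtrack, and a check that the result is readable. A minimal sketch of such a record (the class and field names are hypothetical, not Lambourne's design):

```python
from dataclasses import dataclass

@dataclass
class Subtitle:
    """One prepared subtitle: edited text plus timing against the video."""
    text: str       # edited caption text
    start_s: float  # cue-in time, seconds into the programme
    end_s: float    # cue-out time, seconds into the programme

    def reading_rate_wpm(self) -> float:
        """Words per minute a viewer must sustain to read this subtitle —
        the quantity the review step would check against a comfortable
        reading rate before broadcast."""
        words = len(self.text.split())
        return 60.0 * words / (self.end_s - self.start_s)

sub = Subtitle("The Prime Minister denied the statement.", 12.0, 16.0)
print(f"{sub.reading_rate_wpm():.0f} wpm")  # 6 words over 4 s -> 90 wpm
```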

      Lambourne et al. [1982a] listed the important factors to take into account:

      1.
