The Concise Encyclopedia of Applied Linguistics. Carol A. Chapelle

composed of related discrete lower‐level ability components. While agreement on a comprehensive list of these components has not been reached (nor does there exist an agreed‐upon theory of how these components operate with each other), some research indicates that listening ability may include three lower‐level abilities: the abilities to understand global information, to comprehend specific details, and to draw inferences from implicit information (Min‐Young, 2008). Test developers typically draw upon these in defining a listening construct in the first stages of test development.

      The speaker's accent is another important factor that affects listening comprehension. Research has shown that the use of different speech varieties can have profound effects on listening comprehension in assessment contexts, even when those varieties are very similar. Most notably, the stronger an accent, that is, the less similar it is to a particular speech variety, the more challenging it is to comprehend (Ockey & French, 2016; Ockey, Papageorgiou, & French, 2016). Language learners also find familiar accents easier to comprehend than unfamiliar ones (Tauroza & Luk, 1997; Major, Fitzmaurice, Bunta, & Balasubramanian, 2002; Harding, 2012; Ockey & French, 2016).

      Other important factors of oral communication known to affect listening comprehension include prosody (Lynch, 1998), phonology (Henricksen, 1984), and hesitations (Freedle & Kostin, 1999). Brindley and Slatyer (2002) also identify length, syntax, vocabulary, discourse, and redundancy of the input as important variables.

      Types of interaction and relationships among speakers are also important factors to take into account when designing listening assessment inputs. Monologues, dialogues, and discussions among a group of people are all types of interaction that one would be likely to encounter in real‐world listening tasks. Individuals might also expect to listen to input with various levels of formality, depending on the relationship between the speaker and the listener.

      Decisions about the characteristics of the desired listening assessment tasks should be based on the purposes of the test, the test takers' personal characteristics, and the construct that the test is designed to measure (Bachman & Palmer, 2010). Buck (2001) provided the following guidelines concerning listening tasks, which may be applicable to most listening test contexts: (a) listening test input should include typical realistic spoken language, commonly used grammatical knowledge, and some long texts; (b) some questions should require understanding of inferred meaning (as well as global understanding and comprehension of specific details), and all questions should assess linguistic knowledge rather than general cognitive abilities; and (c) test takers should have similar background knowledge of the content to be comprehended. Assessment should also target the message conveyed by the input, not the exact vocabulary or grammar used to transmit it, and should cover various types of interaction and levels of formality.

      Constructed response item types, which require test takers to create their own response to a comprehension question, have also become increasingly popular. These item types require short or long answers, and include summaries and completion of organizational charts, graphs, or figures. One item type that has received increasing attention is the integrated listen–speak item: test takers listen to an oral input and then summarize or discuss the content of what they have heard (Ockey & Wagner, 2018). Constructed response item types have been shown to be more difficult for test takers than selected response item types (In'nami & Koizumi, 2009) and may therefore be more appropriate for more proficient learners. Most test developers and users have avoided constructed response item types because scoring them can be less reliable and can require more resources. Recent developments in computer technology, however, have made scoring productive item types increasingly reliable and practical (Carr, 2014), which may lead to their increased use.

      Another listening task used in tests today is sentence repetition, which requires test takers to orally repeat what they hear; the analogous written task, dictation, requires them to write what they hear. As with constructed response items, computer technology has made the scoring of sentence repetition and dictation objective and practical. Translation tasks, which require test takers to translate what they hear in the target language into their first language, are also popular for assessing listening, especially when everyone who is assessed has the same first language.

      Validly assessing second language listening comprehension presents a number of challenges. The process of listening comprehension is not completely understood, and there are currently no methods that allow direct observation of what a listener has comprehended. Instead, the listener must indicate what has been understood. The medium of this indication, along with other factors, can diminish the validity of listening assessments.

      The majority of listening tasks require test takers to select responses from given choices or to use speaking, reading, or writing skills to demonstrate comprehension of the input. For instance, most forms of multiple‐choice, true/false, matching, short‐answer, and long‐answer items require test takers to read the questions and make a selection or provide a written response, while other tasks, such as sentence repetition, require oral responses. Because learners must use other language skills when their listening is assessed, their scores may not represent their listening ability in isolation, as when simply watching a movie.

      Scores on listening assessments are compromised in various ways depending on the tasks that test developers choose to use. Therefore, listening assessment developers and users should take into account the abilities of
