are impacted by a specific technology, and what people do when encountering a new technology in a given setting. The output can be used to inform the development of new understandings, theories, or concepts about human behavior in the real world. This includes rethinking cognitive theories in terms of ecological concepts (e.g., situated memory) and socio-cultural accounts (e.g., the effects of digitalization on society). More specifically, RITW can be concerned with investigating an assumption, such as whether or not a technology intervention can encourage people to change a behavior (e.g., exercising more). This can be operationalized as a research question to be evaluated in the wild, such as: will providing employees with free activity trackers encourage them to develop new social practices at work (e.g., buddying up, competing with each other) that help them become fitter and healthier? The perspective taken for this kind of RITW is to observe how people react to, change, and integrate the technology in question into their everyday lives over a period of time.

      RITW is broad in its scope. Some have questioned the need for yet another term for what many HCI researchers would claim they have been doing for years. Indeed, applied research has been an integral part of HCI, addressing real-world problems by conducting field studies, user studies, and ethnographies, the outputs of which are intended to inform system design, often through community engagement. So, what is the value of coining another label? We would argue that, first, it is now widely used not just in HCI but also in a number of other disciplines, including biology and psychology, reflecting a growing trend towards pursuing more research in naturalistic settings. Second, the term is more encompassing, covering a wider range of research compared with other kinds of named methodological approaches, such as Action Research, Participatory Design, or Research Through Design. Initial ethnographic research, followed by designing a new user experience, together with the application and/or development of theory, technology innovation, and an in situ evaluation study, are often all conducted in one RITW project.

      Hence, while the various components involved in RITW are not new, a single project often addresses several of them. Rather than focus on one aspect (e.g., developing a new technology, advancing a new method, testing the effects of a variable, or reporting on the findings of a technology intervention), research in the wild typically combines a number of interlinked strands. For example, technology innovation can initially inspire the design of a new learning activity that, in parallel, is framed in terms of a particular theory of learning. Together, they inform the design of an in situ study and the research questions it will address.

      RITW is agnostic about the methods, technologies, or theories it uses. Accordingly, it does not necessarily follow one kind of methodology, where one design phase follows another, but combines different ones to address a problem/concern or opportunity, as deemed fit. Sometimes, theory might be considered central and other times only marginal; sometimes, “off-the-shelf” technology is deployed and evaluated in an in situ study. Other times, the design and deployment of a novel device is the focus. In other settings, the focus of a project is how best to work alongside a community so that a democratic design process is followed.

      The multiple decisions that have to be made when operationalizing a problem are often the main drivers, shaping how the proposed research will address identified questions, what methods/technologies to use and what can be learned. In summary, RITW is broadly conceived, accommodating a diversity of methodologies, epistemologies and ways of doing research. What is common to all RITW projects is the importance placed on the setting and context, conducting research in the everyday and in naturalistic environments.

      A long-standing debate in HCI is concerned with what is lost and gained when moving research out of a controlled lab setting into the wild (Preece et al., 2015). An obvious benefit is greater ecological validity: an in situ study is likely to reveal more of the kinds of problems and behaviors people will have and adopt if they use a novel device at home, at work, or elsewhere. A lab study is less likely to show these aspects, as participants try to work out what to do in order to complete the tasks set for them by following the instructions given. They may find themselves having to deal with various "demand characteristics": the cues that make them aware of what the experimenter expects to find, wants to happen, or how they are expected to behave. As such, the ecological validity of lab studies can be undermined, as participants perform to conform to the experimenter's expectations.

      A downside of evaluating technology in situ, however, is that the researcher loses control over how it will be used or interacted with. In a lab, tasks can be set and predictions made in order to investigate systematically how participants manage to do them when using a novel device, system, or app. In the wild, by contrast, participants are typically given a device to use without any set tasks. They may be told what it can do and given instructions on how to use it, but the purpose of evaluating it in a naturalistic setting is to explore what happens when they try to use it in that context, where there may be other demands and factors at play. This can often mean that only a fraction of the functionality designed into the technology is used or explored, making it difficult for researchers to see whether what has been designed is useful, usable, or capable of supporting the intended interactions.

      To examine how much is lost and gained, Kjeldskov et al. (2004) conducted a comparative study of a mobile system designed for nurses in the lab vs. in the wild. They found that both settings revealed similar kinds of usability problems, but that more were discovered in the lab than in the wild study. However, the cost of running a study in the wild was considerably greater than in the lab, leading them to question "Was it worth the hassle?" They suggest that in the wild studies might be better suited to obtaining initial insights into how to design a new system, which can then feed into the requirements gathering process, while early usability testing of a prototype system can be done in the confines of the lab. This pragmatic approach to usability testing and requirements gathering makes good sense when considering how best to develop and progress a new system design. In a follow-up survey of research on mobile HCI using lab and in the wild studies, Kjeldskov and Skov (2014) concluded that it is not a matter of one being better than the other, but of when it is best to conduct a lab study vs. an in the wild study. Furthermore, they conclude that when researchers go into the wild they should "go all the way" and not settle for some "half-tame" setting. Only by carrying out truly wild studies can researchers experience and understand real-world use.

      Findings from other RITW user studies have shown that they can reveal a lot more than usability problems (Hornecker and Nicol, 2012). In particular, they enable researchers to explore how a range of factors influences user behavior in situ, in terms of how people notice, approach, and decide what to do with a technology intervention (whether one they are given to try or one they come across), going beyond what can typically be observed in a lab-based study. Rogers et al. (2007) found marked differences in usability and usefulness when comparing a mobile device in the wild and in the lab. The mobile device was developed to enable groups of students to carry out environmental science, as part of a long-term project investigating the ecological restoration of urban regions. The device provided interactive software that allowed a user to record and look up relevant data, information visualizations, and statistics. It was intended to replace the existing practice of recording measurements of tree growth on paper when in the field. Placing the new mobile device in the palms of students on a cold spring day revealed a whole host of unexpected, context-based usability and user experience problems; placing it in their palms on a hot summer day revealed a quite different set. The device was used quite differently at different times of year, when foliage and other environmental cues vary and affect the extent to which a tree can be found and identified.

      Other studies have also found that people will often approach and use prototypes differently in the wild compared with a lab setting (e.g., Brown et al., 2011; Peltonen et al., 2008; van der Linden et al., 2011). People are often inventive and creative in what they do when coming across
