Eye Tracking the User Experience. Aga Bojko
FIGURE 2.7 Fixation count heatmaps of participants attempting the “find a list of upcoming conferences” task (left: original design; right: proposed redesign). The boxed links are the correct targets.
Another interesting finding was that participants tended to look at the target link (Meetings & Education) more than once prior to its selection in the original design. In the proposed redesign, however, everyone selected the target link (Meetings) the first time they looked at it. This suggested that “Meetings” was easier to recognize and associate with conferences on its own rather than when used in combination with “Education.”
Case Study: Car Charger Packaging
Why Eye Tracking?
A product in a store often has only a few seconds to tell its story before the customer moves on to the next product. Not only must the product package immediately attract the consumer's attention, but it also has to quickly convey what is inside. This sounds rather obvious, yet a lot of packaging fails on one or both of these counts.
At the beginning of a usability study of a new mobile device accessory, participants were exposed to its intended packaging for several seconds. When asked what was in the package, most had no idea. Only upon closer inspection of the box were participants able to deduce that it housed a universal car charger, a device used for charging multiple gadgets in a car. Because these chargers would end up on a store shelf with several other chargers and phone accessories, it was unrealistic to expect customers to spend extra time with the package to determine what was inside when so many other choices were available.
Based on this finding, the designers wanted to make the product name more visible. But the name was already fairly large, and it was hard to imagine that anyone would miss it. To get to the bottom of the issue, we conducted a small follow-up eye tracking study.
How Eye Tracking Contributed to the Research
The eye movement data indicated that the product name received a great deal of attention. The recorded gaze patterns showed that participants not only noticed the text, but also appeared to have read it. But how could they have read the product name and not known what was in the package? The name of the product was “Smart Charge Mobile: A Universal Way to Charge Your Devices,” which apparently did not convey the fact that it was a car charger.
The package also displayed a car icon with the description "power multiple devices while in your car," but participants missed this information because another package element, a set of three large red icons depicting a cell phone, a digital camera, and a PDA, monopolized the rest of their attention, at least at first.
In this study, eye tracking revealed that participants were unable to determine what the product was based on the most prominent information on the package. This came as a surprise to the stakeholders; they assumed that the fact that the product was a car charger was obvious and did not need to be specifically called out. That was why the design placed much more emphasis on the fact that the charger was universal.
Two solutions for the package redesign were recommended: incorporating the word “car” in the name of the product and changing the icon design to shift the weight from the three red icons to the car icon and the text next to it.
Quantitative Insight: Measuring Differences
Quantitative insight generated by eye tracking is most useful in summative studies that evaluate products or interfaces relative to one another or to benchmarks. You can compare alternative versions of the same interface, the interface of interest to those created by competitors, or even elements within one interface (for example, different ad types) to one another along either performance-related or attraction-related dimensions. How are these comparisons actionable? They inform decisions such as which design version should be selected or if the product is ready for launch.
Sometimes, you may be asked to conduct quantitative eye tracking studies with only one interface. Because there are no absolute standards for eye tracking measures in the UX field, the data obtained from a single design carry little meaning. If participants made an average of 10 fixations to find the Buy button on a Web page, there is no way of classifying their performance as efficient or inefficient. Similarly, if 65% of participants looked at a package on a store shelf, this could be good or bad news for the stakeholders. This is no different from time on task and other quantitative usability measures—with nothing to compare the data to, you cannot interpret them and make them actionable. Only if two or more interfaces or packages were tested could you say which one made participants more efficient or drew more attention.
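To make the comparison concrete, here is a minimal sketch of how per-participant fixation counts from two tested designs might be compared. The data values are hypothetical, invented purely for illustration; a real study would also report a p-value and effect size, which are omitted here for brevity.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / (va / na + vb / nb) ** 0.5

# Hypothetical fixation counts to find the Buy button (one value per participant)
design_a = [10, 12, 9, 14, 11, 13, 10, 12]
design_b = [6, 7, 5, 8, 6, 7, 9, 6]

print(f"Design A: mean {mean(design_a):.1f} fixations")
print(f"Design B: mean {mean(design_b):.1f} fixations")
print(f"Welch's t = {welch_t(design_a, design_b):.2f}")
```

With only one of these samples in hand, the mean fixation count would be uninterpretable; it is the side-by-side comparison that supports a claim like "Design B let participants find the button more efficiently."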
The eye tracking metrics most relevant to UX are described in Chapter 7, “Eye Tracking Measures,” while Chapter 13, “Quantitative Data Analysis,” explains how to analyze them. But before we delve into all the details associated with quantitative analysis, let’s look at the two types of differences eye tracking can measure and their examples.
Measuring Performance-Related Differences
Eye tracking measures allow you to make comparisons between stimuli along performance-related dimensions, such as search efficiency, ease of information processing, and cognitive workload. While it is true that you can also use measures such as time on task or task completion rate to identify performance-related differences between interfaces, eye tracking data can help detect differences that are more subtle and difficult to observe in a lab environment using more conventional methods.
If they are so subtle and almost invisible, why are these differences important? These seemingly small differences could turn into something much larger and seriously impact performance under other, more real-world circumstances, where task complexity, fatigue, or distraction levels may be increased. Think of busy retail environments, long-distance truck driving, or even combat situations—you can't always replicate the complicated, fast-moving world that users have to deal with when using a product or an interface.
The challenge is that you will often not know whether more conventional measures will be able to detect any existing differences or whether you should use a more sensitive instrument such as eye tracking. You can only make an educated guess, based on the participant sample size, past experience with similar research, and knowledge of the environment where the tested object is typically used.
If the negative implications of poor performance are serious, you should consider supplementing the research with eye tracking to make sure all your bases are covered. Even if more conventional measures manage to reveal performance differences, the eye tracking data can then be used to support these other measures. For example, the number of fixations can be used to support time-on-task data, and pupil diameter can be used to support results obtained with subjective workload assessment tools such as the NASA Task Load Index (NASA-TLX).
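One simple way to check that an eye tracking measure corroborates a conventional one is to correlate them across participants. The sketch below computes the Pearson correlation between hypothetical per-participant time-on-task and fixation-count data; both the data and the implied relationship are invented for illustration only.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant data for one task
time_on_task = [32, 45, 28, 51, 39, 60, 35, 48]   # seconds
fixations =    [41, 58, 36, 66, 50, 79, 44, 63]   # fixation counts

r = pearson_r(time_on_task, fixations)
print(f"Pearson r = {r:.2f}")
```

A high positive correlation would indicate that the two measures are telling the same story, which strengthens the case that the observed performance difference is real rather than an artifact of one metric.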