(2017) found that the use of an auxiliary variable from an Internet database (DB) source, even one highly correlated with the target variable, does not guarantee an improvement in the quality of the estimates if selectivity affects the source. Bias may occur because some subgroups are absent from the source. An analysis of the DB variable, and of the relationship between the populations covered and not covered by the DB source, is therefore a fundamental step in deciding how to use the source and which framework to implement to assure high‐quality output.
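
      A minimal simulation sketch (in Python, with illustrative parameters that are not taken from the study) makes the selectivity point concrete: even when the auxiliary variable in the DB source is highly correlated with the target variable, an estimate based only on the covered units is biased when coverage depends on that variable.

```python
import random

random.seed(1)

# Simulate a population in which the target variable y is highly
# correlated with an auxiliary variable x available from an
# Internet DB source. All parameters are illustrative.
N = 100_000
population = []
for _ in range(N):
    x = random.gauss(0, 1)
    y = 2.0 * x + random.gauss(0, 0.5)  # strong x-y correlation
    # Selectivity: the DB source mostly covers units with high x,
    # so one subgroup (low x) is largely absent from the source.
    covered = random.random() < (0.9 if x > 0 else 0.1)
    population.append((y, covered))

true_mean = sum(y for y, _ in population) / N
covered_y = [y for y, c in population if c]
db_mean = sum(covered_y) / len(covered_y)

print(f"true population mean of y : {true_mean:+.3f}")
print(f"mean of y in the DB source: {db_mean:+.3f}  (selection bias)")
```

      The bias appears because coverage depends on x, not because x is a poor predictor; this is why comparing the populations covered and not covered by the DB source is a necessary step before using it.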

      In conclusion, the approach that combines web scraping and administrative data with a web survey looks promising; nevertheless, the quality of the resulting estimates is satisfactory only in some cases. The use of big data has to be evaluated carefully, especially if selectivity affects the source.

      Wells and Thorson (2015) introduce a novel method that combines a “big data” measurement of the content of individuals' Facebook (FB) news feeds with traditional survey measures to explore the antecedents and effects of exposure to news and politics content on the site. This hybrid approach is used to untangle distinct channels of public affairs content within respondents' FB news feeds.

      The authors explore why respondents vary in the extent to which they encounter public affairs content on the site. Moreover, they examine whether the amount and type of public affairs content flowing through one's FB news feed are associated with political knowledge and participation above and beyond self‐report measures of news media use.

      To combine a survey with measurements of respondents' actual FB experiences, they created an FB application (“app”) and embedded it within an online survey.

      Respondents, undergraduates at a large Midwestern public university, visited a web page and gave two sets of permissions: they first consented to be participants in a research study (a form required by the institutional review board), and then they separately approved the app through their FB profile. Once they approved the app, they were returned to the survey to complete the questionnaire. While respondents completed the questionnaire, the app recorded specific elements of their FB experience (with respondents' permission), such as how many friends they had, what pages they followed, and what content appeared in their news feeds during the previous week. By the time respondents had completed the survey, the app had finished its work and automatically removed itself from their profiles. The research was approved by a standard university institutional review board and was designed to comply with FB's Platform Policies and Statement of Rights and Responsibilities, each of which placed restrictions on the use and presentation of the data.

      The resulting database offers an original combination of respondents' self‐reported attitudes and media behaviors with a measure of part of their actual FB experience.
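
      As an illustration only, with hypothetical record names and fields (not the authors' actual data structure), the following Python sketch shows the kind of linkage such a database enables: app‐collected feed measures are joined to self‐reported survey answers by a respondent key, and an observed exposure measure is derived.

```python
# Hypothetical sketch: linking self-reported survey answers with
# app-collected FB feed measures by respondent ID. All names,
# fields, and values are illustrative, not from the study.
survey_responses = {
    "r001": {"self_reported_news_use": 4, "political_knowledge": 7},
    "r002": {"self_reported_news_use": 1, "political_knowledge": 3},
}

# Per-respondent feed items recorded by the app (previous week).
feed_items = {
    "r001": [{"topic": "politics"}, {"topic": "sports"}, {"topic": "politics"}],
    "r002": [{"topic": "music"}],
}

def link_records(surveys, feeds):
    """Merge survey and feed data; derive an observed exposure count."""
    linked = {}
    for rid, answers in surveys.items():
        items = feeds.get(rid, [])
        exposure = sum(1 for item in items if item["topic"] == "politics")
        linked[rid] = {**answers,
                       "n_feed_items": len(items),
                       "observed_pa_exposure": exposure}
    return linked

for rid, record in link_records(survey_responses, feed_items).items():
    print(rid, record)
```

      The value of the hybrid design lies in exactly this kind of record: an observed behavioral measure of exposure can be related to knowledge and participation above and beyond the self‐report measure.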

      From the statistical point of view, the study has limitations (Beręsewicz et al., 2018; Biffignandi and Signorelli, 2016). The empirical study was run on a small sample of college volunteers, so the authors make no claim of representativeness. In addition, they considered only a single platform (FB). These limitations suggest treating the results as a first experimental study. However, the proposed approach is in line with interesting methodological innovations that combine social media trace data with conventional methods. It opens the way to a better understanding of big data and, in turn, to relating big data descriptive information to socioeconomic theoretical hypotheses.

      1.2.6 HISTORIC SUMMARY

      The history above shows that technological changes have shaped survey taking and survey methods:

 Paper questionnaires were used exclusively for decades, until the 1970s and 1980s, both for self‐completion and for interviewer administration. Processing the data was expensive and focused on eliminating survey‐taking mistakes.

       Computer questionnaires at first were used solely for interviewing, while paper questionnaires were still used for self‐completion.

 The advent of the Internet meant that self‐completion could now be computer based, but at first this was limited to browsers on PCs.

       Computing advances in hardware, software, and connectivity enabled and forced changes in survey taking, processing, and methods.

      1.2.7 PRESENT‐DAY CHALLENGES AND OPPORTUNITIES

      In the past 15 years, rapid technical and social changes have introduced a number of challenges and opportunities. The following is a high‐level list of challenges:

 The respondent is much more in charge of the survey, including whether and how he or she will participate.

       There is such a vast proliferation of computing devices and platforms that survey takers cannot design and test for each possible platform.

 Modern‐day surveys must be accessible to all self‐respondents, including blind, visually impaired, and motor‐impaired respondents.

 Few survey practitioners have all the skills needed to design surveys effectively for all platforms and to make them accessible at the same time.

      Pierzchala (2016) listed a number of technical challenges that face survey practitioners. The list was developed to communicate the magnitude of the challenges. The term multis refers to the multiple ways in which surveys may have to adapt for a particular study:

       Multicultural surveys: There are differences in respondent understanding, values, and scale spacing due to various cultural norms. These can lead to different question formulation or response patterns.

       Multi‐device surveys: There are differences in questionnaire appearance and function on desktops, laptops, tablets, and smartphones.

       Multilingual surveys: There are translations, system texts, alphabetic versus Asian scripts, left‐to‐right versus right‐to‐left scripts, and switching languages in the middle of the survey.

       Multimode surveys: There are interviewer‐ and self‐administered surveys such as CATI and CAPI for interviewers and browser and paper self‐completion modes (Pierzchala, 2006).

       Multinational surveys: There are differences in currency, flags and other images, names of institutions, links, differences in social programs, and data formats such as date display.

       Multi‐operable surveys: These are differences in how the user interacts with the software and device including touch and gestures versus keyboards with function keys. Whether there is a physical keyboard or a virtual keyboard impacts screen space for question display.

       Multi‐platform surveys: These are differences in computer operating systems, whether the user is connected or disconnected to/from the server, and settings
