We are thankful to the many colleagues and students who made this journey with us. We would like to express our gratitude first and foremost to a number of colleagues who generously helped us with their critical comments and generative ideas: Amanda Lotz, José van Dijck, Bernhard Rieder, and Dwayne Winseck. We are also thankful to our students, especially Maggie MacDonald and Ouejdane Sabbah, who read and commented on the first draft of the manuscript. Furthermore, we would like to thank the contributors to the Social Media + Society special collections, who provided us with new insights and rich case studies on which to draw: Arturo Arriagada, Sarah Banet-Weiser, Sophie Bishop, Tiziano Bonini, Robyn Caplan, Aymar Jean Christian, Samantha Close, David Craig, Stuart Cunningham, Faithe Day, Mark Díaz, Stefanie Duguay, Karin van Es, Maxwell Foxman, Alessandro Gandini, Tarleton Gillespie, Alison Hearn, David Hesmondhalgh, Emily Hund, Francisco Ibáñez, Mark R. Johnson, Ellis Jones, Daniel Joseph, Ji-Hyeon Kim, Jeroen de Kloet, Tamara Kneese, Jian Lin, Jeremy Wade Morris, Annemarie Navar-Gill, Victoria O’Meara, Michael Palm, William Clyde Partin, Chelsea Peterson-Salahuddin, Caitlin Petre, Robert Prey, Andreas Rauh, Marc Steinberg, John L. Sullivan, José Miguel Tomasena, Cynthia Wang, Jamie Woodcock, Chris J. Young, and Jun Yu.
Thanks as well go to the tutorial students, whose feedback helped us to enhance the focus of the book at an early stage of its conception: Lukas Beckenbauer, Jueling Hu, Daphne Idiz, Vanessa Richter, and Ziwen Tang. The students of the Research Master's in Media Studies at the University of Amsterdam, meanwhile, provided rich feedback on the first draft of the manuscript as part of the course Research Practices in Media Studies (2020–21). Additionally, we are grateful to Mary Savigar, Sarah Dancy, Ellen MacDonald-Kramer, and Stephanie Homer from Polity, who patiently and steadily guided and supported us through the writing and production process. We would also like to acknowledge the generosity of a number of institutions that sponsored this project: the Social Sciences and Humanities Research Council of Canada (SSHRC), the Amsterdam Centre for Globalisation Studies (ACGS), the Cornell Center for Social Sciences (CCSS), the McLuhan Centre for Culture and Technology, Queensland University of Technology, the University of Amsterdam, the University of Toronto, and Cornell University. Collectively, this support made our collaboration all the more generative by allowing us to work together in person – at least, until most of the world became gripped by COVID-19. Finally, we would like to close by expressing our appreciation to the family members, friends, and colleagues who provided support, encouragement, and patience, especially as the trials of researching and writing a book intensified under the weight of a global pandemic. Heartfelt thanks, in particular, go to Emma, Jonathan, and Raphael Poell, Robert Shea Terrell, and Leslie Pilszak.
Amsterdam, Toronto, Ithaca, 2021
1 Introduction
“Big brands fund terror,” read the front page of the British daily newspaper The Times on February 9, 2017; below the arresting headline was a screengrab of an online ad that – unbeknownst to the advertiser – appeared in a YouTube video openly endorsing jihadists (Mostrous, 2017). According to The Times investigation, YouTube’s automated ad-placement system had paired promotions for consumer products and charitable organizations with videos championing radical and terrorist groups, including the Islamic State and Combat 18, a pro-Nazi faction. Several weeks later, the Guardian followed up with a report on the six-figure sums that “hate preachers” had generated from YouTube’s unwitting arsenal of ad sponsors – among them household brands like L’Oréal, Sainsbury’s, Nissan, and even the Guardian itself (Neate, 2017). Indeed, the report chronicled a kaleidoscopic range of extremist content funded through the platform: anti-Western propaganda from a Salafi Muslim preacher, videos by former Ku Klux Klan imperial wizard David Duke, and anti-LGBTQ and anti-Semitic sentiments expressed by a fundamentalist pastor.
Asked to respond to the high-profile social media scandal, Ronan Harris, a representative for YouTube’s parent company Google, offered: “We believe strongly in the freedom of speech and expression on the web – even when that means we don’t agree with the views expressed” (Neate, 2017). While Harris went on to clarify that Google’s policies prohibit “videos with hate speech, gory or offensive content” from appearing adjacent to ads, he conceded that “we don’t always get it right.” Dissatisfied with Google’s rhetorical deflection, the Guardian – along with the BBC and the UK government – subsequently pulled all advertising from the video-sharing platform.
This move was among the catalysts for the so-called 2017 “Adpocalypse” – a term invoked by YouTube creators to describe the concerted efforts of brands to boycott YouTube advertising. In total, as many as 250 brands from the US and the UK threatened to halt their digital advertising campaigns. Confronted with such collective pushback, Google quickly changed YouTube’s policies to be more “advertiser-friendly” (Kumar, 2019). Among the changes in YouTube’s governance framework was an option for advertisers to exclude broad categories of content from appearing alongside their ads. These categories ranged from the descriptive – “live-streaming video” – to the eminently subjective “sensitive social issues,” defined as “discrimination and identity relations, scandals and investigations, reproductive rights, firearms and weapons, and more” (YouTube, 2020a).
While these changes appeased advertisers – at least temporarily – they introduced considerable angst and uncertainty into the professional lives of cultural producers, in particular those creators vying with one another to earn income from the oft-elusive YouTube Partner Program. Many creators abruptly found their content “demonetized,” meaning they would receive limited or no ad revenue in exchange for audience attention (Caplan & Gillespie, 2020). Creators who provided mere commentary on “sensitive” social issues were especially susceptible to financial retribution. The same applied to creators whose content contained “strong profanity used multiple times … even if bleeped or for comedy, documentary, news, or educational purposes” (YouTube, 2020b).
In addition to demonetizing content deemed contentious, YouTube substantially raised the threshold for participation in the Partner Program: only channels with at least 1,000 subscribers that had racked up more than 4,000 public watch hours in the preceding year were allowed to participate (YouTube, 2020c). This policy update made it especially difficult for newcomers to generate income, while barring creators with smaller followings altogether. The exclusionary nature of YouTube’s advertising program was exacerbated by a new rule stating that demonetized clips were eligible for reevaluation by a human reviewer only if they had received a minimum of 1,000 views within a week (YouTube, 2020c). For context, given the mind-blowing amount of material on YouTube – 500 hours of video are uploaded every minute – content categorization and labeling take place through automated, rather than human, systems of content moderation (Covington et al., 2016; see also Kumar, 2019; Roberts, 2019).
Some of YouTube’s most visible creators publicly vocalized their indignation over the revised guidelines (Caplan & Gillespie, 2020). For instance, Philip DeFranco, Ethan Klein, and Felix “PewDiePie” Kjellberg, who run popular commentary channels catering to more than 115 million subscribers, all claimed to have lost a major