Instagram. Tama Leaver
Political Interference in the US
In the wake of the 2016 Presidential elections in the US, it became apparent that both Facebook and Twitter had been utilized by Russian-based groups to manipulate public sentiment about the American elections, both through ordinary posts and through paid advertising. The most notable of the groups investigated and named was the Russian-based Internet Research Agency (IRA). Perhaps less well known is the fact that Instagram was also an avenue through which the IRA delivered political messaging and electoral interference. Moreover, far from ending with President Trump’s election, research released in 2018 showed not only that Instagram continued to be a conduit for political manipulation, but that after the interference on Facebook and Twitter was made public, the focus of the IRA’s activity shifted specifically, and in largest proportion, to Instagram (DiResta et al. 2018; Howard et al. 2018).
What was most notable about the IRA Instagram accounts is that they did not begin as political accounts, but rather built a distinct and reliable network through what appeared to be legitimate content, before periodically integrating explicitly political material which usually favoured then-candidate Donald Trump and denounced Democratic candidate Hillary Clinton. Of the 133 IRA Instagram accounts mapped in a New Knowledge report (DiResta et al. 2018), the largest, @blackstagram__, had over 300,000 followers, with its content receiving more than 27 million likes. The aim of this account appears to have been to sow distrust and discord amongst black communities and convince them that voting for any candidate was a waste of their time. Moreover, many of the most effective Instagram posts were memes of various kinds, some recognizable, some using the images of the presidential candidates, but all clearly conveying a political payload. Some IRA accounts went as far as to sell merchandise, both as a fundraising effort for their own messaging campaigns and, crucially, as a means of gathering clearer personal data (name, exact address, credit card details) along with a clear marker of political leaning, as the purchase of political merchandise is a very direct indication of political allegiance.
In 2017, when Facebook admitted it had detected and shut down a raft of IRA accounts on Facebook, it also, more quietly, acknowledged that a further 170 IRA Instagram accounts had just been detected and removed (Isaac & Wakabayashi 2017). Notably, while smaller in overall volume, the IRA posts on Instagram appear to have been the most effective, on average provoking the largest measurable reactions in terms of likes and comments (DiResta et al. 2018; Howard et al. 2018). For Instagram, Facebook and all large online platforms, the question of political manipulation remains an ongoing and important challenge. Balancing the Silicon Valley ideology underpinning Facebook and Instagram with the realities of operating across the globe, while attempting not to upset countries, citizens and political systems with differing, at times divergent, needs, is no small issue. Indeed, Facebook commented that most IRA content on its platforms, including on Instagram, did not obviously violate any of its policies, which is why the content persisted (Isaac & Wakabayashi 2017). Whether this material violated US laws, or whether new laws will come into place in the US or elsewhere, is also a topic of considerable debate.
It is clear and undeniable that Instagram is a space for political discussion, political debate and, to date at least, political manipulation. The fact that the IRA went to so much effort to utilize Instagram shows their belief in the value of the social ties that exist between Instagram users. Targeting Instagram confirms that Instagram matters as a realm of taste, politics and cultural knowledge, something explored in more detail in chapter 5. How Instagram, Facebook and others respond to the more explicit political uses, and misuses, of their platforms may well colour how much trust, and use, these platforms enjoy in years to come.
Molly Russell’s Suicide and Self-Harm Images on Instagram
As noted earlier, for several years, Instagram’s Community Guidelines explicitly banned a range of content, stating ‘any account found encouraging or urging users to … cut, harm themselves, or commit suicide will result in a disabled account without warning’ (quoted in Cobb 2017). Research on the impact of Instagram and other social media on mental health, self-harm and youth suicide emphasizes social media as an amplifier, but whether this is positive or negative depends as much on the user and context as anything else (Seabrook, Kern & Rickard 2016). Notably, even amongst calls to ban any mention of suicide or self-harm on large social media platforms, there is evidence that these are very effective platforms for suicide prevention messaging, and while depression, anxiety and social media are linked at times, there is no clear causation from one to the other (Robinson, Bailey & Byrne 2017).
In 2017, British teenager Molly Russell tragically took her own life. In early 2019, Molly Russell’s father very publicly and articulately argued that Instagram ‘helped kill my daughter’ after it was found she had been following a number of accounts that displayed and romanticized self-harm and suicide (Crawford 2019). In the wake of Molly Russell’s death, UK Health Secretary Matt Hancock very publicly called for social media companies to do more to remove self-harm and suicide content, linking increasing levels of teen self-harm and suicide with the rise of social media (Savage 2019). Instagram was singled out in particular as not doing enough to police the content children could access. After initially responding with a piece in The Independent promising to do more (Mosseri 2019a), new Instagram head Adam Mosseri had to elevate the platform’s response further in the following days, promising that the platform will ‘not allow any graphic images of self-harm, such as cutting on Instagram’, and ‘will continue to remove it when reported’, even if this new commitment still relies on users reporting this content before it can be found and removed (Mosseri 2019b). At the same time, parent company Facebook made public statements reiterating that its approach to self-harm and suicide images was informed by experts across the globe, and that the balance between removing such content and allowing inspirational stories of overcoming these issues was a difficult one, as many of the latter images are shown to help and support people suffering from mental health issues (Davis 2019). To try and ensure Instagram’s efforts were taken seriously, Mosseri met with the UK Health Secretary in an attempt to defuse the clearly political desire to more heavily regulate Instagram and other social media platforms (Lomas 2019). Yet whether Instagram and Facebook can meaningfully police or deal with self-harm content remains to be seen.
Instagram by Facebook
In early 2019, Mark Zuckerberg’s plans to completely redesign the back end of his core properties Messenger, Facebook, Instagram and WhatsApp to integrate messaging across all four (Hern 2019) gave the clearest sign that the 2012 commitment to keep Instagram and Facebook functionally separate was all but forgotten. While Systrom and Krieger fought