low-profile products. Some spambots function like crawlers, trawling the internet in search of accessible comments sections to load up with spam, or scraping webpages for email addresses to target with spam emails. Examples of these crawler-type email scrapers include ActiveAgent and RoverBot, discussed in the previous section (Hayati et al., 2009; Leonard, 1997b, pp. 140–148). Other spambots target social media sites, overloading users with malicious links or product promotion (Keelan et al., 2010). While spam is normally aimed at making money rather than disseminating political messages, networks of social spambots can be reappropriated for political messaging with the flip of a switch (Monaco, 2019a; Thomas et al., 2012).
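      To illustrate the scraping mechanism, here is a minimal sketch of how a crawler-type scraper might operate: fetch a page, pull out anything shaped like an email address with a regular expression, and collect the page’s links to visit next. The seed URL and the scrape_page helper are hypothetical illustrations; real crawlers are considerably more elaborate.

      import re
      from urllib.parse import urljoin
      from urllib.request import urlopen

      # Naive patterns for email addresses and hyperlinks; real crawlers use
      # proper HTML parsers and more robust heuristics.
      EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
      LINK_RE = re.compile(r'href="([^"]+)"')

      def scrape_page(url):
          """Fetch one page; return the email addresses found and the links to crawl next."""
          html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
          emails = set(EMAIL_RE.findall(html))
          links = {urljoin(url, href) for href in LINK_RE.findall(html)}
          return emails, links

      # Hypothetical seed URL; a crawler would loop over the returned links,
      # building an ever-larger list of pages to visit and addresses it has found.
      emails, links = scrape_page("https://example.com")
      print(emails, links)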

      Since cyborgs are partially controlled by humans, they leave different, less predictable activity signatures than normal, fully automated bots. For this reason, they are often able to slip through social media companies’ cybersecurity and bot detection algorithms. In the past few years, they have become increasingly common as a tool for political messaging (Woolley, 2020a, p. 85); for example, during the 2019 US Democratic presidential primary debates, one cyborg called the YangGang RT bot retweeted mentions of candidate Andrew Yang (Monaco, 2019b). Another recent form of cyborg political activism and campaigning is the “Volunteer botnet” – the willing temporary donation of one’s social media account to be used as a bot for political campaigning (Woolley & Monaco, 2020). We’ll cover cyborgs in greater depth in our chapter on political bots.

      Automated agents often work in concert with one another in “botnets” (short for “bot networks”) – networks of computer programs that work together to accomplish the same goal. The networked bots’ functions need not be identical: often, the bots in a network perform complementary functions (Cresci, 2020). For example, imagine a small network of bots that promote the hashtag #TacoTuesday on Twitter. The network might have 100 bots split evenly into seeders and promoters: the 50 seeder bots send out pre-composed tweets that include the hashtag #TacoTuesday, while the remaining 50 promoter bots retweet and like posts from the seeders. None of the 100 bots needs to follow any other in order to be considered part of a botnet – they only need to be working toward the same goal. Because all 100 bots share the common goal of promoting #TacoTuesday, they constitute a botnet.
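      To make the division of labor concrete, here is a minimal sketch of that coordination in Python. The BotClient class and its post and retweet methods are hypothetical stand-ins that simply print to the console rather than calling any real platform; the seeder/promoter split is the point of the example.

      from dataclasses import dataclass

      @dataclass
      class BotClient:
          """Hypothetical stand-in for a social media client; it only prints."""
          handle: str

          def post(self, text):
              print(f"{self.handle} posts: {text}")
              return f"tweet-by-{self.handle}"  # pretend tweet ID

          def retweet(self, tweet_id):
              print(f"{self.handle} retweets {tweet_id}")

      # 100 accounts split evenly into seeders and promoters, as in the example above.
      accounts = [BotClient(f"bot_{i:03d}") for i in range(100)]
      seeders, promoters = accounts[:50], accounts[50:]

      # Seeders send pre-composed posts containing the hashtag...
      tweet_ids = [bot.post("Best tacos in town! #TacoTuesday") for bot in seeders]

      # ...and promoters amplify them. The bots never follow one another;
      # a shared goal alone is what makes the group a botnet.
      for promoter, tweet_id in zip(promoters, tweet_ids):
          promoter.retweet(tweet_id)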

      DDoS (distributed denial-of-service) attacks work by vastly overloading a website, driving so much traffic to it that its infrastructure collapses – imagine 10,000 cars all trying to take a one-lane highway exit at once, or a lecture hall of 1,000 students all asking the professor a question at the exact same time. These attacks have grown steadily larger, directing ever-greater volumes of traffic at their targets via botnets, because there is an enormous and growing pool of devices available for compromise: the rapidly growing Internet of Things (IoT). IoT is a term used to describe internet-connected devices that we may not traditionally think of as computers – DVD players, refrigerators, smart doorbells, washing machines, TVs, cars, drones, baby monitors, and so on. Because these mundane internet-connected household appliances are rarely designed with cybersecurity in mind, they are far too easy to compromise and conscript into botnets. For example, in 2016, the Mirai botnet used over 400,000 internet-connected devices to bring down servers at the French web hosting provider OVH and the DNS provider Dyn. The attack on Dyn disrupted several popular websites, including Amazon, Netflix, the New York Times, and Twitter. (Most of the compromised devices were hacked using a list of just 62 default usernames and passwords commonly used on IoT devices (United States Cybersecurity & Infrastructure Security Agency, 2016).)
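      A back-of-envelope calculation shows why so many small devices add up. The per-device rate below is an assumed, illustrative figure rather than a measurement from the Mirai attacks; the point is simply that aggregate traffic scales with the size of the botnet.

      # Back-of-envelope illustration using assumed, hypothetical figures.
      devices = 400_000        # compromised IoT devices, as in the Mirai example
      mbps_per_device = 2.5    # assumed modest upload rate per device (Mbit/s)

      aggregate_gbps = devices * mbps_per_device / 1000
      print(f"Aggregate flood: {aggregate_gbps:,.0f} Gbit/s")  # about 1,000 Gbit/s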

      We highlight these misuses and ambiguities in order to help the reader clearly understand what the term “bot” may mean when encountered in the wild. In this book, when we use the term bot, we will always be referring to a program that is partially or fully automated.

      Finally, there is a range of bot characteristics that can be used to describe a bot’s behavior or evaluate its intentions (Maus, 2017):

       Transparency – does the bot clearly state that it is an automated agent, or does it attempt to hide its automation, passing itself off as human?

       Degree of automation – is the bot automated all of the time? Do some of its actions only occur with human intervention? Can a human operate the bot while it is also performing other operations autonomously? (These questions all relate to the relative “cyborg-ness” of the bot.)

       Coordination with other bots – does this bot operate as part of a botnet or with other deceptive human users?

       Interaction and passivity – does this bot interact or engage with human users in any way (likes, retweets, shares, conversation, etc.)? Are other users aware that the bot is present in the online environment? Or does it silently surveil or collect data on other users or websites?

       Intent – what is the goal of this
