Bots. Nick Monaco

first-timers believed they were talking to a human on the other end of the terminal (Leonard, 1997, p. 52). Even after users were told that they were talking to a computer program, many simply refused to believe they weren’t talking to a human (Deryugina, 2010). At the first public demonstration of the early internet (the ARPANET) in 1971, people lined up at computer terminals for a chance to talk to ELIZA.

      ELIZA captured people’s minds and imaginations. When Weizenbaum first tested out ELIZA on his secretary, she famously asked him to leave the room so they could have a more private conversation (Hall, 2019). Weizenbaum, who had originally designed the bot to show how superficial human–computer interactions were, was dismayed by the paradoxical effect.

      This response was noteworthy enough to be dubbed the “ELIZA effect,” the tendency of humans to ascribe emotions or humanity to mechanical or electronic agents with which they interact (Hofstadter, 1995, p. 157).

      Other early bots did not have the glamor of ELIZA. For most of the 1970s and 1980s, bots largely played mundane but critical infrastructural roles in the first online environments. Bots are often cast in this “infrastructural” role,3 serving as the connective tissue in human–computer interaction (HCI). In these roles, bots often serve as an invisible intermediary between humans and computers that make everyday tasks easier. They do the boring stuff – keeping background processes running or chatrooms open – so we don’t have to. They are also used to make sense out of unordered, unmappable, or decentralized networks. As bots move through unmapped networks, taking notes along the way, they build a map (and therefore an understanding) of ever-evolving networks like the internet.
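The mapping role described above is essentially a graph traversal: the bot follows links outward from a starting point, noting each node it reaches. A minimal sketch in Python, using a breadth-first walk over a toy link graph (the `network` dictionary stands in for a real, unmapped network; all names here are illustrative, not from the source):

```python
from collections import deque

def crawl(network, start):
    """Breadth-first walk of a link graph, recording every node reached.

    `network` maps each node to the list of nodes it links to; the
    returned list is the order in which the bot "discovered" the network.
    """
    seen = {start}
    queue = deque([start])
    visited_order = []
    while queue:
        node = queue.popleft()
        visited_order.append(node)
        for neighbor in network.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return visited_order

# A tiny "unmapped" network of three pages linking to one another.
toy_network = {"a": ["b", "c"], "b": ["c"], "c": []}
```

The visited order itself is the map: after the walk, the bot knows every reachable node and how it got there, which is the core of how crawlers build an understanding of networks like the internet.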

      This environment led to the creation of some of the first online bots: automated programs that helped maintain and moderate Usenet. As Andrew Leonard describes, “Usenet’s first proto-bots were maintenance tools necessary to keep Usenet running smoothly. They were cyborg extensions for human administrators” (Leonard, 1997, p. 157). Especially in its early days, bots primarily played two roles: one was posting, the other was removing content (or “canceling,” as it was often called on Usenet) (Leonard, 1996). Indeed, Usenet’s “cancelbots” were arguably the first political bots. Cancelbots were a Usenet feature that enabled users to delete their own posts. If a user decided they wanted to retract something they had posted, they could flag the post with a cancelbot, a simple program that would send a message to all Usenet servers to remove the content. Richard Depew wrote the first Usenet cancelbot, known as ARMM (“Automated Retroactive Minimal Moderation”) (Leonard, 1997, p. 161).
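The "message to all Usenet servers" mentioned here was a special control article: under the conventions later codified in RFC 1036, an article carrying a `Control: cancel <message-id>` header instructed every server that received it to delete the referenced post. A minimal sketch of building such an article, assuming those conventions (the sender, group, and message-ID below are hypothetical, and real cancelbots handled far more detail):

```python
def make_cancel_article(sender, newsgroups, target_message_id):
    """Build the text of a Usenet cancel control article.

    The Control header is what tells servers to remove the post
    identified by target_message_id; the Subject line is a human-
    readable convention, not what triggers the cancellation.
    """
    headers = [
        f"From: {sender}",
        f"Newsgroups: {newsgroups}",
        f"Subject: cmsg cancel {target_message_id}",
        f"Control: cancel {target_message_id}",
    ]
    body = "This article was canceled by its author."
    return "\r\n".join(headers) + "\r\n\r\n" + body

article = make_cancel_article(
    "user@example.com", "news.misc", "<1234@example.com>"
)
```

Because cancellation hinged on a single header rather than any proof of authorship, the same mechanism that let users retract their own posts could be scripted into bots like ARMM that canceled other people's content wholesale.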

      Another prolific account on Usenet, Serdar Argic, posted political screeds on dozens of different newsgroups with astonishing frequency and volume. These posts cast doubt on Turkey’s role in the Armenian Genocide in the early twentieth century, and criticized Armenian users. Usenet enthusiasts still debate today whether Argic’s posts were actually automated or not, but the account’s high-volume posting and apparent canned responses to keywords such as “Turkey” in any context (even on posts referring to the food) seem to point toward automation.

      Over time, more advanced social Usenet bots began to emerge. One of these was Mark V. Shaney, a bot designed by two Bell Laboratories researchers that made its own posts and conversed with human users. Shaney used Markov chains, a probabilistic language-generation technique that strings sentences together by choosing each next word according to how likely it is to follow the words before it. The name Mark V. Shaney was actually a pun on the term Markov chain (Leonard, 1997, p. 49). The Markov chain probabilistic technique is still widely used today in modern natural language processing (NLP) applications (Jurafsky & Martin, 2018, pp. 157–160; Markov, 1913).
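A first-order Markov chain text generator of the kind described here can be sketched in a few lines of Python. This is an illustrative reconstruction, not Shaney's actual code: the function names and the training text are invented for the example. The chain records, for every word in a corpus, which words were observed to follow it; generation is then a random walk over those observed successors.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, sampling each next word from
    the successors observed in the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
```

Because successors are stored with repetition, frequent word pairs are sampled proportionally more often, which is exactly the "most likely to follow" behavior the text describes. Shaney's output was famously fluent-sounding nonsense for the same reason: each transition is locally plausible even when the whole sentence is not.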
