Machine Habitus. Massimo Airoldi


leaving aside the simplifications of popular media and the wishful thinking of techno-chauvinists, this is true for the most part (Broussard 2018; Sumpter 2018). Yet, many sociologists and social scientists continue to ignore algorithms and AI technologies in their research, or consider them at best a part of the supposedly inanimate material background of social life. When researchers study everyday life, consumption, social interactions, media, organizations, cultural taste or social representations, they often unknowingly observe the consequences of the opaque algorithmic processes at play in digital platforms and devices (Beer 2013a). In this book, I argue that it is time to see both people and intelligent machines as active agents in the ongoing realization of the social order, and I propose a set of conceptual tools for this purpose.

      ‘Why only now?’, one may legitimately ask. In fact, the distinction between humans and machines has been a widely debated subject in the social sciences for decades (see Cerulo 2009; Fields 1987). Strands of sociological research such as Science and Technology Studies (STS) and Actor-Network Theory (ANT) have strongly questioned mainstream sociology’s lack of attention to the technological and material aspects of social life.

      In 1985, Steve Woolgar’s article ‘Why Not a Sociology of Machines?’ appeared in the British journal Sociology. Its main thesis was that, just as a ‘sociology of science’ had appeared problematic before Kuhn’s theory of scientific paradigms but was later turned into an established field of research, intelligent machines should finally become ‘legitimate sociological objects’ (Woolgar 1985: 558). More than thirty-five years later, this is still a largely unaccomplished goal. When Woolgar’s article was published, research on AI systems was heading for a period of stagnation commonly known as the ‘AI winter’, which lasted up until the recent and ongoing hype around big-data-powered AI (Floridi 2020). According to Woolgar, the main goal of a sociology of machines was to examine the practical day-to-day activities and discourses of AI researchers. Several STS scholars have subsequently followed this direction (e.g. Seaver 2017; Neyland 2019). However, Woolgar also envisioned an alternative sociology of machines with ‘intelligent machines as the subjects of study’, adding that ‘this project will only strike us as bizarre to the extent that we are unwilling to grant human intelligence to intelligent machines’ (1985: 567). This latter option may not sound particularly bizarre today, given that a large variety of tasks requiring human intelligence are now routinely accomplished by algorithmic systems, and that computer scientists propose to study the social behaviour of autonomous machines ethologically, as if they were animals in the wild (Rahwan et al. 2019).

      With the recent emergence of a multidisciplinary scholarship on the biases and discriminations of algorithmic systems, the interplay between ‘the social’ and ‘the technical’ has become more visible than in the past. One example is the recent book by the information science scholar Safiya Umoja Noble, Algorithms of Oppression (2018), which illustrates how Google Search results tend to reproduce racial and gender stereotypes. Far from being ‘merely technical’ and, therefore, allegedly neutral, the unstable socio-technical arrangement of algorithmic systems, web content, content providers and crowds of googling users on the platform contributes to the discriminatory social representations of African Americans. According to Noble, more than neutrally mirroring the unequal culture of the United States as a historically divided country, the (socio-)technical arrangement of Google Search amplifies and reifies the commodification of black women’s bodies.

      I believe that it should be sociology’s job to explain and theorize why and under what circumstances algorithmic systems may behave this way. The theoretical toolkit of ethology mobilized by Rahwan and colleagues (2019) in a recent Nature article is probably not up to this aim, for a quite simple reason: machine learning tools are eminently social animals. They learn from the social – datafied, quantified and transformed into computationally processable information – and then they manipulate it, by drawing probabilistic relations among people, objects and information. While Rahwan et al. are right in putting forward the ‘scientific study of intelligent machines, not as engineering artefacts, but as a class of actors with particular behavioural patterns and ecology’ (2019: 477), their analytical framework focuses on ‘evolutionary’ and ‘environmental’ dimensions only, downplaying the cornerstone of anthropological and sociological explanations, that is, culture. Here I argue that, in order to understand the causes and implications of algorithmic behaviour, it is necessary to first comprehend how culture enters the code of algorithmic systems, and how it is shaped by algorithms in turn.

      A second, qualitative shift concerns the types of machines and AI technologies embedded in our digital society. The development and industrial implementation of machine learning algorithms that ‘enable computers to learn from experience’ have
