Machine Habitus. Massimo Airoldi

self-driving cars and recommendation systems, and have enabled the recent expansion of fields such as pattern recognition, machine translation or image generation. In 2015, an AI system developed by the Google-owned company DeepMind was the first to win against a professional player at the complex game of Go. On the one hand, this landmark was a matter of increased computing power. On the other hand, it was possible thanks to the aforementioned qualitative shift from a top-down artificial reasoning based on ‘symbolic deduction’ to a bottom-up ‘statistical induction’ (Pasquinelli 2017). AlphaGo – the machine’s name – learned how to play the ancient board game largely on its own, by ‘attempting to match the moves of expert players from recorded games’ (Chen 2016: 6). Far from mechanically executing tasks, current AI technologies can learn from (datafied) experience, a bit like human babies. And as with human babies, once thrown into the world, these machine learning systems are no less than social agents, who shape society and are shaped by it in turn.
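To make the contrast between the two paradigms concrete, here is a minimal sketch in Python – my own illustration, not from Airoldi's text, with all names, data and thresholds invented – of a ‘symbolic’ rule authored top-down by a programmer next to a ‘statistical’ rule induced bottom-up from labelled examples:

```python
# Illustrative sketch only: top-down 'symbolic deduction' versus
# bottom-up 'statistical induction'. All data and thresholds are invented.

# Symbolic deduction: the decision rule is fixed in advance by its designer.
def symbolic_rule(score: float) -> bool:
    return score > 5.0  # threshold decreed a priori, not learned

# Statistical induction: the rule is estimated from (datafied) experience.
def induce_threshold(examples: list[tuple[float, bool]]) -> float:
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    # Place the decision boundary midway between the two class averages.
    return (sum(positives) / len(positives) +
            sum(negatives) / len(negatives)) / 2

# Toy 'recorded games': past cases with known outcomes.
training_data = [(1.0, False), (2.5, False), (7.0, True), (9.0, True)]

threshold = induce_threshold(training_data)
print(symbolic_rule(6.0))   # True: the hand-written rule fires
print(6.0 > threshold)      # True: boundary (4.875) inferred from the data
```

The point of the sketch is only that the second rule's boundary comes from the examples themselves, much as AlphaGo's play came from recorded games rather than from hand-coded strategies.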

      A considerable amount of research has also asked how and to what extent the output of algorithmic computations – automated recommendations, micro-targeted ads, search results, risk predictions, etc. – controls and influences citizens, workers and consumers. Many critical scholars have argued that the widespread delegation of human choices to opaque algorithms results in a limitation of human freedom and agency (e.g. Pasquale 2015; Mackenzie 2006; Ananny 2016; Beer 2013a, 2017; Ziewitz 2016; Just and Latzer 2017). Building on the work of Lash (2007) and Thrift (2005), the sociologist David Beer (2009) suggested that online algorithms not only mediate but also ‘constitute’ reality, becoming a sort of ‘technological unconscious’, an invisible force orienting Internet users’ everyday lives. Other contributions have similarly portrayed algorithms as powerful ‘engines of order’ (Rieder 2020); Taina Bucher (2012a, 2018), for instance, has shown how Facebook ‘programmes’ social life. Scholars have examined the effects of algorithmic ‘governance’ (Ziewitz 2016) in a number of research contexts, by investigating computational forms of racial discrimination (Noble 2018; Benjamin 2019), policy algorithms and predictive risk models (Eubanks 2018; Christin 2020), as well as ‘filter bubbles’ on social media (Pariser 2011; see also Bruns 2019). The political, ethical and legal implications of algorithmic power have been discussed from multiple disciplinary angles, and with varying degrees of techno-pessimism (see for instance Beer 2017; Floridi et al. 2018; Ananny 2016; Crawford et al. 2019; Campolo and Crawford 2020).

      The notion of ‘feedback loop’ is widely used in biology, engineering and, increasingly, in popular culture: if the outputs of a technical system are routed back as inputs, the system ‘feeds back’ into itself. Norbert Wiener – the founder of cybernetics – defines feedback as ‘the property of being able to adjust future conduct by past performance’ (1989: 33). According to Wiener, feedback mechanisms based on the measurement of performance make learning possible, both in the animal world and in the technical world of machines – even when these are as simple as an elevator (1989: 24). This intuition turned out to be crucial for the subsequent
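Wiener's definition lends itself to a compact illustration. Below is a minimal Python sketch – invented for this purpose, not drawn from the text, with made-up numbers – of a feedback loop in the spirit of his elevator example: the system measures its own past performance and routes that measurement back as the input to its next action.

```python
# Minimal illustrative feedback loop (invented example): a controller that
# 'adjusts future conduct by past performance', in Wiener's sense.

def feedback_step(position: float, target: float, gain: float = 0.5) -> float:
    error = target - position        # measure past performance (the output)
    return position + gain * error   # feed it back as input to the next move

# Toy 'elevator' rising toward the third floor.
position = 0.0
for step in range(8):
    position = feedback_step(position, target=3.0)
    print(f"step {step}: position = {position:.3f}")

# Each correction is driven by the measured error, so the position converges
# on the target without any pre-scripted trajectory: the output loops back
# into the system as its next input.
```

Nothing in the loop 'knows' the route in advance; the behaviour emerges from repeatedly comparing outcome with goal, which is exactly the property Wiener identifies as the precondition of learning.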
