Machine Habitus. Massimo Airoldi

      With supercomputers making their appearance in companies and universities, the automated processing of information became increasingly embedded into the mechanisms of post-war capitalism. Finance was one of the first civilian industries to systematically exploit technological innovations in computing and telecommunications, as in the case of the London Stock Exchange described by Pardo-Guerra (2010). From 1955 onwards, the introduction of mechanical and digital technologies transformed financial trading into a mainly automated practice, sharply different from the ‘face-to-face dealings on the floor’ that had been the norm up to that point.

      From the late 1970s, the development of microprocessors and the subsequent commercialization of personal computers fostered the popularization of computer programming. By entering people’s lives at work and at home – through videogames, word processors, statistical software and the like – computer algorithms were no longer the preserve of a few scientists working for governments, large companies and universities (Campbell-Kelly et al. 2013). The digital storage of information, as well as its grassroots creation and circulation through novel Internet-based channels (e.g. emails, Internet Relay Chats, discussion forums), translated into the availability of novel data sources. The automated processing of large volumes of such ‘user-generated data’ for commercial purposes, inaugurated by the development of the Google search engine in the late 1990s, marked the transition toward a third era of algorithmic applications.

      Furthermore, while in the Digital Era algorithms were commercially used mainly for analytical purposes, in the Platform Era they also became ‘operational’ devices (A. Mackenzie 2018). Logistic regressions such as those run in SPSS by statisticians in the 1980s could now be operationally embedded in a platform infrastructure and fed with thousands of data ‘features’ in order to autonomously filter the content presented to individual users based on adaptable, high-dimensional models (Rieder 2020). The computational implications of this shift have been described by Adrian Mackenzie as follows:

      if conventional statistical regression models typically worked with 10 different variables […] and perhaps sample sizes of thousands, data mining and predictive analytics today typically work with hundreds and in some cases tens of thousands of variables and sample sizes of millions or billions. The difference between classical statistics, which often sought to explain associations between variables, and machine learning, which seeks to explore high-dimensional patterns, arises because vector spaces juxtapose almost any number of features. (Mackenzie 2015: 434)
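      To make this contrast concrete, the sketch below (written in Python with scikit-learn, and not drawn from Airoldi or Mackenzie) juxtaposes the two uses of logistic regression: a small ‘analytical’ fit whose handful of coefficients are meant to be interpreted, and a high-dimensional ‘operational’ model that is updated batch by batch and then queried to filter what a single user sees. All data is synthetic, and the variable and sample counts are hypothetical, scaled down so the example runs quickly.

# Illustrative sketch only: contrasting an 'analytical' with an 'operational'
# use of logistic regression. Synthetic data; feature counts are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression, SGDClassifier

rng = np.random.default_rng(0)

# 'Analytical' use (Digital Era): ~10 variables, a few thousand cases,
# a single fit whose coefficients are read off and interpreted.
X_small = rng.normal(size=(2_000, 10))
y_small = (X_small[:, 0] + 0.5 * X_small[:, 1] + rng.normal(size=2_000) > 0).astype(int)
analytical = LogisticRegression().fit(X_small, y_small)
print("coefficients to interpret:", np.round(analytical.coef_, 2))

# 'Operational' use (Platform Era): thousands of behavioural features, a model
# updated as new interaction data arrives and queried at serving time.
# (loss="log_loss" requires scikit-learn >= 1.1; older versions call it "log".)
n_features = 2_000
true_w = rng.normal(size=n_features)
operational = SGDClassifier(loss="log_loss", random_state=0)

for _ in range(20):  # successive batches of simulated user interactions
    X_batch = rng.normal(size=(1_000, n_features))
    y_batch = (X_batch @ true_w > 0).astype(int)
    operational.partial_fit(X_batch, y_batch, classes=[0, 1])

# Serving step: score candidate items for one user, keep only the top-ranked.
candidates = rng.normal(size=(100, n_features))
scores = operational.predict_proba(candidates)[:, 1]
feed = np.argsort(scores)[::-1][:10]
print("items filtered into the user's feed:", feed)

      The estimator is the same in both halves; what changes is where it sits. In the second half it is wired into a serving loop, continuously refreshed and used to decide what appears in a feed rather than to explain an association between variables.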
