The Digital Economy. Tim Jordan

complications here and there have been developments over time, but the core practices of advertisers should by now be clear.

      The third set of intersecting activities, or third point of view, making up the economic practices of Google search are those of Google itself. These activities split between the structures set up by the company that allow it to offer services and mediate between search users and advertisers, and the implementation of those structures in the software/hardware that allows practices to be automated. This connection transformed Google and established one kind of digital economic practice as a money gusher, as noted earlier in the company’s turn from loss to profit once AdWords was implemented. To sustain this, Google’s economic practices have a dual character, with a never-ending process of improving search alongside never-ending developments in advertising.

      We have followed a single search from the point of view of the individual searcher, but from Google’s point of view things appear differently. Instead of the individual who searches, Google has first to see the collective and its social relations, which it can read to judge what search results to deliver. From this point of view, a search query is the last step of a search enquiry; it is the work leading up to the delivery of certain results in a certain order that determines whether a search engine will be good or bad. This also highlights a recurrent frustration in trying to follow digital economic practices, as the algorithms and programs that fuel search engines are generally industry (or government) secrets. In the case of Google, however, the broad principles are known because its theoretical foundation, the PageRank algorithm, is publicly available (Page et al. 1999).

      To fully grasp the significance of this use of the World Wide Web we need to remember that what Google were (and are) reading through PageRank is a collectively created store of information to which anyone with access to the internet can add on topics of their choosing, including linking as website creators feel is appropriate. The WWW is created by following a set of formal standards that define how information has to be formed and loaded onto a networked computer for it to be visible to other sites (as will be discussed further in Chapter 5). Once a website is visible, other sites can link to that site just as anyone can link to their sites. The standards were released to be freely available and are maintained by a not-for-profit consortium. Much of the content was created freely by ordinary users with internet access and computing resources, though over time corporate and government sites run by paid employees have played a greater role. The WWW is then a collective creation formed of a series of groups that link to each other because they choose to do so in order to ensure that relevant information is connected and available. Although it was heavily commercialised once it became popular, the WWW preceded the birth of Google, and remains a space in which groups of people with similar interests can generate and share information resources (Berners-Lee 2000; Gillies and Cailliau 2000).

      PageRank was a means of reading these linked groups and their social relations. Once PageRank had read, for example, sites devoted to surfing, it had evidence of the most important sites based on those who loved surfing and had created sites on the subject, including what those people thought were the most important sites and topics. This was the key work done by the initial Google search engine, work that could be drawn on when someone made a surf-related search query. In this sense, any search query comes last in the practices of answering it, after the work has been done to read the relevant topics represented on the WWW.
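      The reading of links that the paragraph above describes can be sketched in code. The following is a minimal illustration of the PageRank principle from Page et al. (1999): each site passes a share of its importance to the sites it links to, repeated until the scores settle. The link graph of surfing sites, the damping factor of 0.85, and the iteration count are illustrative assumptions, not Google's production implementation.

```python
# A minimal sketch of the PageRank idea: rank flows along links.
# The graph, damping factor and iteration count are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        # Every page keeps a small base share (the "teleport" part).
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links shares its rank with everyone.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Otherwise its rank is split evenly among the pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Three hypothetical surfing sites: two of them link to "surfline",
# so the community's linking choices make it the top result.
web = {
    "surfline": ["bigwave"],
    "bigwave": ["surfline"],
    "localbreak": ["surfline"],
}
scores = pagerank(web)
print(max(scores, key=scores.get))  # prints "surfline"
```

      The point the sketch makes concrete is that the ranking is derived entirely from the community's own linking decisions: the query itself plays no part in computing these scores, which is why it "comes last" in the practices of answering it.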

      One of the best-known early additions to PageRank was the Random Surfer Model, which injected, as its name implies, randomness by assuming that at certain points anyone following web links would randomly jump to some other link. Further improvements were made, some in response to attempts to game the system and others to improve search results. For example, the Hilltop algorithm aims to divide the Web up into thematic sections and then judge whether a site has links to it from experts who are not connected to that site. If many independent experts link to the site, then it is deemed an authority in its thematic area and can be used to judge the importance of other sites. Hilltop thus builds on citation practices while developing them in a specific direction. This algorithm was initially developed independently of Google and was bought by the company to be integrated into its own set of tools. There are no doubt many other adjustments and wholly new algorithms integrated into PageRank, and because of trade secrecy there will be more than we know about. But these examples are enough to establish the basic principle that, however it is implemented, Google’s successful search – successful both in terms of delivering useful results and in terms of popularity – derives from reading the creations of the pre-existing community of the World Wide Web (Turow 2011: 64–8; Vaidhyanathan 2012: 60–4; Hillis et al. 2012).
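      The Hilltop principle described above – that a site becomes an authority when independent experts link to it – can also be given a rough sketch. Everything here is an illustrative assumption: the expert pages, the use of hostnames as the test of independence, and the threshold of two independent experts are inventions for the example, not the actual Hilltop implementation.

```python
# A rough sketch of the Hilltop idea: a page counts as an authority when
# enough *independent* expert pages link to it. Hostnames stand in for
# independence, and the threshold is an assumption for illustration.

def host(url):
    """Treat everything before the first '/' as the site's host."""
    return url.split("/")[0]

def is_authority(page, expert_links, min_independent=2):
    """expert_links: dict mapping an expert page's URL to the set of URLs it links to.
    An expert only counts if it sits on a different host from the page itself,
    and experts sharing a host count once (they are not independent)."""
    expert_hosts = set()
    for expert, targets in expert_links.items():
        if page in targets and host(expert) != host(page):
            expert_hosts.add(host(expert))
    return len(expert_hosts) >= min_independent

# Hypothetical expert pages on surfing. The page's own blog links to it too,
# but that link is discounted because it is not independent of the site.
experts = {
    "surfmag.example/links": {"surfline.example/home"},
    "wavereview.example/best": {"surfline.example/home"},
    "surfline.example/blog": {"surfline.example/home"},  # same host, ignored
}
print(is_authority("surfline.example/home", experts))  # prints True
```

      The design point carried over from the passage above is the discounting of connected experts: links a site arranges for itself do not confer authority, only endorsements from unconnected members of the thematic community do.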