Controversy Mapping. Tommaso Venturini


be read optimistically as an increase in the democratization of science and technology, even when it comes with annoying side effects. Reflecting on the raging debates about vaccination, danah boyd notices that, despite their unfortunate consequences for public health, the arguments of those who oppose the vaccines still reflect a proactive attitude:

      The more that the media focused on waving away these networks of parents through scientific language, the more the public felt sympathetic to the arguments being made by anti-vaxxers. Keep in mind that anti-vaxxers aren’t arguing that vaccinations definitively cause autism. They are arguing that we don’t know. They are arguing that experts are forcing children to be vaccinated against their will, which sounds like oppression. What they want is choice – the choice to not vaccinate. And they want information about the risks of vaccination, which they feel are not being given to them. In essence, they are doing what we taught them to do: questioning information sources and raising doubts about the incentives of those who are pushing a single message. (boyd, 2017)

      The bad news is that controversies within science and technology could also derive from the growing efforts of lobbies and interest groups to stall political action through the artificial production of uncertainty and hence a deliberate pollution of public debate. First employed by the tobacco industry to cast doubt on the connection between smoking and cancer (Oreskes & Conway, 2010), this strategy of skepticism is now employed on issues like climate change, acid rain and ozone depletion – often by the same organizations (Proctor & Schiebinger, 2008). This type of strategic skepticism, which is also used by foreign intelligence agencies, amplifies disagreements among experts, making the debate opaque and undermining public trust in institutions (Asmolov, 2018; Bennett & Livingston, 2018).

      The investigation of controversies, however, is not only justified by the fact that they are increasingly difficult to ignore. Inconvenient as they may be, controversies are also excellent occasions to learn about the role of technoscience in social life. In the words of Bruno Latour:

      I have stopped, in the engineering school where I teach, to give a social science class:

      I only ask the young engineers to follow for one year, in real time, a scientific or technical controversy … They learn more science – meaning research – and it just happens that, without even noticing it, they learn also more law, economics, sociology, ethics, psychology, science policy and so on, since all those features are associated with the piece of science they have chosen to follow. (“From the two cultures debate to cosmopolitics” contribution to a special symposium in Zeit, available online at www.bruno-latour.fr).

      In the following pages, we will discuss three incentives for embarking on controversy mapping: (1) controversies allow the observation of scientific paradigms and technological infrastructures in the making; (2) they reveal the intended and unintended consequences of these paradigms and infrastructures; and (3) they help in taking more inclusive and reflexive decisions about these consequences.

      The most immediate reason for engaging with controversies is that they provide a vantage point from which to observe science and technology in action. One of the main challenges facing the study of technoscience is the self-evidence which shrouds its object of study ex post facto. Most of the time, we simply appropriate scientific facts and technological artifacts without caring about how they work or how they were made. This opacity is obviously convenient. Life would be very complicated if we had to think about Archimedean fluid mechanics and Roman aqueduct building every time we poured a glass of water. For every theorem that we learn to prove in school, there are thousands of others that we just believe or use without even knowing it. Most of the time, all we need to understand about a technology is what it takes as input and what it generates as output.

      The downside of this “black boxing” is that scholars interested in the social study of science and technology are faced with the problem that their objects of study are opaque in the sense that they offer few clues about the circumstances under which they came to be. Harry Collins (1975) illustrates this difficulty through the metaphor of ship models in glass bottles. If we were only able to observe the models once inside their bottles, we would have a hard time imagining how they passed through the bottleneck and might thus conclude that these ships had always existed within their bottled universes:

      It is as though epistemologists are concerned with the characteristics of ships (knowledge) in bottles (validity) while living in a world where all ships are already in bottles with the glue dried and the strings cut. A ship within a bottle is a natural object in this world, and because there is no way to reverse the process, it is not easy to accept that the ship was ever just a bundle of sticks. (Collins, 1975, p. 205)

      “Black boxing” refers to the process by which technoscientific work is made invisible by its own success. When something works – a machine, a technology, a theory, a method – it is no longer questioned. Or rather: the fact that it is no longer questioned is the hallmark of its success. The construction is complete, the builders have left the site, and the ultimate testimony to their skill is to make us forget they were ever there. All of which is perfectly fine, as long as we just want to profit from the utility of technoscientific black boxes, but what if we are interested in knowing how they were put together or, even more poignantly, what if we need to know it?

      Something similar is true in science. In order to convince their peers, scientists are forced to make the details of their work explicit. They must publish their datasets, spell out their methods, describe their samples, document their algorithms, cite their predecessors, acknowledge their assumptions, declare their sources of funding. Consider how much we, as outsiders, have been able to learn about GMOs because opponents of those technologies have nitpicked every risk involved, forcing transparency on every detail of transgenesis. Consider how much we have learned about the physics of nuclear fission or the technology of reactors every time a proposal for a new plant has been contested or a referendum on nuclear energy has been announced (Nelkin, 1971; Callon et al., 2009). Or consider how intricate epidemiological discussions about herd immunity or curve flattening have become public knowledge during the Covid-19 pandemic (Munk, 2020).

      When we said that we did not have to bother about hydraulics when pouring a glass of water, we were of course excluding cases where disputes about water management (and more recently shale gas extraction) force us to consider the delicate mechanisms connecting aquifers to our kitchen tap. When everything goes well, technoscientific constructions are invisible, so to observe their inner workings, we need to seek out the situations where these constructions fail and controversies become the “can-openers” (Law, 2010) of science and technology:

      Most of the time, most of us take our technologies for granted. These work more or less adequately, so we don’t inquire about why or how it is they work. We don’t inquire about the design decisions that shape
