science, counterfactuals30 have been defined as “subjunctive conditionals in which the antecedent is known or supposed for purposes of argument to be wrong” (Brian Skyrms, quoted in Tetlock and Belkin 1996: 4).31 They are considered to offer a convenient tool to explore whether “things could have turned out differently” (7).

      There is a general consensus among researchers from various fields that counterfactual analysis is “unavoidable” to explain phenomena that cannot be studied by controlled experiments that randomize the initial conditions (Tetlock and Belkin 1996: 6). There is, however, no consensus about how to engage in counterfactual analysis.32 Formalizing cognitive maps into DAGs provides a new approach to study counterfactuals.33 Specifically, it allows the researcher to intervene on the actors’ belief systems and test when they would have made different decisions had they held different beliefs. This bridges the gap between actors and structures by intervening on beliefs about the world, rather than on the world itself.

      Modeling Change in the External World

      External Interventions

      To model change in the world, Pearl introduces external interventions. To illustrate this, he draws on a simple DAG, shown below. This DAG represents relationships between the seasons of the year (A), the falling of rain (B), the sprinkler being turned on (C), the pavement being wet (D), and the pavement being slippery (E) (15). Specifically, the DAG shows a directed order from A to E in which the season influences the falling of rain (A → B) and the turning on of the sprinkler (A → C); the falling of rain and the turning on of the sprinkler in turn influence the pavement being wet (B → D and C → D); and the pavement being wet in turn influences the pavement being slippery (D → E).

      Figure 11. Example of a directed acyclic graph. Pearl 2000: 15.
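      To make the structure concrete, the DAG in Figure 11 can be written as a parent map in which each vertex lists its direct causes. The following Python sketch is illustrative only; the labels follow Pearl's A through E, but the code is not part of Pearl's analysis:

```python
# Pearl's sprinkler DAG (Figure 11) as a parent map:
# each vertex maps to the list of its direct causes.
parents = {
    "A": [],          # season
    "B": ["A"],       # rain:      A -> B
    "C": ["A"],       # sprinkler: A -> C
    "D": ["B", "C"],  # wet:       B -> D, C -> D
    "E": ["D"],       # slippery:  D -> E
}

# The directed order from A to E can be recovered by a topological sort,
# which lists every vertex after all of its parents.
def topological_order(parents):
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for p in parents[v]:
                visit(p)
            order.append(v)
    for v in parents:
        visit(v)
    return order

print(topological_order(parents))  # ['A', 'B', 'C', 'D', 'E']
```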

      The directed order from A to E may be described as dependency (e.g., Spirtes 1995). It differs from other orders that do not address directed relationships. For example, consider flipping a coin multiple times: the result of one toss does not depend on the result of the previous toss.

      Specifically, there are two types of dependency conditions: (1) conditional dependence between particular vertices connected by an edge, and (2) conditional independence between vertices that are not connected by an edge. For example, given three variables A, B, and C, one can say that A and B are conditionally independent given C if knowledge of A remains unchanged by learning B once C is known. Formally, this can be expressed as a conditional probability statement: P(A|B, C) = P(A|C). On the other hand, one can say that A is conditionally dependent on B if knowing B influences knowledge of A. Formally, this can be expressed through the definition of conditional probability: P(A|B) = P(A,B)/P(B).
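      As a purely numerical illustration of these two statements, consider the following sketch. The joint distribution is an assumption chosen so the arithmetic is easy to follow; it is constructed so that A depends on C but is independent of B once C is known:

```python
# A toy joint distribution over three binary variables, built so that
# A depends on C but is independent of B given C. The numbers are
# assumptions chosen for easy arithmetic.
from itertools import product

joint = {}
for a, b, c in product([0, 1], repeat=3):
    p_a_given_c = 0.8 if a == c else 0.2
    joint[(a, b, c)] = 0.5 * 0.5 * p_a_given_c  # P(C) * P(B) * P(A|C)

def prob(pred):
    # Sum the joint over all assignments satisfying the predicate.
    return sum(p for (a, b, c), p in joint.items() if pred(a, b, c))

# Conditional probability: P(A=1 | B=1) = P(A=1, B=1) / P(B=1)
print(prob(lambda a, b, c: a == 1 and b == 1) / prob(lambda a, b, c: b == 1))

# Conditional independence: P(A=1 | B=1, C=1) equals P(A=1 | C=1)
p_given_bc = (prob(lambda a, b, c: a == 1 and b == 1 and c == 1)
              / prob(lambda a, b, c: b == 1 and c == 1))
p_given_c = (prob(lambda a, b, c: a == 1 and c == 1)
             / prob(lambda a, b, c: c == 1))
print(p_given_bc, p_given_c)  # both 0.8 (up to floating point)
```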

      The DAG above then illustrates the following condition of independence. Knowing that the pavement is wet (D) makes knowing that the pavement is slippery (E) independent of knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In short, knowledge of D establishes independence between E and A, B, C. On the other hand, knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C) does not make knowing the pavement is slippery (E) independent of knowing the pavement is wet (D). In short, E is conditionally dependent on D. This is the case because knowing the pavement is slippery (E) is directly dependent on knowing that the pavement is wet (D), but only indirectly dependent on knowing the season (A), whether it rains (B), or whether the sprinkler is turned on (C). In Pearl’s (2000: 21) vocabulary, the pavement’s being wet (D) “mediates” between the pavement’s being slippery (E) and whether it rains (B), the sprinkler is turned on (C), and the season (A).
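      This screening-off claim can be verified by brute-force enumeration. In the sketch below, the conditional probability tables are invented for illustration (Pearl's discussion is qualitative); the independence follows from the factorization itself, whatever numbers are chosen:

```python
# Brute-force check that knowledge of D screens E off from A, B, and C.
from itertools import product

p_A = {0: 0.5, 1: 0.5}                              # P(A = a)
p_B1_given_A = {0: 0.1, 1: 0.7}                     # P(B = 1 | A = a)
p_C1_given_A = {0: 0.8, 1: 0.2}                     # P(C = 1 | A = a)
p_D1_given_BC = {(0, 0): 0.0, (0, 1): 0.9,
                 (1, 0): 0.9, (1, 1): 0.99}         # P(D = 1 | B, C)
p_E1_given_D = {0: 0.0, 1: 0.9}                     # P(E = 1 | D = d)

def bern(p1, x):
    # Probability that a binary variable equals x, given P(x = 1) = p1.
    return p1 if x == 1 else 1 - p1

def joint(a, b, c, d, e):
    # P(A,B,C,D,E) = P(A) P(B|A) P(C|A) P(D|B,C) P(E|D)
    return (p_A[a] * bern(p_B1_given_A[a], b) * bern(p_C1_given_A[a], c)
            * bern(p_D1_given_BC[(b, c)], d) * bern(p_E1_given_D[d], e))

def cond_prob(event, given):
    states = list(product([0, 1], repeat=5))
    num = sum(joint(*s) for s in states if event(*s) and given(*s))
    return num / sum(joint(*s) for s in states if given(*s))

# Once D is known, also learning A changes nothing about E:
print(cond_prob(lambda a, b, c, d, e: e == 1,
                lambda a, b, c, d, e: d == 1))            # 0.9
print(cond_prob(lambda a, b, c, d, e: e == 1,
                lambda a, b, c, d, e: d == 1 and a == 1))  # 0.9
```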

      Figure 12. Example of an intervention. Pearl 2000: 23.

      Given these observations, Pearl models an external intervention in which the vertex representing knowledge about whether the sprinkler is on is defined as “SPRINKLER = ON.” This is visualized by Figure 12.

      This figure shows that intervening on C so that it is known that the sprinkler is on makes it possible to consider the effect of “SPRINKLER = ON” without considering A → C. In the figure, this is shown by the deletion of the arrow between A and C. Formally, this can be expressed by a change in the probability distributions representing this DAG. The probability distribution of this DAG before the intervention (Figure 11) can be represented as

      P(A, B, C, D, E) = P(A) P(B|A) P(C|A) P(D|B, C) P(E|D).

      The probability distribution of this DAG after the intervention (Figure 12) lacks P(C|A) due to knowledge of C and can be represented as

      P_{C=On}(A, B, D, E) = P(A) P(B|A) P(D|B, C=On) P(E|D).

      The removal of A → C [P(C|A)] from the probability function is possible because knowing C (that the sprinkler is on) makes it unnecessary to consider whether A (the season) had an influence on C, as indicated by A → C. In Pearl’s words, “Once we physically turn the sprinkler on and keep it on, a new mechanism (in which the season has no say) determines the state of the sprinkler” (23).
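      To see the deletion at work, the two formulas can be compared in code. The conditional probability tables below are again illustrative assumptions, not Pearl's: the first function follows the pre-intervention factorization, and the second drops the factor P(C|A) and fixes C = On:

```python
# Pre- and post-intervention distributions for the sprinkler DAG,
# using invented conditional probability tables.
from itertools import product

p_A = {0: 0.5, 1: 0.5}                              # P(A = a)
p_B1_given_A = {0: 0.1, 1: 0.7}                     # P(B = 1 | A = a)
p_C1_given_A = {0: 0.8, 1: 0.2}                     # P(C = 1 | A = a)
p_D1_given_BC = {(0, 0): 0.0, (0, 1): 0.9,
                 (1, 0): 0.9, (1, 1): 0.99}         # P(D = 1 | B, C)
p_E1_given_D = {0: 0.0, 1: 0.9}                     # P(E = 1 | D = d)

def bern(p1, x):
    return p1 if x == 1 else 1 - p1

# Before the intervention:
# P(A,B,C,D,E) = P(A) P(B|A) P(C|A) P(D|B,C) P(E|D)
def joint(a, b, c, d, e):
    return (p_A[a] * bern(p_B1_given_A[a], b) * bern(p_C1_given_A[a], c)
            * bern(p_D1_given_BC[(b, c)], d) * bern(p_E1_given_D[d], e))

# After the intervention SPRINKLER = ON: the factor P(C|A) is deleted
# and C is fixed to 1, mirroring P_{C=On}(A, B, D, E) in the text.
def joint_c_on(a, b, d, e):
    return (p_A[a] * bern(p_B1_given_A[a], b)
            * bern(p_D1_given_BC[(b, 1)], d) * bern(p_E1_given_D[d], e))

# For instance, the probability that the pavement is slippery once the
# sprinkler is turned on and kept on:
p_slippery = sum(joint_c_on(a, b, d, 1)
                 for a, b, d in product([0, 1], repeat=3))
print(round(p_slippery, 4))  # 0.8424 under these assumed tables
```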

      Drawing on such interventions, it becomes possible to study change in the external world by intervening on particular vertices that represent certain states of the world. Related to cognitive maps, this means intervening on particular beliefs. As I show below, this allows me to explore when individuals would not have decided to take up arms.

      Causal Relationships

      Before relating external interventions to cognitive maps, it is important to note an underlying assumption behind such interventions: namely, that the edges in DAGs represent causal relationships. As Pearl observes, it is not possible to model external interventions by relying exclusively on probabilistic models.

      In this context, Pearl argues that DAGs by nature represent causal rather than probabilistic relationships, and that their edges indicate “a stable and autonomous physical mechanism” (22). Concerning the example above, he says that the directed order from A to E is established by “causal intuition” (15). Following this intuition, one understands that the season influences the falling of rain (A → B) and the turning on of the sprinkler (A → C), that the falling of rain and the turning on of the sprinkler in turn influence the pavement being wet (B → D and C → D), and that the pavement being wet in turn influences the pavement being slippery (D → E).

      According to Pearl, cause-effect connections of such physical mechanisms are so strong that it is “conceivable to change such [a connection] without changing the others” (22; emphasis in original). This allows Pearl to model an external intervention by defining a particular vertex as a particular state or thing, and to trace the effect of this intervention on what else is represented by the DAG (as indicated by the remaining vertices). Accordingly, it is no longer necessary to specify a new probability function that represents the impact of each intervention on all the other vertices. Instead, the external intervention requires only a “minimum of extra information” (22).

      External Interventions on Cognitive Maps

      Based on the similarity between DAGs and cognitive maps, it is possible to apply Pearl’s external intervention to cognitive maps. This allows
