2 The Starbuck Case: Methods for Addressing Confirmation Bias in Forensic Authorship Analysis
Tim Grant and Jack Grieve
THE STARBUCK CASE
In 2012, we were contacted by Nottinghamshire Police for assistance in a missing persons investigation. Nearly two years previously, Debbie had married Jamie Starbuck following a relatively brief courtship. They told the family that they planned to leave the country shortly after the wedding and embark on a two-year around-the-world adventure, financed primarily by Debbie’s savings.
Over time, however, Debbie’s family grew concerned. She was writing much less regularly and with much less detail than had been her practice on previous trips abroad, during which she regularly sent long emails describing her travels. Increasingly worried, they contacted the police, who wrote separately to Jamie’s and Debbie’s email addresses and had a brief exchange with each. These emails suggested that the couple had briefly separated but were essentially well. The exchanges, however, led the police to suspect that Jamie was replying not only from his own email address but also from Debbie’s, writing as her. The police instigated a missing persons inquiry locally, nationally, and internationally, and approached the Centre for Forensic Linguistics (CFL) at Aston University to seek assistance in analyzing the emails to determine whether their suspicions were correct.
THE PROBLEM OF CONFIRMATION BIAS
Since Dror et al.’s (2005, 2006) work gave the issue prominence, confirmation bias in forensic evidence has received considerable attention. Dror’s work on fingerprint examiners (e.g., 2005) and even forensic DNA analysis (2011) demonstrates how the conclusions of honest, competent forensic scientists are subject to cognitive bias and will be influenced by the briefings they are given prior to carrying out their examinations. This work raises the question of how such bias can be designed out of forensic practice. The UK Forensic Regulator’s (2015) guidance paper on cognitive bias in forensic evidence examinations discusses the risks and provides methods for risk reduction, as well as specific examples from across the forensic sciences. For example, with regard to fingerprint examination, the regulator suggests that processes should be examined and adapted to “remove or limit contextual information that adds no tangible value to the fingerprint examination process” (Forensic Regulator, 2015, 8.5.3b).
The question this chapter addresses, with regard to the Starbuck case, is the extent to which we can design processes in forensic authorship analysis that address issues of cognitive bias. We do this through the presentation of a specific case – the murder in 2010 of Debbie Starbuck – and of our role in that investigation: we describe how we attempted to address potential biases through the procedures we adopted, and we then evaluate the effectiveness of those attempts.
Forensic authorship analysis is a broad discipline, but its main tasks can be characterized as comparative authorship analysis, where the aim is to provide evidence that helps attribute one or more texts to a particular author by comparison with writings of undisputed authorship (as carried out in the Starbuck case), and author profiling, where the aim is to identify the sociolinguistic background of the writer of a text. (See Grant & MacLeod, 2020, Ch. 6, for a discussion of these and other possible authorship analysis tasks.) Whatever the task, authorship analysis is considerably less well protected against cognitive biases than many other forms of forensic examination, and, before turning to the specifics of the case, we examine here why this might be so.
First, forensic authorship analysis is still largely the domain of individual experts working from universities or small consultancies that typically contain just one or two analysts. Judging from cases reported in the literature, most analyses are carried out by individuals rather than as part of an established institutional procedure with quality controls, such as verification of reports and opinions by a second expert. The individualistic nature of authorship analyses leaves them open to potential cognitive biases.
In terms of approach, there is still some division between analysts who adopt wholly computational approaches to authorship questions, sometimes referred to as stylometric approaches, and those who rely more on linguistic training and expertise in text analysis, sometimes referred to as stylistic approaches. The vulnerability to bias remains whether the analysis is largely computational or depends on an individual’s skills in noticing or eliciting style features that might be indicative of the style of author A or author B, although the nature of this bias can differ.
For the stylistic analyst, the risk of bias may be more obvious: unconscious bias can affect decisions to include or exclude a particular feature, and indeed noticing or failing to notice a particular feature may itself be a source of bias. In stylometric analyses, however, a series of design decisions must also be made, and biases can creep in with regard to the selection of comparison materials, the identification of features to be elicited, and the statistical or other methods of prediction applied. Argamon (2018) usefully discusses the possible decision points and pitfalls in computational forensic authorship analysis, and every decision point is also a point at which conscious or unconscious bias can enter an analysis.
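To make these decision points concrete, the following is a minimal, purely illustrative sketch of a stylometric comparison in Python. It is not the method used in the Starbuck case, nor a procedure described by Argamon; the function-word list, placeholder texts, and similarity measure are hypothetical choices included only to show where such design decisions arise.

```python
# A minimal, hypothetical sketch of a stylometric comparison pipeline.
# It is NOT the method used in the Starbuck case; the feature list, texts,
# and similarity measure are placeholder choices that simply mark the kinds
# of design decisions where bias could enter a computational analysis.

import math
import re
from collections import Counter

# Decision point 1: which style features to count.
# An arbitrary short list of English function words is used here.
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "was", "for"]


def feature_vector(text):
    """Relative frequencies of the chosen function words in a text."""
    # Decision point 2: how the text is tokenized.
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[word] / total for word in FUNCTION_WORDS]


def cosine_similarity(a, b):
    """Decision point 3: the comparison measure (cosine similarity here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


# Decision point 4: the selection of comparison material.
# These placeholder strings stand in for undisputed writings of each candidate.
known_texts = {
    "Author A": "placeholder text: undisputed writings of author A",
    "Author B": "placeholder text: undisputed writings of author B",
}
disputed_text = "placeholder text: the disputed email"

disputed_vector = feature_vector(disputed_text)
for author, known in known_texts.items():
    score = cosine_similarity(disputed_vector, feature_vector(known))
    # Decision point 5: how (and whether) a score is turned into an opinion.
    print(f"Similarity of disputed text to {author}'s known writing: {score:.3f}")
```

Even in a toy pipeline such as this, each commented decision (which features to count, how to tokenize, which comparison material to include, which similarity measure to apply, and how any resulting score is interpreted) is a point at which conscious or unconscious bias could shape the outcome.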
Second, unlike with the analysis of a fingerprint or a DNA sample, in authorship analyses it is often not possible to isolate the sampled material from the story of the investigation. In fingerprint examination, it may be possible, as advised by the UK Forensic Regulator, to avoid telling an examiner much of the contextual information about a crime, but in linguistic analysis it is often the case that the texts themselves contain part of the wider story of the investigation. This unavoidable knowledge of