Making Sense of AI. Anthony Elliott
There is more than one way in which the story of AI can be told. The term ‘artificial intelligence’, as we will examine in this chapter, consists of many different conceptual strands, divergent histories and competing economic interests. One way to situate this wealth of meaning is to return to 1956, the year the term ‘artificial intelligence’ was coined. This occurred at an academic event in the USA, the Dartmouth Summer Research Project, where researchers proposed ‘to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.3 The Dartmouth Conference was led by the American mathematician John McCarthy, along with Marvin Minsky of Harvard, Claude Shannon of Bell Telephone Laboratories and Nathaniel Rochester of IBM. Why the conference organizers chose to put the adjective ‘artificial’ in front of ‘intelligence’ is not evident from the funding proposal submitted to the Rockefeller Foundation. What is clear from this famous six-week event at Dartmouth, however, is that AI was conceived as encompassing a remarkably broad range of topics – from the processing of language by computers to the simulation of human intelligence through mathematics. Simulation – a kind of copying of the natural, transferred to the realm of the artificial – was what mattered. Or, at least, this is what McCarthy and his colleagues believed, designating AI as the field in which to attempt the simulation of advanced human cognitive performance in particular, and the replication of the higher functions of the human brain in general.
There has been a great deal of ink spilt on reconstructing what the Dartmouth Conference organizers hoped to accomplish, but what I wish to emphasize here is the astounding inventiveness of McCarthy and his colleagues, especially their willingness to press then untrained and untested scientific strategies and intellectual hunches into a terrain of intelligence newly designated as artificial. Every culture lives by the creation and propagation of new meanings, and it is perhaps not surprising – at least from a sociological standpoint – that the Dartmouth organizers should have favoured the term ‘artificial’ at a time when American society was held in thrall to all things new and shiny. 1950s America prized the manufactured over the natural: its ethos was ‘new is better’, its aesthetic the shiny and the synthetic. It was arguably the dawning of ‘the artificial era’: the epoch of technological conquest and ever more sophisticated machines, designed for overcoming the problems of nature. The construction of various categories and objects of the artificial was among the most acute cultural obsessions; nature was the obvious outcast. Nature, as a phenomenon external to society, had in a certain sense come to an ‘end’ – the result of the domination of culture over nature. And, thanks to the dream of an infinity of experiences to be delivered by artificial intelligence, human nature was not simply something to be discarded; its augmentation through technology would be an advance, a shift to the next frontier. This was the social and historical context in which AI was ‘officially’ launched at Dartmouth: a world brimming with hope and optimism, with socially regulated redistributions of value away from all things natural and towards the artificial. In a curious twist, however, jump forward some sixty or seventy years and it is arguable that, in today’s world, the term ‘artificial intelligence’ might not have been selected at all.
The terrain of the natural, the organic, the innate and the indigenous is today far more pervasive, relentlessly advanced as a vital resource for cultural life, and indeed things ‘artificial’ are often viewed with suspicion. The construction of the ‘artificial’ is no longer the paramount measure of socially conditioned approval and success.
Where does all of this leave AI? The field has advanced rapidly since the 1950s, but it is salutary to reflect on the recent intellectual history of artificial intelligence because that very history suggests it is not advisable to try to compress its wealth of meanings into a general definition. AI is not a monolithic theory. To demonstrate this, let’s consider some definitions of AI – selected more or less at random – currently in circulation:
1 the creation of machines or computer programs capable of activity that would be called intelligent if exhibited by human beings;
2 a complex combination of accelerating improvements in computer technology, robotics, machine learning and big data to generate autonomous systems that rival or exceed human capabilities;
3 technologically driven forms of thought that make generalizations in a timely fashion based on limited data;
4 the project of automated production of meanings, signs and values in socio-technical life, such as the ability to reason, generalize, or learn from past experience;
5 the study and design of ‘intelligent agents’: any machine that perceives its environment, takes action that maximizes its goal, and optimizes learning and pattern recognition;
6 the capability of machines and automated systems to imitate intelligent human behaviour;
7 the mimicking of biological intelligence to facilitate the software application or intelligent machine to act with varying degrees of autonomy.
There are several points worth highlighting about this list. First, some of these formulations define artificial intelligence in relation to human intelligence, but it must be noted that there is no single agreed definition, much less an adequate measurement, of human intelligence. AI technologies can already filter our email for spam, recommend what films we might like to watch and scan crowds for particular faces, but these accomplishments do not bear serious comparison with human capabilities. It might, of course, be possible to compare AI with rudimentary numeric measurements of human intelligence such as IQ, but it is surely not hard to show what is wrong with such a case: there is a difference between the numeric measurement of intelligence and native human intelligence. Cognitive processes of reasoning may indeed provide a yardstick for assessing progress in AI, but there are other forms of intelligence too. How people intuit each other’s emotions, how people live with uncertainty and ambivalence, or how people gracefully fail others and themselves in the wider world: these are all indicators of intelligence not easily captured by this list of definitions.
Second, we may note that some of these formulations of AI seem to raise more questions than they can reasonably hope to answer. On several of these definitions, there is a direct equation between machine intelligence and human intelligence, but it is not clear whether this addresses only instrumental forms of (mathematical) reasoning or emotional intelligence. What of affect, passion and desire? Is intelligence the same as consciousness? Can non-human objects have intelligence? What happens to the body in equating machine and human intelligence? The human body is arguably the most palpable way in which we experience the world; it is the flesh and blood of human intelligence. The same is not true of machines with faces, and it is fair to say that all of the formulations on this list displace the complexity of the human body. These definitions are, in short, remorselessly abstract, indifferent to different forms of intelligence as well as detached from the whole human business of emotion, affect and interpersonal bonds.
Third, we can note that some of these formulations are sanguine, others ambiguously so, and some