We Humans and the Intelligent Machines. Jörg Dräger
When algorithms become political
Public debates and democratic decisions are sometimes necessary even in cases where one would not immediately suspect it. Navigation systems that display accidents and recommend detours have become an indispensable part of any car or smartphone. They used to recommend the same route to everyone when traffic jams occurred – leading in many cases to congested detours. Today, navigation systems redirect motorists to different routes depending on the current flow of traffic, reducing traffic load.
An interesting question from the policy perspective is which alternatives the navigation system is allowed to offer. If it is set to only show the quickest way, it might lead drivers through residential areas. At present, citizens’ initiatives are already being launched to block certain roads for through traffic and remove these shortcuts from route-planning software.5
And here is an intriguing thought experiment: Let us assume that a highway is to be temporarily closed and there will be a short and a long detour, both of which are needed to keep the traffic flowing. Which criteria should the navigation algorithm use to make its recommendation? An ecologically oriented programmer would perhaps specify that the fuel-efficient cars should be shown the longer route and the gas guzzlers the shorter. After all, this would protect the environment. However, it would not be fair from a social perspective if people with expensive luxury cars reached their destination faster than others. An algorithm optimized for fairness would probably be programmed to make a random choice about who is shown the long detour and who sees the short one. This in turn would not be the best alternative in terms of environmental impact. There is no clear right or wrong here; a policy choice is needed. And this should not be left to the car manufacturers or programmers, but should be discussed publicly.
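The two competing policies in this thought experiment can be sketched in a few lines of code. Everything below is illustrative: the consumption threshold, the route labels and the `Car` type are assumptions invented for the sketch, not anything specified in the text.

```python
from dataclasses import dataclass
import random

@dataclass
class Car:
    plate: str
    liters_per_100km: float  # fuel consumption

# Hypothetical cutoff separating "fuel-efficient" cars from "gas guzzlers"
EFFICIENT_THRESHOLD = 6.0

def assign_detour_eco(car: Car) -> str:
    """Ecological policy: efficient cars are sent on the long detour,
    gas guzzlers on the short one, to minimize total fuel burned."""
    return "long" if car.liters_per_100km <= EFFICIENT_THRESHOLD else "short"

def assign_detour_fair(car: Car, rng: random.Random) -> str:
    """Fairness policy: every car gets the same 50/50 chance,
    regardless of what it consumes or costs."""
    return rng.choice(["long", "short"])
```

Note that the policy choice lives entirely in which function is called, not in any technical constraint; that is precisely why the chapter argues it should be debated publicly rather than decided by programmers.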
Distorted images of a superintelligence
When we talk about algorithms, the term artificial intelligence (AI) quickly comes up. This refers to computer programs designed to imitate the human ability to achieve complex goals. In reality, however, AI systems have so far been anything but intelligent; instead, they are machines well trained for solving very specific tasks. People have to define the tasks and train the devices, because an algorithm does not know on its own whether a photo depicts a dog or a house or whether a poem was written by Schiller or a student in elementary school. The more specific the task and the more data the algorithm can learn from, the better its performance will be.
In contrast to human intelligence, however, AI is not yet able to transfer what it has learned to other situations or scenarios. Computers like Deep Blue can beat any professional chess player, but would initially have no chance in a game on a larger board with nine times nine instead of eight times eight squares. Another task, such as distinguishing a cat from a mouse, would completely overwhelm these supposedly intelligent algorithms. According to industry experts, this ability to transfer acquired knowledge will remain the purview of humans for the foreseeable future.6 Strong AI, also called superintelligence by some, which can perform any cognitive task at least as well as humans, remains science fiction for the time being. When we talk about AI in this book, we therefore mean what is known as weak or narrow AI which can achieve a limited number of goals set by humans.
The debate about artificial intelligence includes many myths. Digital utopians and techno-skeptics both sketch out visions of the future which are often diametrically opposed. Some consider the emergence of superintelligence in the 21st century to be inevitable, others say it is impossible. At present, nobody can seriously predict whether AI will ever advance to this “superstate.”7 In any event, the danger currently lies less in the superiority of machine intelligence than in its inadequacy. If algorithms are not yet mature, they make mistakes: Automated translations produce nonsense (hopefully not too often in this book), and self-driving cars occasionally cause accidents that a person at the wheel might have avoided.
Instead of drawing a dystopian distortion of AI and robots, we should put our energy into the safe and socially beneficial design of existing technologies. In the thriving interaction of humans and machines, the strengths and weaknesses of both sides can be meaningfully balanced. This is exactly the subject examined in the following two chapters.
3 People make mistakes
“Artificial intelligence is better than natural stupidity.” 1
Wolfgang Wahlster, Former Director of the
German Research Center for Artificial Intelligence
To err is human. This well-known saying provides consolation when something fails; at the same time, it seems to dissuade us from pursuing perfection. A mistake can even have a certain charm, especially when a person is self-deprecating about her own fallibility. But the original Latin phrase, from which the saying derives, is longer than just the first words. Written by the theologian Saint Jerome more than 1,600 years ago, the complete quotation is: Errare humanum est, sed in errore perseverare diabolicum. To err is human, but to persist in error is diabolical.
As sympathetic as a small lapse that does not entail any serious consequences might seem, systematic misjudgments are tragic when they relate to existential questions. Cancer diagnoses, court decisions, job hires – generosity should not be the watchword here when it comes to avoidable mistakes.
Algorithms can help when people reach their cognitive limits. There is an increasing need for algorithmic support, especially in areas that are particularly important to society, such as medicine or the judiciary. On the one hand, psychological research has shown that the quality of human decisions is suboptimal even when the decisions are of great significance and made by experts. On the other, big data and the computer power for processing it have led to new ways of optimizing diagnoses, analyses and judgments.
While scientists have become more adept at understanding the limits of our cognitive abilities, advances in IT are making more information available to us. Evaluating that information, however, is becoming increasingly challenging, even overwhelming, for human brains. To refuse ourselves the support machines can provide would mean to persist in error. By accepting such support, we could overcome our intellectual limitations, which manifest themselves as information overload, flawed reasoning, inconsistency and the feeling of being overwhelmed when dealing with complex situations. To refrain from doing so would not be human as described by Saint Jerome, but diabolical.
Information overload: Drowning in the flood of data
The radiology department at the University Hospital in the German town of Essen is nothing but a huge data-processing machine. It is big enough that visitors can take an extended stroll through the premises. The rooms on the right and left of the long corridor are, even now, on a sunny afternoon, dim and dark. With the blinds closed, radiologists sit in front of large monitors and process data. They are the central processing units of radiology. The specialists click through information: patient files, x-rays, scans, MRIs. In one room, images of the brain of a stroke patient flicker across the monitors while, next door, cross-sectional images of a lung with metastases are examined.
The radiologists at the hospital look at a good 1,000 cases per day. The amount of information they have to process has multiplied in recent years – and not only in Essen. Researchers at Mayo Clinic in the United States have evaluated 12 years’ worth of the organization’s data and duty rosters. During that time, not only did the number of annual examinations almost double, the volume of recorded