Neuropolis: A Brain Science Survival Guide. Robert Newman
‘You know, Doc, ever since the accident, if someone vexes me by like, fucking with my mind, I just go apeshit. Just lose it. Go fucking mental. I can’t keep a lid on my temper any more cos now I ain’t got no lid. Accident blew it off, know what I mean? I’m a fucking apeman, a wild man. So if someone like you was to, you know, say one thing and then the complete opposite? Well, let’s just say I wouldn’t be the only man in town with an iron bar in his head, you know what I mean? Now get off my property and take your mini fucking rockery with you!’
Nothing in the Phineas Gage story makes sense except in the light of phrenology, but phrenology is played down in popular retellings because Gage’s accident is supposed to represent a decisive break with the past. With a big bang and a cloud of smoke the new science of cortical localisation is born. A couple of years later in 1861 Paul Broca publishes ‘Sur le principe des localisations cérébrales’, in the Bulletin de la Société d’Anthropologie, in which he announces to the world how reason and emotion are divvied up in the brain:
The most noble cerebral faculties have their seat in the frontal convolutions, whereas the temporal, parietal and occipital lobe convolutions are appropriate for the feelings, penchants and passions.
Broca’s schema betrays how both the new science of cortical localisation and the old science of bumpology share a common ancestor in the ancient Greek idea that Reason is a charioteer controlling the wild beasts of Passion. A line straight as a tamping iron runs from this Greek idea, through Broca relegating emotion to a penchant, and all the way to the Myth of The Supermax Brain. This tradition, I think, helps explain why Ramachandran bares his canines in such a ferocious snarl at the ‘worthless vagabond with absolutely no moral sense.’
Incidentally, no-one ever accuses the Rutland and Burlington Railroad Company bosses of having absolutely no moral sense, even though they never paid Phineas Gage one red cent in compensation. But that’s probably because rail bosses destitute of human decency were seen as just one more occupational hazard in the working life of a railway navvy, as this nineteenth-century American railroad song makes clear:
Last week a premature blast went off,
A mile in the air went Big Jim Goff.
When the next pay day came round
Jim Goff a dollar short was found.
When he asked what for, came this reply:
‘You’re docked for the time you was up in the sky!’
The benchmark for Artificial Intelligence (AI) is the famous Turing Test. Alan Turing’s 1950 thought-experiment states that if a robot can convince you that you’re talking to another human being, then that robot can be said to have passed the Turing Test, thereby proving that there is nothing special about the human brain that a sufficiently powerful computer couldn’t do just as well.
Except the Turing Test proves no such thing. All it proves is that humans can be tricked, but everyone knew that already … except Alan Turing, alas, who in the last week of his life – and this is a true story – went to a funfair fortune-teller on Blackpool promenade. Nobody knows what the Gypsy Queen told him, but he emerged from her tent white as a sheet and killed himself two days later. But funfairs have had centuries of practice in the art of tricking punters.
Weirdly, a funfair nearly did for Isaac Newton. In a posthumous biographical sketch, his friend John Wickens says that when they went to Sturbridge County Fair, Newton had a complete meltdown, and was close to jettisoning his whole theory of how gravity acts on every object in the universe, after what Wickens describes as: ‘a frustrating hour at the coconut shy’.
In an interview with The Times about Artificial Intelligence, Brian Cox said:
There is nothing special about human brains. They operate according to the laws of physics. With a sufficiently complex computer, I don’t see any reason why you couldn’t build AI. We’ll soon have robot co-workers, the difference is we’ll even be taking them to the office party.
I wrote a letter to The Times. They didn’t print it. I don’t know why. It was quite short. It just said: ‘No we fucking won’t’.
Emotional robots are a vision of the future to be found in the Gypsy Queen’s crystal ball but not in science. Not least because of these two uncontroversial scientific facts:
1. We are not machines, we are animals.
2. No experiment performed by anyone anywhere in the whole world at any time has found a shred of evidence to suggest the remotest possibility that a ‘sufficiently complex computer’ will ever be able to do literally the first thing that a mammalian brain does: experience emotion.
We came crying hither.
Thou know’st the first time that we smell the air
We wawl and cry …
But to listen to AI cultists you’d think we were knee-deep in this sort of evidence. According to Radio 4’s Inside Science program, for example, we’ll soon have robot lawyers.
A senior IBM executive explained to Inside Science listeners that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant.
Interestingly, however, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused in 2010 of the largest hedge-fund insider trading in history, he hired one of those old-time humanoid defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew there’s no justice in automated law.
Not all the gigabytes in the world will ever make a set of algorithms a fair trial. There can be no justice in the broad sense without procedural justice in the narrow sense. Even if the outcome of a jury trial is identical to the outcome of an automated trial, due process leaves one verdict just and the other unjust. Justice entails being judged by flesh and blood citizens in a fair process. Not least because victims increasingly demand that the court consider their psychological and emotional suffering – which computers cannot do.
There’s a curious contradiction here that nobody ever talks about: at the same time as science proclaims its moral neutrality, proponents of AI want machines to become moral agents. Never more so than with what Nature has taken to calling ‘ethical robots’.
Ethical robots it seems will come as standard fittings on the driverless cars being developed by Apple, Google and Daimler. They will answer the big questions, automatically …
Should driverless cars be programmed to mount the pavement to avoid a head-on collision? Should they swerve to hit one person in order to avoid hitting two? Two instead of four? Four instead of a lorry full of hazardous chemicals? This is what the ‘ethical robot’ fitted into each driverless car will decide. How will it decide? In July 2015, Nature published an article, ‘The Robot’s Dilemma’, which explained how computer scientists:
have written a logic program that can successfully make a decision … which takes into account whether the harm caused is the intended result of the action or simply necessary to it.
Is the phrase ‘simply necessary’ chilling enough?