Yes, robots have been developed for sex, and the company that makes them opened a robot brothel near New York; its attempt to open one in Houston was stopped. We have no more to write about this subject, but we have to point out the obvious: No robot can love. Nor can any AI system. As much as some people may adore them, robots will never love them back. AI guru Kai-Fu Lee seems to have needed a life-threatening bout with cancer to realize this truth.[2]
Myth 2: AI Knows What It’s Doing
The other popular myth is that AI is a hazard because it “wants” something, namely, to replace humans. Very prominent US entrepreneurs such as Elon Musk have issued warnings along these lines. Futurists who predict a coming “superintelligence” warn that AI or machine intelligence will outstrip the human mind in due time, with dire consequences.[3] Once we achieve “artificial general intelligence,” others claim, it will figure out that it does not need us. Because it has to be plugged in in order to function, it will start to defend itself from humans, using every conceivable means to keep the electricity on.
An AI system doesn’t “want” anything. It lacks volition—a will. It is a mathematical object that works to attain the goals defined by its programmers.
AI performance at rule-bound games, such as chess, Go, Jeopardy, Dota 2, and other competitive eSports, depends entirely on the data sets, rules, and goals established by the programmers. The means to victory do not really matter, so long as the rules allow them. In one boat-racing game experiment, the AI trained extensively on the program and racked up a winning score, but it did so by crashing its boat into the wall as many times as possible.
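To make that logic concrete, here is a minimal, hypothetical sketch of reward mis-specification. The scoring rule, the numbers, and the two “policies” are all invented for illustration; this is not the actual boat-game code.

```python
# The programmers reward targets hit, plus a bonus for finishing.
# Nothing in the rule says the boat has to finish.

def score(targets_hit, reward_per_target, finish_bonus, finished):
    """Total reward under the programmers' (mis-specified) scoring rule."""
    return targets_hit * reward_per_target + (finish_bonus if finished else 0)

# Policy A: race to the finish line, hitting a few targets on the way.
honest = score(targets_hit=5, reward_per_target=10, finish_bonus=50, finished=True)

# Policy B: ignore the finish line and loop forever over a respawning
# target near the wall. The rules never said not to.
loophole = score(targets_hit=40, reward_per_target=10, finish_bonus=50, finished=False)

print(honest, loophole)  # 100 vs. 400: the degenerate policy wins
# An optimizer maximizing this score will learn Policy B every time,
# because the goal it was given is the score, not the race.
```

The point is not the arithmetic but the asymmetry: the programmers meant “win the race,” but the system can only pursue the number they actually wrote down.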
AI can “learn” the software, but not the spirit of the game, or competition, or camaraderie. AI can play well enough alone, but its record for team play is abysmal. Some observers of the AI-versus-human Dota 2 video game showdown remarked that the AI character pulled moves “as if guided by an alien.” The more accurate statement would be that it had mastered the software as directed, untrammeled by human hands on a controller. Of course audience members saw moves no human could pull off.
Don’t worry at all about AI having designs on us. Do worry about human stupidity, carelessness, and malice. Name a technology, any technology, any part of the great and growing human tool set since the end of the last Ice Age about twelve thousand years ago, that has not been abused. With computer software came the viruses. Tech militants who argue that AI systems should set the targets and decide the launches as well as guide the missiles are begging for hell. Don’t let them run the planet.
AI requires human intelligence and good common sense to function well. In 2016, developers at Microsoft notoriously released a chatbot called “Tay” that was supposed to learn language use from millennials on social media and pass it along liberally, with no filters at all. In a matter of days, Tay tweeted, “feminists . . . should all die and burn in hell” and “Hitler was right.” The company, of course, disabled it for “adjustments.” The episode was enormously embarrassing for Microsoft, but what on earth were the project managers thinking?
Like teenagers, technologists sometimes do things just because they are “cool”: winning at Jeopardy using an immense customized database and a natural language interface, or winning at chess using a similar approach, or winning at a video game, again with vast amounts of data, precision, and speed that no human could hope to match. But what value does this have for actual, working people besides entertainment and shock value?
So the real danger may be plain old negligence: thoughtless failures in AI design, and failure to understand systems thoroughly before we fully commercialize them. AI may seem new and shiny, but greed, fear, and laziness are the old ways to distort, destroy, and demonize new things.
Think of the resourceful young minds at MIT who put together “Norman” and proudly proclaimed it “the World’s First Psychopath AI.”[4] Norman was trained to respond to the inkblot images of the Rorschach test with macabre and even grisly captions. Associating text with images is now a normal AI function. Norman illustrates a very important point that we emphasize throughout the book: AI performance is no better than the data on which it was trained and the parameters (rules) by which it operates. Norman was programmed, you could say, to make the associations it does. There is nothing independent, or psychopathic, about Norman’s associations, or those of any AI system. Psychopathy is a human problem.
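A minimal sketch of that point, with invented data and a deliberately crude “model”: the identical training procedure, fed different labels, returns opposite answers for the same input. Everything here is hypothetical; it is not the Norman code.

```python
from collections import Counter

def train(examples):
    """A toy 'model': for each word, tally the labels it was seen with."""
    votes = {}
    for text, label in examples:
        for word in text.split():
            votes.setdefault(word, Counter())[label] += 1
    return votes

def predict(votes, text):
    """Predict by majority vote over the words' training labels."""
    tally = Counter()
    for word in text.split():
        if word in votes:
            tally += votes[word]
    return tally.most_common(1)[0][0] if tally else "unknown"

# Same procedure, two different training sets (both invented).
normal = train([("a quiet garden", "calm"), ("a sunny morning", "calm")])
norman = train([("a quiet garden", "grim"), ("a sunny morning", "grim")])

print(predict(normal, "a quiet morning"))  # "calm"
print(predict(norman, "a quiet morning"))  # "grim"
# Identical code, identical input; only the training data differ.
```

Nothing in the second model is “darker” than the first except the labels it was handed, which is the whole point.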
Myth 3: AI Is Inescapable
Only death is inescapable—and taxes.
Yes, your organization can certainly do well enough without AI, as it has in the past, but you place yourself at a competitive disadvantage if you reject the best available tools. We are not trying to stoke FOMO (fear of missing out). You want to solve your business problems, alleviate the pain points, and boost your productivity and performance.
AI applications are spreading like wildfire through almost every sector of the economy. The smoke of real disruption can’t be missed. Some AI software-as-a-service (SaaS) offerings leave older solutions in the dust. Others are just smoke and mirrors.
AI will not control everything. It is a human tool. It will never tell you how to live a good life or run your business well. It’s not going to take over the world.
Yes, there are plenty of imaginative people who claim that it will one day, but they should listen to Geoffrey Hinton, who in 1986 helped lay the path for AI development with his backpropagation algorithms. (I will go over these in chapter 2.) In an interview in 2017, Hinton flat-out denied that backpropagation will lead computers to learn independently, without supervision, as small children do. “I don’t think it’s how the brain works,” he said. “My view is throw it all away and start again.”[5]
Why would anyone want to try to replicate human intelligence in a machine anyway? Aren’t we people maddeningly unpredictable enough? Let’s just get machines to do more of the backbreaking, boring work. That trend has been going on for roughly three centuries. Let’s keep it up, keep our heads, and do it responsibly.
Myth 4: AI Has Insight
People claim that AI “perceives,” “learns,” “understands,” “comprehends,” and, worst of all, “discerns hidden patterns” in data, as if it had some kind of inherent insight. Referring to groups of AI algorithms as “deep learning” and “deep belief networks” doesn’t help.
AI algorithms churn through numbers without a clue as to what they refer to. They have no idea about the difference between correlation and causation, they have no understanding of context, and they are notoriously bad at analyzing what-ifs—how things might be if we imagine circumstances different from what they are.
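A minimal sketch of that blindness, with invented numbers: two series that merely drift upward over time correlate almost perfectly, and the algorithm reporting the correlation has no way to know whether either causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
trend = np.arange(100, dtype=float)

# Hypothetical series: both simply rise over time, for unrelated reasons.
ice_cream_sales = trend * 2.0 + rng.normal(0, 5, 100)
shark_sightings = trend * 0.3 + rng.normal(0, 2, 100)

r = np.corrcoef(ice_cream_sales, shark_sightings)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1.0

# The number is real; the causal story ("ice cream attracts sharks")
# is not. The algorithm computes the first and knows nothing of the second.
```

A person looks at those two series and asks what third thing (summer, say) drives both. The algorithm cannot ask; it can only report the number.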
AI applications should be predictable, transparent, explicable, rational, and, above all, accurate. No one has any need for more software that classifies things incorrectly, returns false answers, and makes bad predictions.
Backpropagation algorithms, on which AI, “deep learning,” and “neural networks” are based, take input numbers, make calculations based on them in “hidden layers,” and generate output numbers. You “train” the system by telling it what outputs it should produce, given the inputs. The algorithm then automatically adjusts the calculations in the “hidden layers” to produce the desired outputs. There can be anywhere from one or two to many of these hidden layers.
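Here is a minimal sketch of that loop in code, assuming a single hidden layer and a toy task (learning XOR). The layer sizes, learning rate, and iteration count are arbitrary choices for illustration, not anything prescribed by the algorithm.

```python
# A toy network: 2 inputs -> 4 hidden units -> 1 output.
# We tell it the outputs it should produce (y), and backpropagation
# repeatedly adjusts the hidden-layer calculations toward them.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))  # hidden layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: inputs flow through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back, and every
    # weight is nudged in the direction that shrinks that error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # after training: close to the targets 0, 1, 1, 0
```

Depending on the random starting weights, training may need more iterations; the mechanics, not the numbers, are the point. Nothing in the loop “understands” XOR; the weights are simply pushed until the outputs match what we demanded.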