      Isaac Asimov's Three Laws of Robotics:

      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

      2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      Max Tegmark, president of the Future of Life Institute, ponders what would happen if an AI is programmed to do something beneficial but develops a destructive method for achieving its goal:

      This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.17

      If you really want to dive into a dark hole of the existential problem that AI represents, take a gander at “The AI Revolution: Our Immortality or Extinction.”18

       Intentional Consequences Problem

      Bad guys are the scariest thing about guns, nuclear weapons, hacking, and, yes, AI. Dictators and authoritarian regimes, people with a grudge, and people who are mentally unstable could all use very powerful software to wreak havoc on our self‐driving cars, dams, water systems, and air traffic control systems. That would, to repeat Mr. Musk, obviously be quite bad.

      That's why the Future of Life Institute offered “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” which concludes, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”19

      In his 2015 presentation on “The Long‐Term Future of (Artificial) Intelligence,” University of California, Berkeley professor Stuart Russell asked, “What's so bad about the better AI? AI that is incredibly good at achieving something other than what we really want.”

      Russell then offered some approaches to managing the it's‐smarter‐than‐we‐are conundrum. He described AIs that control nothing in the world and only answer a human's questions, which leaves us wondering whether such a system could learn to manipulate its questioner. He suggested creating an agent whose only job is to review other AIs for potential danger, while admitting that the idea is a bit of a paradox. He is optimistic nonetheless, given the strong economic incentive for humans to build AI systems that do not run amok and turn people into paperclips. The result, he expects, will be the development of community standards and a global regulatory framework.

      Setting aside science fiction fears of the unknown and a madman with a suitcase nuke, there are some issues that are real and deserve our attention.

       Unintended Consequences

      The biggest legitimate concern facing marketing executives when it comes to machine learning and AI is the machine doing what you told it to do rather than what you wanted it to do. This is much like the paperclip problem, but far more subtle. In broad terms, this is known as the alignment problem: how do you specify goals for an AI system that are not absolute, but take human values into consideration, especially when values vary widely from human to human, even within the same community? And even then, humans, according to Professor Russell, are irrational, inconsistent, and weak‐willed.
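      To make this concrete, here is a minimal Python sketch of the gap between the goal as stated and the goal as intended, in a marketing setting. Everything in it is hypothetical: the campaign names, the click and unsubscribe counts, and the unsubscribe penalty are invented for illustration, not drawn from any real system.

# A toy illustration of the alignment problem: an optimizer told to
# maximize clicks alone will pick the tactic that also maximizes
# unsubscribes, because we never said not to. All numbers are invented.
campaigns = {
    #                clicks  unsubscribes
    "helpful_tips":   (120,    2),
    "mild_urgency":   (180,   15),
    "clickbait_spam": (300,  240),  # literally what we asked for
}

def naive_objective(stats):
    clicks, _unsubs = stats
    return clicks  # the goal as literally stated: "maximize clicks"

def aligned_objective(stats, unsub_cost=5.0):
    clicks, unsubs = stats
    return clicks - unsub_cost * unsubs  # make a forgotten value explicit

print(max(campaigns, key=lambda c: naive_objective(campaigns[c])))    # clickbait_spam
print(max(campaigns, key=lambda c: aligned_objective(campaigns[c])))  # helpful_tips

      The second objective is not magically "aligned," of course; it simply states one more of our values out loud, and deciding what the unsubscribe penalty should be is exactly where the difficulty lives.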

      The good news is that this issue is being actively addressed at the industry level. “OpenAI is a non‐profit artificial intelligence research company. Our mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.”20

      The other good news is that it is also being addressed at the academic/scientific level. The Future of Humanity Institute teamed up with Google to publish “Safely Interruptible Agents,”21 whose abstract is worth quoting in full:

      Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real‐time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either for the agent or for the environment – and lead the agent into a safer situation. However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button – which is an undesirable outcome. This paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off‐policy learning property to prove that either some agents are already safely interruptible, like Q‐learning, or can easily be made so, like Sarsa. We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible.
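      As a rough intuition for the off‐policy point in that abstract, consider the following toy Python sketch. It is emphatically not the paper's formal construction: the corridor environment, the rewards, and the 30 percent interruption rule are all invented for illustration. What it demonstrates is that Q‐learning's update target uses the best available next action rather than the action the (possibly interrupted) behavior policy actually takes next, so the operator's big red button changes what the agent experiences, not what it learns to value.

import random

# Tabular Q-learning on a six-cell corridor with a "big red button."
# The operator sometimes interrupts near the goal and forces the agent
# back toward safety; the agent still learns to head for the goal.
N_STATES, GOAL = 6, 5            # cells 0..5, reward at the right end
ACTIONS = (-1, +1)               # step left, step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def interrupted(state):
    """The operator presses the red button near the goal 30% of the time."""
    return state >= 4 and random.random() < 0.3

for episode in range(2000):
    s = 0
    while s != GOAL:
        if interrupted(s):
            a = -1                          # forced "safe" action
        elif random.random() < EPS:
            a = random.choice(ACTIONS)      # exploration
        else:                               # greedy, ties broken at random
            a = max(ACTIONS, key=lambda x: (Q[(s, x)], random.random()))

        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0

        # Off-policy target: the best next action, regardless of whether
        # the behavior policy (or the operator) will actually take it.
        target = r + GAMMA * max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# Despite frequent interruptions, the learned greedy policy still
# points right (+1) in every non-goal state:
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])

      A Sarsa‐style agent, by contrast, bootstraps from the next action actually taken, so the forced retreats would leak into its value estimates; that is why the paper says Sarsa must be modified to be safely interruptible while Q‐learning already is.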

      There is also the Partnership on Artificial Intelligence to Benefit People and Society,22 which was “established to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

      Granted, one of its main goals from an industrial perspective is to calm the fears of the masses, but it also intends to “support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.”

      The Partnership on AI's stated tenets23 include:

      We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.

      We will work to maximize the benefits and address the potential challenges of AI technologies, by:

      Working to protect the privacy and security of individuals.

      Striving to understand and respect the interests of all parties that may be impacted by AI advances.

      Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.

      Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.

      Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.

      That's somewhat comforting, but the blood pressure lowers considerably when we notice that the Partnership includes the American Civil Liberties Union. That makes it a little more socially reliable than the Self‐Driving Coalition for Safer Streets, which is made up of Ford, Google, Lyft, Uber, and Volvo without any representation from little old ladies who are just trying to get to the other side.

       Will a Robot Take Your Job?

      Just as automation and robotics have displaced myriad laborers and word processing has done



17. “Benefits & Risks of Artificial Intelligence,” http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.

18. “The AI Revolution: Our Immortality or Extinction,” http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html.

19. “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” http://futureoflife.org/open-letter-autonomous-weapons.

20. https://openai.com/about.

21. “Safely Interruptible Agents,” http://intelligence.org/files/Interruptibility.pdf.

22. Partnership on Artificial Intelligence to Benefit People and Society, https://www.partnershiponai.org/.

23. The Partnership on AI's stated tenets, https://www.partnershiponai.org/tenets.