
      Weak or narrow AI simulates specific cognitive abilities such as natural language comprehension, speech recognition or driving. It performs only the tasks for which it has been programmed, and is therefore highly specialized. It is a machine for which the physical world is somewhat enigmatic, even ghostly, if it perceives it at all; it has no awareness of time. This AI is not intelligent in itself: it works only on the basis of scenarios pre-established by its designers and developers.

      STRONG AI.–

      From a general point of view, AI can be pictured as an algorithmic matrix that aims to optimize decisions “justly or coldly”. Naturally, the morality or fairness of such a judgment is not predefined: it depends, on the one hand, on the way in which the rules are learned (the objective criterion that has been chosen) and, on the other hand, on the way in which the learning sample has been constructed. The choice of the mathematical rules used to create the model is therefore crucial. Just as a human analyzes a situation before changing their behavior, AI allows the machine to learn from its own results in order to modify its programming. This technology already exists in many applications, such as on our smartphones, and should soon extend to all areas of daily life: from medicine to the autonomous car, by way of artistic creation, mass retail, and the fight against crime and terrorism. Machine learning not only makes it possible to exploit large amounts of data automatically and identify habits in consumer behavior; we can now also act on these data.
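      To make the dependence on the learning sample concrete, consider the following minimal sketch (in Python, assuming scikit-learn is available; the data and scenario are hypothetical, not drawn from the text). The same algorithm, trained with the same objective on two differently constructed samples, learns two different decision rules:

```python
# A minimal sketch (hypothetical data): the same algorithm, trained with the
# same objective on two differently constructed samples, learns two
# different decision rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical scored population: one feature, one binary outcome.
X = rng.normal(size=(2000, 1))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

# Sample A is drawn uniformly; in sample B, positive cases are
# under-represented (a common way data collection skews a learning sample).
idx_a = rng.choice(2000, size=800, replace=False)
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
idx_b = np.concatenate([rng.choice(pos, 100, replace=False),
                        rng.choice(neg, 700, replace=False)])

model_a = LogisticRegression().fit(X[idx_a], y[idx_a])
model_b = LogisticRegression().fit(X[idx_b], y[idx_b])

# Identical algorithm and objective, different samples: borderline cases
# are scored differently.
x_borderline = np.array([[0.1]])
print(model_a.predict_proba(x_borderline))
print(model_b.predict_proba(x_borderline))
```

      Borderline cases thus receive different scores purely because of how the sample was assembled, which is why the construction of the learning sample is an ethical choice as much as a technical one.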

      MACHINE LEARNING.–

      Machine learning concerns the design, analysis, development and implementation of methods that allow a machine (in the broadest sense) to evolve through a systematic process and thus perform tasks that would be difficult or impossible to accomplish by more traditional algorithmic means. The algorithms used allow, to a certain extent, a computer-controlled system (possibly a robot) or a computer-assisted system to adapt its analyses and response behaviors based on the analysis of empirical data drawn from a database or from sensors.
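      As a minimal illustration of this definition (a sketch only: the sensor stream is simulated and the function name is hypothetical), the following Python fragment shows a system that adapts its response behavior incrementally as new empirical observations arrive, rather than being re-programmed:

```python
# A minimal sketch of a system that adapts its behavior as new empirical
# observations arrive. The sensor stream is simulated and the names are
# hypothetical; a real system would read from a database or from sensors.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()            # incremental linear classifier
classes = np.array([0, 1])

def read_sensor_batch(n=32):
    """Hypothetical stand-in for a batch of sensor readings."""
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, -0.5, 0.2]) > 0).astype(int)
    return X, y

# The system refines its decision rule with each batch instead of being
# re-programmed: the "systematic process" of the definition above.
for _ in range(50):
    X_batch, y_batch = read_sensor_batch()
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.normal(size=(5, 3))))
```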

      In our view, adopting machine learning is no longer merely useful: it is a necessity. In light of the digital transition and this “war of intelligences” (Alexandre 2017), companies will thus undergo a major transformation and will invest in AI applications in order to:

       – increase human expertise via virtual assistance programs;

       – optimize certain products and services;

       – bring new perspectives in R&D through the evolution of self-learning systems.

      Thus, even if ethical recommendations currently have little impact on the functional scope of AI and introduce an additional level of complexity into the design of self-learning systems, it will become essential in the future to design and integrate ethical criteria into digital projects related to AI.

      To this can be added documents focusing on ethical principles related to AI, such as:

       – the Asilomar AI Principles, developed at the Future of Life Institute, in collaboration with attendees of the high-level Asilomar conference of January 2017 (hereafter “Asilomar” refers to Asilomar AI Principles, 2017);

       – the ethical principles proposed in the Declaration on Artificial Intelligence, Robotics and Autonomous Systems, published by the European Group on Ethics in Science and New Technologies of the European Commission, in March 2018;

       – the principles set out by the High-Level Expert Group on AI, via a report entitled “Ethics Guidelines for Trustworthy AI”, for the European Commission, December 18, 2018;

       – the Montreal Declaration for AI, developed at the University of Montreal, following the Forum on the Socially Responsible Development of AI of November 2017 (hereafter “Montreal” refers to Montreal Declaration, 2017);

       – the best practices in AI of the Partnership on AI, the multi-stakeholder organization – composed of academics, researchers, civil society organizations and companies building and utilizing AI – that, in 2018, studied and formulated best practices in AI technologies. The objective was to improve public understanding of AI and to serve as an open platform for discussion and engagement on AI and its influences on individuals and society;

       – the “five fundamental principles for an AI code”, proposed in paragraph 417 of the UK House of Lords Artificial Intelligence Committee’s report, “AI in the UK: Ready, Willing and Able”, published in April 2018 (hereafter “AIUK” refers to House of Lords, 2018);

       – the ethical charter drawn up by the European Commission for the Efficiency of Justice (CEPEJ) on the use of AI in judicial systems and their environment. It is the first European text setting out ethical principles relating to the use of AI in judicial systems (see Appendix 1);

       – the ethical principles of Luciano Floridi et al. in their article entitled “AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, December 2018;

       – the OPECST (Office parlementaire d’évaluation des choix scientifiques et technologiques) report (De Ganay and Gillot 2017);

       – the six practical recommendations of the report of the CNIL (Commission nationale de l’informatique et des libertés) on the ethical issues of algorithms and AI, drafted in 2017 (see Appendix 2);

       – the report published by the French member of parliament Cédric Villani (2018) on AI;

       – the Declaration on Ethics and Data Protection in the Artificial Intelligence Sector, adopted on October 23, 2018, at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels;

       – the seven guidelines developed by the European High-Level Expert Group on AI, published on April 8, 2019 by the European Commission;
