Consequently, this technoscientific context is conducive to the development of an increasingly influential international cultural and intellectual movement, namely transhumanism, whose objective is to improve the physical and mental characteristics of human beings by relying on biotechnologies and other emerging technologies. This current of thought considers that certain states of the human condition, such as illness, disability, pain, aging and death, are not inevitable and can be corrected or even eliminated.
Thus, technological revolutions have enabled a change of scale in the exploitation of digital data, particularly in the field of genetics. Data can now be produced in large quantities, with increasing precision, and preserved for an indefinite period of time. Advances in computer science have made it possible, through the creation of specific programs, for databases to be interoperable, thus allowing the fusion of data from many different sources. To this, we can add the development of new ways of accessing data, in particular through the multiplication of data sources of all kinds. Crowdsourcing is becoming one of the new devices allowing easy, real-time access to digital data for the development of research (Khare et al. 2015).
ALGORITHMIC PROCESSING.–
Algorithmic processing is a finite and unambiguous sequence of operations or instructions for solving a problem or obtaining a result. Algorithms are found today in many applications, such as computer operation, cryptography, information routing, resource planning and optimization, image processing, word processing and so on. An algorithm is a general method for solving a set of problems. It is said to be correct when, for each instance of the problem, it produces the correct output, i.e. it solves the problem.
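As a minimal illustration of these properties, the following Python sketch implements Euclid's algorithm: a finite, unambiguous sequence of instructions that is correct in the above sense, because it produces the greatest common divisor for every valid input.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, unambiguous sequence of steps.

    Correctness: gcd(a, b) == gcd(b, a % b), and the second argument
    strictly decreases, so the loop always terminates with the answer.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```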
BIG DATA.–
Big Data, or megadata, sometimes referred to as massive data, refers to data sets that become so large that they are difficult to process with traditional database or information management tools. The term Big Data refers to a new discipline at the crossroads of several sectors, such as technology, statistics, databases and business (marketing, finance, health, human resources, etc.). This phenomenon can be defined according to seven characteristics, the 7Vs (volume, variety, velocity, veracity, visualization, variability, value).
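To make the "too large for traditional tools" idea concrete, here is a hedged Python sketch that aggregates a file too big to load at once by streaming it row by row; the file name and column layout are illustrative assumptions, not taken from the source.

```python
# Sketch: aggregating a dataset too large to fit in memory by reading it
# one row at a time instead of loading it all at once.
# The file name "events.csv" and its column layout are assumptions.
import csv

def total_by_key(path: str, key_col: int, value_col: int) -> dict:
    totals: dict = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:  # one row at a time: constant memory use
            key = row[key_col]
            totals[key] = totals.get(key, 0.0) + float(row[value_col])
    return totals

# Example call (assuming the hypothetical file exists):
# print(total_by_key("events.csv", key_col=0, value_col=2))
```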
BLOCKCHAIN.–
A blockchain is a chain of computer blocks, each of which contains the identifier of its predecessor, which protects the chain against any modification. Each block records a set of data such as a date, a cryptographic signature associated with the sender and a whole set of other specific elements. All these exchanges can be traced, consulted and downloaded free of charge on the Internet by anyone who wishes to check the validity and non-falsification of the database in real time. The major advantage of this device is the ability to store a proof of information with each transaction, in order to be able to prove, later and at any moment, the existence and content of the original information at a given time. Its mission is, therefore, to create trust by protocolizing a digital asset or database and making it auditable.
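A minimal Python sketch of this chaining principle follows; the field names are illustrative assumptions. Because each block stores the hash of its predecessor, altering any past block invalidates every later identifier, which is what makes the chain auditable.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Identifier of a block: SHA-256 of its serialized content."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(prev, data: str) -> dict:
    return {
        "timestamp": time.time(),  # date of the record
        "data": data,              # content whose existence can be proved later
        "prev_hash": block_hash(prev) if prev else "0" * 64,  # link to predecessor
    }

genesis = new_block(None, "origin")
second = new_block(genesis, "transfer: Alice -> Bob")

# Tampering with `genesis` would change block_hash(genesis), which would
# no longer match second["prev_hash"]: any falsification is detectable.
assert second["prev_hash"] == block_hash(genesis)
```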
CROWDSOURCING.–
A practice that consists of appealing to the general public or consumers to propose and create elements of the marketing policy (brand choice, slogan creation, video creation, product ideation/co-creation, etc.) or even to carry out marketing services. Within the framework of crowdsourcing, professional or amateur contributors may be rewarded or remunerated when their creations are chosen by the advertiser, or sometimes simply recognized for their participation effort. Crowdsourcing has developed especially with the Internet, which facilitates soliciting consumers or freelancers through specialized platforms.
AI appears as an essential evolution in the processing of digital information. It represents the part of computing dedicated to the automation of intelligent behaviors, searching for ways to endow computer systems with intellectual capacities comparable to those of human beings. AI must be capable of learning, adapting and changing its behavior.
The idea of building autonomous machines probably dates back to Greek antiquity, with the automatons built by Hephaestus reported notably in the Iliad (Marcinkowski and Wilgaux 2004). For Brian Krzanich, President and CEO of Intel (the world's leading microprocessor manufacturer), AI is not only the next tidal wave in computing, but also the next major turning point in the history of humankind. It is not a classic computer program: it is educated rather than programmed. It is clear that the debate around AI has mixed fantasy, science fiction and long-term futurology, sometimes losing sight of its basic definitions.
Thus, the concept of AI is to develop computer programs capable of performing tasks that, when performed by humans, require learning, memory organization and reasoning. The objective is to give machines notions of rationality and reasoning, as well as perception functions (e.g. visual), for instance to control a robot in an unknown environment. Its popularity is associated with new techniques such as deep learning, which gives a program the ability to learn to represent the world by means of a network of virtual neurons, each performing an elementary calculation, in a way similar to our brain.
DEEP LEARNING.–
This algorithmic system has been used for more than 20 years for different tasks in the form of neural networks, in particular to "learn". A neuron represents a simple function that takes different inputs, calculates a result and sends it to different outputs. These neurons are mainly structured and organized in layers. The first layer works on almost raw data, and the last layer produces a result. The more layers there are, the greater the learning and performance capacity. Consider the example of recognizing handwritten characters. The first layer takes into account all the pixels that make up a written character (for example, a letter or a number), each neuron having a few pixels to analyze. The last layer indicates "it's a T with a probability of 0.8" or "it's an I with a probability of 0.3". A backpropagation operation is then performed from the final result to adjust the parameters of each neuron.
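A minimal NumPy sketch of this layered scheme follows, under assumed sizes (784 input pixels for a 28×28 character, 32 hidden neurons, 10 output classes): a forward pass through two layers, then one backpropagation step that adjusts each neuron's parameters from the output error. It is an illustration of the principle, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 784 input pixels (28x28 character), 32 hidden neurons, 10 classes.
W1, b1 = rng.normal(0, 0.1, (32, 784)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (10, 32)), np.zeros(10)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)   # first layer: near-raw pixels in
    scores = W2 @ h + b2             # last layer: one score per class
    p = np.exp(scores - scores.max())
    return h, p / p.sum()            # e.g. "it's a T with probability 0.8"

def backprop_step(x, target, lr=0.01):
    """One backpropagation pass: push the output error back to every weight."""
    global W1, b1, W2, b2
    h, p = forward(x)
    d_scores = p.copy()
    d_scores[target] -= 1.0                  # gradient of the cross-entropy loss
    dW2 = np.outer(d_scores, h)
    dh = (W2.T @ d_scores) * (h > 0)         # propagate back through the ReLU
    dW1 = np.outer(dh, x)
    W2 -= lr * dW2; b2 -= lr * d_scores
    W1 -= lr * dW1; b1 -= lr * dh

x = rng.random(784)         # stand-in for one handwritten character
backprop_step(x, target=3)  # reinforce the correct class for this sample
```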
The machine is programmed to “learn to learn”. AI does not exist to replace people, but to complement, assist, optimize and extend human capabilities. There are two types of AI:
– weak AI: its objective is to rid people of tedious tasks, using a computer program reproducing a specific behavior. This AI is fast to program, very powerful, but without any possibility of evolution. It is the current AI;
– strong AI: its objective is to build increasingly autonomous systems, or algorithms capable of solving problems. It is the approach most similar to human behavior. This AI learns and adapts very easily. Thanks to algorithmic feedback loops, the machine can modify the internal parameters it uses to build the representation of each stratum from the representation of the previous one. These strata of features are learned by the machine itself, not by humans. From this postulate, we can say that the machine becomes autonomous and intelligent, constructing its own computational structures and relying on axiomatic decisions. It is the future AI, which should be developed within about 10 years.