
administrative tasks.10,11,12

      Automation and innovation

      Automation and the corresponding use of algorithms with deep learning abilities are also penetrating other industries. The legal sector is another area where many discussions are taking place about how and whether to automate services. Legal counsellors have started to use automated advisors to contest relatively small fines such as parking tickets.

      The legal sector is also considering the use of AI to help judges go through the evidence collected to reach a verdict in court cases. Here, algorithms are expected to help present the evidence needed to make decisions that involve the interests of different stakeholders. The fact that such decisions may become automated should make us aware that automation in the legal sector introduces risks and challenges. Indeed, such use of algorithms may put autonomous learning machines well on the way to influencing fair decision-making within the framework of the law. Needless to say, if questions about human rights and duties gradually become automated, we will enter a potentially risky era in which human values and priorities could be challenged.

      It is not only that banks have embraced technology to such an extent that it has significantly transformed the workings of their industry. It also works the other way around: technology companies are now moving into the financial industry. Indeed, tech companies are becoming banks. Take recent examples such as Alibaba (BABA), Facebook (FB) and Amazon (AMZN); all are moving into providing financial services and products.

      Us versus them?

      Putting all these developments together makes it clear that the basic cognitive skills and physical abilities that humans have always brought to the table are about to become a thing of the past. These abilities are vulnerable to being automated and optimized further by fast-processing, learning machines. It is this vision – widely advocated in the popular press – that makes many of us wonder where the limits of automation lie, if there are any. After all, if even the skills and abilities that are essential to what makes us human seem ready to be replaced by AI, and this new technology is able to engage in deep learning and thus continuously improve, what will be left for humans in the future?

      This reflection is not a new one. In fact, it has been around for quite some time. Indeed, in 1965 British mathematician I.J. Good wrote, “An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In all fairness, such speculation introduces several existential questions. And it is those kinds of questions that make people very nervous today about the future of humanity in an ecosystem where technology that may overtake us has arrived. In fact, it introduces a potential conflict of interest that will make it hard for us to choose.

      On one hand, we are clearly obsessed with the power of AI to bring many benefits to our organizations and society. On the other hand, however, this obsession also creates a moment of reflection that worries us. A reflection that confronts us with the realization that human limitations can be overcome by technology; ultimately, this means that applying technology may render humans obsolete. In our pursuit of more profit and growth, and our desire to increase efficiency, we may be confronted with a sense of disappointment about what it actually means to be human.

      This kind of reflective and critical thinking about humanity makes clear that although we fear being replaced, we do look at humans and machines as two different entities. We make a big distinction between humans as us and machines as them. Because of this sentiment, it is clear that the idea of we (humans and machines together) may be difficult to accept. So, if this is the case, how on earth can we talk about a partnership between humans and machines? If we think we are so different that becoming one is impossible, coexistence will be the best situation possible. But even coexistence is feared by many, because this may still lead to humans being replaced by the superior machine.

      All these concerns point to the fact that we consider humans to be actors limited in their abilities, whereas we regard machines as entities that can develop and reach heights that humans ultimately cannot. But is this a valid assumption? What does science say? Much of the research out there seems to provide evidence that this view may indeed be valid. Studies do suggest that if we look at how people judge the potential of new technology, approach its functionality and predict how they will use it in the future, the conclusion seems to be that humans fear being outperformed. Why does science suggest such a conclusion?
