Leadership by Algorithm. David De Cremer
But companies are not investing in complex algorithms only for passive administrative tasks, such as identifying the best candidates to hire. Algorithms are already being used for more active purposes as well. For example, the bank JPMorgan Chase uses algorithms to track employees and assess whether or not they act in line with the company’s compliance regulations.13 Organizations thus see the benefit of applying algorithms to the daily activities of their employees.
As another case in point, companies have set out to use algorithms to track how satisfied employees feel, in order to predict the probability that they will resign. For any organization, this type of data is important and useful in promoting effective management. After all, once the right kind of people are working in the organization, you want to do all you can to keep them. In that respect, an interesting study from the US National Bureau of Economic Research demonstrated that low-skill service-sector workers (a group with low retention rates) stayed in the job 15% longer when an algorithm was used to judge their employability.14
Automation and innovation
Automation and the corresponding use of algorithms with deep learning abilities are also penetrating other industries. The legal sector is another area where many discussions are taking place about how and whether to automate services. Legal counsellors have started to use automated advisors to contest relatively small fines such as parking tickets.
The legal sector is also considering the use of AI to help judges work through the evidence collected to reach a verdict in court cases. Here, algorithms are expected to help present the evidence needed to make decisions that affect the interests of different stakeholders. The prospect that such decisions may become automated should make us aware that automation in the legal sector introduces risks and challenges. Indeed, such use of algorithms may put autonomous learning machines well on the way to influencing fair decisions within the framework of the law. Needless to say, if questions about human rights and duties gradually become automated, we will enter a potentially risky era in which human values and priorities could be challenged.
Another important industry where technology and automated learning machines are quickly becoming part of the ecosystem is financial services. Traders and those responsible for financial and risk management work in an environment where digital adoption and machine learning are no longer the exception.15 Rather, in today’s financial industry, they seem to have become the default. In fact, the use of algorithms to, for example, manage risk analysis or provide personalized products based on customer profiles is unparalleled. It has reached the level where we can confidently say that banks today are technology companies first, and financial institutions second. It is no surprise that the financial industry is forecast to spend nearly $300bn on IT in 2021, up from about $260bn just three years earlier.16
It is not only that banks have embraced technology so thoroughly that it has significantly transformed the workings of their industry. The reverse is also happening: technology companies are now moving into the financial industry. Indeed, tech companies are becoming banks. Take recent examples such as Alibaba (BABA), Facebook (FB), and Amazon (AMZN); all are moving into providing financial services and products.
A final important area where autonomous learning algorithms will make a big difference is healthcare.17 The keeping and administration of medical files is increasingly being automated to provide doctors with interconnected, fast delivery of information.18 Transforming the healthcare industry will also impact medical research, so that better results can be achieved in saving human lives.19 Doctors who use technology to detect disease and subsequently propose treatment will become more accurate and truly evidence-based. For example, in research on improving cancer detection in images of lymph node cells, an AI-only approach had a 7.5% error rate and a human-only approach a 3.5% error rate. The combined approach, however, revealed an error rate of only 0.5% (an 85% reduction relative to the human error rate).20
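The 85% figure is the reduction in error relative to the human baseline, not the AI one. A minimal sketch of that arithmetic, using the error rates cited in the passage (the function and variable names are illustrative, not from the study):

```python
def relative_error_reduction(baseline: float, combined: float) -> float:
    """Fractional reduction in error rate relative to a baseline approach."""
    return (baseline - combined) / baseline

# Error rates cited for the lymph node cell detection study.
ai_only_error = 0.075    # 7.5% error rate, AI-only approach
human_error = 0.035      # 3.5% error rate, human-only approach
combined_error = 0.005   # 0.5% error rate, human + AI combined

# (0.035 - 0.005) / 0.035 ≈ 0.857, reported in the passage as 85%
print(f"{relative_error_reduction(human_error, combined_error):.1%}")  # prints 85.7%
```

The same calculation against the AI-only baseline would give a 93% reduction, which is why the comparison point matters when such figures are quoted.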
Us versus them?
Putting all these developments together makes it clear that the basic cognitive skills and physical abilities that humans have always brought to the table are about to become a thing of the past. These abilities are vulnerable to being automated and further optimized by fast-processing, learning machines. It is this vision – widely advocated in the popular press – that makes many of us wonder where the limits of automation lie, if any exist at all. After all, if even the skills and abilities that are essential to what makes us human seem ready to be replaced by AI, and this new technology is able to engage in deep learning and thus continuously improve, what will be left for humans in the future?
This reflection is not a new one. In fact, it has been around for quite some time. Indeed, in 1965 the British mathematician I.J. Good wrote, “An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In all fairness, such speculation raises several existential questions. And it is those kinds of questions that make people very nervous today about the future of humanity in an ecosystem where technology that may overtake us has arrived. In fact, it introduces a potential conflict of interest that will make it hard for us to choose.
On one hand, we are clearly obsessed with the power of AI to bring many benefits to our organizations and society. On the other hand, however, this obsession also creates a moment of reflection that worries us. A reflection that confronts us with the realization that human limitations can be overcome by technology; ultimately, this means that applying technology may render humans obsolete. In our pursuit of more profit and growth, and our desire to increase efficiency, we may be confronted with a sense of disappointment about what it actually means to be human.
This kind of reflective and critical thinking about humanity makes clear that although we fear being replaced, we do look at humans and machines as two different entities. We make a big distinction between humans as us and machines as them. Because of this sentiment, it is clear that the idea of we (humans and machines together) may be difficult to accept. So, if this is the case, how on earth can we talk about a partnership between humans and machines? If we think we are so different that becoming one is impossible, coexistence will be the best situation possible. But even coexistence is feared by many, because this may still lead to humans being replaced by the superior machine.
All these concerns point out that we consider humans as actors that are limited in their abilities, whereas we regard machines as entities that can develop and reach heights that ultimately humans will be unable to reach. But, is this a valid assumption? What does science say? Much of the research out there seems to provide evidence that this view may indeed be valid. Studies do suggest that if we look at how people judge the potential of new technology, approach its functionality and predict how to use it in the future, the conclusion seems to be that humans fear being outperformed. Why does science suggest such a conclusion?
Since the 1970s, scholars have been providing evidence that human experts do not perform as well as simple linear models in tasks like clinical diagnosis, forecasting graduate students’ success, and other prediction tasks.21,22 Findings like this have led to the idea that algorithmic judgment is superior to expert human judgment.23 For example, research has shown that algorithms deliver more accurate medical diagnoses when detecting heart disease.24,25,26
Furthermore, in the world of business, algorithms prove better at predicting employee performance and which products customers want to buy, and at identifying fake news and misinformation.27,28 An overall analysis of all these effects (what is called a meta-analysis) even reveals that algorithms outperform human forecasters by 10% on average.29 Overall, the evidence suggests that it is (and will increasingly be) the case that algorithms outperform humans.
This scientific evidence, combined with our tendency to think of humans and machines as us versus them, places the question of whether AI will replace people’s jobs at center stage.30 This question is no longer a peripheral one. It dominates many discussions