Leadership by Algorithm. David De Cremer
This led to his 1950 article, ‘Computing Machinery and Intelligence,’ in which he introduced the now-famous Turing test, still regarded today as the crucial test of whether a machine is truly intelligent. In the test, a human participant interacts with both another human and a machine. The participant cannot see either of them and has only their responses to go on. If the participant cannot distinguish the behavior of the machine from that of the other human, then, the argument goes, we may call the machine intelligent. It is these behavioral ideas of Alan Turing that still significantly influence the development of learning algorithms today.
The fact that observable behaviors form the input to learning is not surprising, as behavioral science dominated psychology in Turing’s time. This school of thought refrained from looking inside the human mind. The mind was considered a black box (interestingly enough, the same is said of AI nowadays) because it could not be observed directly. For that reason, scientists of the day argued that the mind should not be studied; only behaviors could be considered true indicators of what humans felt and thought.
To illustrate the dominance of this way of thinking, consider the following joke: two behaviorists walk into a bar. One says to the other: “You’re fine. How am I?” In a similar vein, today we assume that algorithms can learn by analysing data to identify observable patterns. Those patterns teach algorithms the rules of the game. Based on these rules, they make inferences and construct models that guide predictions. Thus, in a way, we could say that algorithms decide and advise on strategies based on the patterns they observe in data. These patterns tell the algorithm what the common behavior is (the rule implicit in the data’s context), and the algorithm then adjusts to it.
Algorithms thus act in line with the observable patterns in the data they are fed. These observable patterns (which reflect the behaviors Turing referred to) do not, however, lead algorithms to learn what lies behind them. In other words, they do not allow algorithms to understand the feelings and the deeper thinking, reflection and pondering that hide beneath the observable behaviors. This means that algorithms can perfectly imitate (hence the title of the movie) and pretend to be human, but can they really be human in the sense of functioning in relationships the way leaders must? Can algorithms, which supposedly display human (learned) behaviors, really survive and function in human social relationships?
Consider the following example. Google Duplex recently demonstrated AI holding a flawless phone conversation while making a dinner reservation.35 The restaurant owner did not have a clue he was talking to an AI. But what would happen if something unexpected occurred during such a conversation? (Note that the mere fact you can imagine such a scenario already sets you apart from the algorithm, which would never consider it.) What if the restaurant owner suddenly had a change of heart and told the AI that he did not want to work that evening, even though the restaurant is listed online as open? Would the AI be able to take his perspective and give a reasonable (human) response?
In all honesty, this seems unlikely. It is one thing for an algorithm to know the behaviors that humans usually show and, based on those observations, develop a behavioral repertoire to deal with most situations. It is quite another to understand the meaning behind human behaviors and respond to them in an equally meaningful way. And here lies the potential limitation of the algorithm as a leader. At this moment, an algorithm cannot understand the meaning of behavior in a given context. AI learns and operates in a context-free way, whereas humans have the ability to account for the situation in which behaviors are shown – and, importantly, we expect this skill from leaders. As Melanie Mitchell notes in her book Artificial Intelligence: A Guide for Thinking Humans: “Even today’s most capable AI systems have crucial limitations. They are good only at narrowly defined tasks and utterly clueless about the world beyond.”
As a side note, this logic of meaning and perspective-taking is something that unfortunately seems to be forgotten by those who claim that we have replaced Descartes’s body and mind, making humans less needed. Yes, Descartes identified body and mind as two separate entities, but he also noted that they are connected. We still use this assumption today when we say a healthy mind makes for a healthy body. But what makes the connection? What is the glue that holds mind and body so closely aligned? In philosophical terms we might say it is the soul: the soul that gives us passion, emotions and a sense of intuitive interpretation of the things we see, do and decide. As such, we may be able to replace the body and the mind, but do the ones replacing us also have the soul to make the whole work? If body and mind cannot connect, then leadership without heart is the consequence.
And, think about it, would you then simply comply and follow orders from an intelligent machine leader? Those who are big fans of Star Trek will know the character Data, an android who is trying to learn how to understand human emotion. In one episode, Data has to take over command of the Starship USS Enterprise. The experience turns out to be a useful lesson for both the android and the human crew in how important human emotions are to leadership.
Today, we have arrived in an era where this scenario may not remain science fiction for much longer. But with such futuristic views of leadership in sight, we also need to decide what kind of society and organizations we would like to see. How do we want to lead them? We need an answer to what leadership means to us and who should take up the leadership position, which includes assessing our own strengths and weaknesses.