Leadership by Algorithm. David De Cremer

In a sense, the rapid development of AI and its many applications gives us a peek into a future where our society will function in a completely different way. With the arrival of AI, we can already glimpse a future that forces all of us to act now. AI is the kind of disruptive innovation where, if you do not start changing your ways of working today, there may not even be a future for you tomorrow.

      While this may come across as somewhat threatening, it is a future that we have to take seriously. If Moore’s law – the idea that the overall processing power of computers doubles roughly every two years – continues to hold, then in the next decade we should be ready to witness dramatic changes in how we live and work together. All of this buzz has made me – just as when I met the very ambitious executive – curious about a technology-driven future. For me, AI acts as a time machine, helping us to see what could be, at a moment in time when we still have to build that future ourselves. And this is an interesting thought.

      Why?

      Well, if we consider AI as a kind of time machine, giving us a peek into the future, we should use it to our benefit. Use it in a way that can help us to be conscious and careful about how we design, develop and apply AI. Because once the future sets in, the past may be remembered, but it will be gone.

      Today, we still live in a time where we can have an impact on technology. Why am I saying this? Let me respond to this question by referring to a series on Netflix that I very much enjoyed watching. The series is called Timeless and describes the adventures of a team that wants to stop a mysterious organization, called Rittenhouse, from changing history by making use of a time machine.

      In the first episode, the relevance to our discussion in this book is obvious right away. There, one of the main characters, Lucy Preston, a history professor, is introduced to Connor Mason, the inventor of a time machine. Mason explains that certain individuals have taken control of a time machine, called the Mothership, and gone back in time. With a certain weight in his voice, he makes clear that “history will change”. Everyone in the room is aware of the magnitude of his words and realizes the consequences that this will have on the world, society and maybe even their own lives.

      Lucy Preston responds emotionally by asking why he would be so stupid as to invent something so dangerous. Why invent technology that could hurt the human race in such significant ways (i.e. changing its own history)? The answer from Mason is as clear as it is simple: he didn’t count on this happening. And, isn’t this how it usually goes with significant technological innovations? Blinded by the endless opportunities, we don’t want to waste any time and only look at what technology may be capable of. The consequences of an unchecked technology revolution for humanity are usually not addressed.

      Can we expect the same thing with AI? Are we fully aware of the implications for humanity if society becomes smart and automated? Are we focusing too much on developing a human-like intelligence that can surpass real human intelligence in both specific and general ways? And, are we doing so without fully considering the development and application dangers of AI?

      As with every significant change, there are pros and cons. Not too long ago, I attended a debate where the prospects of a smart society were discussed. Initially the focus was entirely on the cost reductions and efficiencies that AI applications would bring. Everyone was happy so far.

      At one point in the debate, however, someone in the audience asked whether we shouldn’t evaluate AI more critically in terms of its functionality for us as human beings, rather than on maximizing the abilities of the technology itself. One speaker responded loudly with the comment that AI should definitely tackle humanity’s problems (e.g. climate change, population size, food scarcity and so forth), but its development should not be slowed down by anticipatory thoughts on how it would impact humanity itself. As you can imagine, the debate suddenly became much more heated. Two camps formed relatively quickly. One camp advocated racing to maximize AI abilities as fast as possible (thus discounting long-term consequences for humanity), whereas the other camp insisted that social responsibility must take precedence over maximizing technology deployment.

      Who is right? In my view, both perspectives make sense. On the one hand, we do want to have the best technology and maximize its effectiveness. On the other hand, we also want to ensure that the technology being developed will serve humanity in its existence, rather than potentially undermining it.

      So, how to solve this dilemma?

      In this book, I want to delve deeper into this question and see how it may impact the way we run our teams, institutions and organizations, and what choices we will have to make. It is my belief that in order to address the question of how to proceed in the development and application of algorithms in our daily activities, we need to agree on the purpose of the technology development itself. What purpose does AI serve for humanity, and how will this purpose shape the technology? This kind of exercise is necessary to avoid two possible outcomes that I have been thinking about for years.

      First, we do not want to run the risk that the rapid development of AI technologies creates a future where our human identity is slowly removed and a humane society becomes a thing of the past. Like Connor Mason’s time machine, which altered human history, the mindless development of AI technology, with little awareness of its consequences for humanity, runs the same risk.

      Second, we do want to push the limits of technological advancement so that AI augments our abilities and thus serves the development of a more (and not less) humane society. From that point of view, the development of AI should not be seen as a way to solve the mess we create today, but rather as a means of creating opportunities that will improve the human condition. Just as the executive I met as a young scholar proclaimed that technology is developed to deal with the problems that we create, AI technology developed with the sole aim of maximizing efficiency and minimizing errors will reduce the human presence rather than augment human ability.

      Putting these two possible outcomes together made me realize that the purpose served by investing so much in AI technology advancement should not be to make our society less humane and more efficient in eliminating mistakes and failures. This would result in humankind having to remove itself from its place in the world to be replaced by another type of intelligence not burdened by human flaws. If this were to happen, our organizations and society would ultimately be run by technology. What will our place in society be then?

      In this book, I will address these questions by unravelling the complex relationship that exists between our human desire to constantly evolve, on the one hand, and the drive for fairness and co-operation, on the other. Humans have an innate motivation to go where no man has gone before. The risk associated with this motivation is that at some point we may lose control of the technology we are building, with the consequence that we will submit to it.

      Will this ever be a reality? Humans as subordinates of the almighty machine? Some signs indicate that it may well happen. Take the example of the South Korean Lee Sedol, who was the world champion at the ancient Chinese board game Go. This board game is highly complex and was long considered beyond the reach of machines. All that changed in 2016, when the computer program AlphaGo beat Lee Sedol four matches to one. The loss against AI made him doubt his own (human) qualities so much that he decided to retire in 2019. So, if even the world champion admits defeat, why would we not expect that one day machines will develop to the point where they run our organizations?

      To tackle this question, I will start from the premise that the leadership we need in a humane society is likely not to emerge through more sophisticated technology. Rather, enlightened leadership will emerge by becoming more sophisticated about human nature and our own unique abilities to design better technology that is used in wise (and not smart) ways.

      Let me take you on a journey, where we will look at what exactly is happening today with AI in our organizations;
