Cyberphysical Smart Cities Infrastructures. Group of Authors.


standards set by consensus of a large group should include ethical implications, and the machine learning (ML) code that powers AI should incorporate these ethics. While Bryson and Winfield discuss the importance of such ethical standards, they do not specify what the ethics themselves should be, leaving them open to interpretation. This chapter examines that gap in an effort to establish a baseline.

      Continuing the exploration of the ethical dilemmas posed by AI technology, in February 2019 the AMA Journal of Ethics published an article entitled “Ethical Dimensions of Using Artificial Intelligence in Health Care” [2]. The article explores the role that AI plays in healthcare, as well as its ethical implications, and focuses mainly on finding a balance between the benefits of AI technology and the risks inherent to it.

      In the official document, the National Artificial Intelligence Research and Development Strategic Plan [4], the future of AI is laid out. Across eight strategies, the National Science and Technology Council outlines the steps it considers priorities for Federal investment: “The Federal Government must therefore continually reevaluate its priorities for AI R&D investments to ensure that investments continue to advance the cutting edge of the field and are not unnecessarily duplicative of industry investments” [4]. Of the eight strategies, seven are carried over from the 2016 report. Because those seven are not new, the focus here will be on the eighth, and only new, one. The eighth strategy concerns the partnership between the federal government and academia, industry, and others involved in AI research and development, aimed at continuing to generate breakthroughs. The plan also addresses ethics in AI, and that material will be drawn on as the topic is explored.

      In his article, “Hacking AI: Rethinking cybersecurity for artificial intelligence” [5], Davey Gibian explores why traditional cybersecurity is insufficient for evolving AI technologies. He argues that AI cybersecurity requires “two algorithm‐level considerations: robustness and explainability” [5]. One interesting point he makes under robustness concerns eliminating bias as part of AI cybersecurity. This chapter will examine how such bias can be caused by the ethics implemented into AI.

Figure: Schematic illustration of the intersection of AI, healthcare, and cybersecurity.

      Although AI development began in the 1950s, 70 years later we still do not have fully functional robots walking around. The reason is that early AI innovators were limited by the technology of their time: processor speed, memory, storage space, cost, and availability were all obstacles they had to overcome. As computers became faster, smaller, and cheaper, AI was able to move forward, clearing hurdles that had previously blocked its path. Many early AI programs were developed to play board games such as chess or checkers; these games have limited rules and are therefore easy to learn, but they take significant intelligence to master [7]. By the 1960s, AI had been established as a field, and many were working toward defining this new frontier.

      Thanks in large part to Hollywood and science fiction, AI is synonymous in many people's minds with walking, talking robots. However, AI extends beyond robotics into machine learning and natural language processing, all of which find applications in the healthcare field [2]. Care robots, or “carebots,” do exist, but they are far from the androids that appear in Westworld. There are several schools of thought on how to classify AI in healthcare. One perspective sorts AI into three categories: diagnosis, clinical decision making, and personalized machines [2]. Another holds that there are two main categories, virtual and physical, each with its own subcategories [3].

      Before defining AI's role, it is important to understand its capabilities. Using ML, AI can process large amounts of data and look for patterns, including patterns that are often missed or overlooked by humans. For many, this pattern identification serves as a secondary consult to confirm a doctor's diagnosis [2]. Healthcare professionals (HCPs) place an inherent trust in these AI and ML systems. HCPs are often overworked and understaffed, so using AI to confirm a diagnosis means they are assuming the following: (i) the machines were coded correctly and tested, so that they identify patterns correctly and perform as expected; (ii) those who wrote the code have at least some understanding of healthcare; and (iii) the machines have not been tampered with. Later in this chapter, the third point of trust will be addressed, and whether that trust is wrongfully placed. This chapter does not investigate the manufacturing of these machines, so for the purposes of this topic the first point of trust will be assumed valid. The second point of trust, however, raises an obvious issue. For many, practicing medicine is a lifelong journey of continual education, and as things stand, most doctors do not know how to code at the level of creating AI. This means that HCPs and programmers must work together to identify the trends, patterns, and known symptomatic associations that the AI would use. This limitation is likely why AI is kept in the passenger seat as a secondary diagnostician, rather than replacing HCPs as the primary one.
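      To make the "secondary consult" idea concrete, the toy sketch below shows one way such a check could be structured. It is purely illustrative: the symptom-condition table, the function name `second_opinion`, and the overlap scoring are all hypothetical simplifications invented for this example (real clinical AI uses learned ML models over large datasets, not a hand-written lookup), and the medical associations shown are placeholders, not clinical fact.

```python
# Hypothetical sketch of an AI "second opinion" that compares a clinician's
# diagnosis against known symptom-condition associations.
# The associations below are illustrative placeholders, not medical advice.
KNOWN_ASSOCIATIONS = {
    "influenza": {"fever", "cough", "fatigue"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def second_opinion(symptoms, clinician_diagnosis):
    """Score each known condition by the fraction of its pattern present
    in the observed symptoms, then report whether the best-matching
    condition agrees with the clinician's diagnosis."""
    scores = {
        condition: len(set(symptoms) & pattern) / len(pattern)
        for condition, pattern in KNOWN_ASSOCIATIONS.items()
    }
    best = max(scores, key=scores.get)
    return {
        "best_match": best,
        "confidence": scores[best],
        "agrees_with_clinician": best == clinician_diagnosis,
    }

# The AI confirms (or flags disagreement with) the human diagnosis;
# it does not replace the clinician as the primary diagnostician.
result = second_opinion({"fever", "cough", "fatigue"}, "influenza")
```

Note how the design mirrors the trust assumptions above: the output is only as good as the encoded associations (point ii) and the integrity of the code and data (points i and iii).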
