This is not to suggest that all the engineering challenges to achieving level-5 operations have been solved—or are even close to being solved. Nor does it imply that the benefits from autonomous vehicles discussed in this book could be achieved only with level-5 autonomous vehicles. Level-4 autonomous vehicles (which are self-driving but operate only under well-specified conditions, such as certain road types or geographic areas) could also provide significant benefits. However, because we take a long-run view of autonomous-vehicle development, testing, and adoption, our main focus here is on level-5 vehicles.
In theory, a vehicle operating at level 5 is likely to draw on a combination of technologies to drive itself, as illustrated in figure 2-1. Sensors on board the vehicle use radio waves (radar), light waves (light detection and ranging, or LIDAR), and photography to measure the distance of the car from various objects, such as pedestrians, bicyclists, and other cars. An onboard computer processes this and other information noted below in real time and executes plans to proceed safely toward the vehicle's destination. Figure 2-2 illustrates what the car sees so it can operate in traffic. The global positioning system (GPS), supplemented with highly detailed digital maps, pinpoints the vehicle's location on the road. Communications between vehicles (V2V) and between vehicles and roadway infrastructure (V2I) help inform cars of the location and intentions of other vehicles as well as the condition of the roadway and the status of traffic signals.
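To make this division of labor concrete, here is a minimal Python sketch of one perceive-plan cycle. It is purely illustrative: the Detection record and plan_speed policy are hypothetical stand-ins for the fused radar, LIDAR, and camera output and the planning software described above, not any manufacturer's actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """An object perceived by the fused sensor suite (hypothetical record)."""
    kind: str          # "pedestrian", "cyclist", "car", ...
    distance_m: float  # distance from the vehicle, meters
    speed_mps: float   # the object's own speed, meters per second

def plan_speed(detections, cruise_mps=13.0, stop_gap_m=8.0, slow_gap_m=25.0):
    """Pick a target speed from the current detections.

    A deliberately simple policy: stop for anything closer than
    stop_gap_m, slow down proportionally inside slow_gap_m,
    otherwise hold cruising speed.
    """
    nearest = min((d.distance_m for d in detections), default=float("inf"))
    if nearest < stop_gap_m:
        return 0.0
    if nearest < slow_gap_m:
        return cruise_mps * (nearest - stop_gap_m) / (slow_gap_m - stop_gap_m)
    return cruise_mps

# One cycle of the perceive-plan loop on invented sensor output.
fused = [Detection("pedestrian", 18.0, 1.4), Detection("car", 42.0, 12.0)]
print(f"target speed: {plan_speed(fused):.1f} m/s")
```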
Improving the Technology and Overcoming Challenges
Automakers, technology companies, and research universities are continuing to explore ways that autonomous vehicles could be improved to operate safely in all driving conditions and in response to all behaviors they are likely to encounter, including individuals driving nonautonomous vehicles, pedestrians, and bicyclists. According to Transportation Secretary Elaine Chao, more than 1,400 self-driving cars, trucks, and other vehicles are currently being tested by more than eighty companies across thirty-six states and the District of Columbia.1 Generally, the industry has evolved to combine a simulation approach and a vehicle-miles-driven approach to testing and improving vehicles, which allows for more testing in a wider variety of driving environments.
For example, Waymo, Alphabet’s self-driving-car unit, is using simulation to teach its autonomous vehicles how to respond to a situation that they have not encountered before. Once a car actually drives and redrives that specific situation and its many variations, the skill is added to its knowledge base and shared with Waymo’s network of self-driving cars. Researchers are also teaching self-driving cars to recognize and predict pedestrian movements with great precision by creating a “biomechanically inspired recurrent neural network” that catalogs human movements.2 With this capability, the cars can predict poses and future locations for one or several pedestrians up to fifty yards from the vehicle.
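The pedestrian-prediction idea can be sketched in code. The following toy PyTorch model is only a hedged illustration of the general recurrent approach (observe a short track of positions, predict the next few); the biomechanically inspired network described in the text is far richer, and the class name and layer sizes here are invented for the example.

```python
import torch
import torch.nn as nn

class PedestrianPredictor(nn.Module):
    """Toy recurrent model: read T observed (x, y) positions of a
    pedestrian and predict the next `horizon` positions."""

    def __init__(self, hidden=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.gru = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * horizon)

    def forward(self, track):            # track: (batch, T, 2)
        _, h = self.gru(track)           # h: (1, batch, hidden)
        out = self.head(h.squeeze(0))    # (batch, 2 * horizon)
        return out.view(-1, self.horizon, 2)

# Predict 5 future positions from 10 observed ones. The weights are
# untrained, so the numbers are illustrative only.
model = PedestrianPredictor()
observed = torch.randn(1, 10, 2)   # one pedestrian track, meters
future = model(observed)           # shape: (1, 5, 2)
print(future.shape)
```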
Figure 2-1
How a Car Drives Itself
Note: Car is a Lexus model modified by Google. Source: Google; Guilbert Gates/The New York Times.
Some companies, such as Aeva, are developing next-generation LIDAR that can more accurately measure a car's distance from surrounding objects (pedestrians, cyclists, and other vehicles) and those objects' velocity, and can predict their future motion with less error. Luminar is developing a LIDAR technology that can detect whether a pedestrian, for example, is on his or her phone and not paying attention to roadway conditions; this capability provides an additional visual cue that an autonomous vehicle could use to make decisions, such as whether to slow down. New artificial-intelligence cameras enable autonomous vehicles to recognize images much faster and to make quicker decisions, and the next generation of LIDAR sensors is being collapsed onto a single chip, which will greatly reduce their cost by facilitating mass production and by eliminating moving parts that may break. Finally, Nvidia Corporation has built a powerful new computer, code-named Pegasus, capable of quickly processing information on a vehicle's surrounding environment, enabling it to operate safely as a fully autonomous vehicle.
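The physics behind a velocity-measuring LIDAR such as Aeva's can be illustrated with two textbook relations: range follows from a signal's round-trip time, and radial velocity from the Doppler shift of the return. The sketch below shows only these basic formulas, not Aeva's actual frequency-modulated implementation.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a target from a pulse's round-trip time: d = c*t/2."""
    return C * t_seconds / 2.0

def radial_velocity_from_doppler(freq_shift_hz: float, wavelength_m: float) -> float:
    """Radial speed of a reflector from the Doppler shift of the return:
    v = (wavelength * shift) / 2. The factor of 2 appears because the
    wave travels to the target and back."""
    return wavelength_m * freq_shift_hz / 2.0

# A 0.5-microsecond round trip puts an object about 75 m away; a 2 MHz
# Doppler shift on a 1550 nm laser implies about 1.55 m/s closing speed.
print(range_from_round_trip(0.5e-6))                 # ~74.9 m
print(radial_velocity_from_doppler(2.0e6, 1550e-9))  # ~1.55 m/s
```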
Figure 2-2
What the Car Sees
Note: The car's sensors gather data on nearby objects, such as their size and rate of speed, if any. The sensors categorize the objects (as cyclists, pedestrians, or other cars and objects) based on how they behave and transmit signals as to how to respond. Source: Google; Guilbert Gates/The New York Times.
Certain industry participants are taking steps to address LIDAR's shortcomings in specific environments and situations. For example, LIDAR does not detect black cars as reliably as it detects vehicles of other colors. PPG Industries has therefore developed a paint that allows the near-infrared light emitted by lasers to pass through a dark car's exterior and rebound off a reflective undercoat, making the car visible to sensors. The company is also developing other coatings to improve sensors' performance when it is degraded by dirt and ice. In addition, LIDAR has difficulty measuring distances between objects in whiteout conditions. Martti, an autonomous car developed in Finland, uses a newly developed radar system that enables it to drive safely on snow-covered roads, and WaveSense, a Boston-area start-up, has developed a ground-penetrating radar system to keep autonomous vehicles on the road regardless of the weather. Finally, the MIT Media Lab is developing a new imaging system that can gauge the distance of objects obscured by thick fog.
Autonomous vehicles currently rely on either highly detailed 3D maps that tell the system what to expect or well-marked lanes that they can navigate in an urban or highway environment. But many roads are unpaved, lack lane markings, or have not been 3D-mapped in detail. Mack (2018) reports that MIT has begun to address this limitation by developing MapLite, which combines GPS and only the most basic topographical maps from OpenStreetMap with LIDAR and IMU (inertial measurement unit) sensors that monitor road conditions. In addition, Mississippi State University's Center for Advanced Vehicular Systems has developed a simulator to collect data to help autonomous vehicles recognize realistic off-road landscapes; it is also developing a test track for off-road vehicle testing.3
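A rough sense of the MapLite idea (steer toward sparse map waypoints while onboard sensing keeps the car on the road) can be conveyed in a few lines of Python. This is a hedged caricature under invented parameters, not MIT's algorithm: the steer function and its edge-margin logic are hypothetical.

```python
import math

def heading_to_waypoint(pos, waypoint):
    """Bearing (radians) from the vehicle's GPS fix to the next
    sparse-map waypoint."""
    return math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])

def steer(pos, heading, waypoint, left_edge_m, right_edge_m, margin_m=1.0):
    """Blend goal-seeking with local road perception: aim at the
    waypoint, but nudge back toward the road center whenever a
    LIDAR-estimated edge gets too close."""
    desired = heading_to_waypoint(pos, waypoint) - heading  # turn angle
    if right_edge_m < margin_m:      # drifting off the right edge
        return max(desired, 0.2)     # force a leftward correction
    if left_edge_m < margin_m:       # drifting off the left edge
        return min(desired, -0.2)    # force a rightward correction
    return desired

# Vehicle at the origin heading east; next waypoint 50 m ahead and 5 m
# left; LIDAR says the right road edge is only 0.6 m away, so the
# edge-margin override dominates the waypoint bearing.
print(steer((0.0, 0.0), 0.0, (50.0, 5.0), left_edge_m=2.5, right_edge_m=0.6))
```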
Keeping abreast of the latest driving challenges that industry participants have identified, and of their latest technological approaches to overcoming them, is difficult. Although the industry has certainly not resolved all of the problems facing autonomous vehicles, neither has it exhausted all technological solutions; in fact, it continues to explore new ones, as the following examples illustrate. To date, four kinds of sensors (video cameras, radar, ultrasonic sensors, and LIDAR) have been used to enable autonomous vehicles to perceive the objects around them so that they are sufficiently safe. But deficiencies still exist in the sensor suite, such as distance limitations and reduced perception in heavy rain (Quain 2019). LIDAR companies are working on longer-wavelength models that could provide longer-range, highway-speed systems that see through rain and snow. Radar companies are also working on improvements, such as 4D imaging radar that can create detailed images at distances of more than 900 feet. A possible solution that goes beyond those improvements may be far-infrared (thermal) cameras, which detect wavelengths beyond the visible spectrum that indicate heat. Companies have been developing infrared cameras for various military applications and rescue operations, and some have recently put infrared sensors on