Position, Navigation, and Timing Technologies in the 21st Century. Group of Authors
The LearnLoc framework [148] fuses Wi‐Fi fingerprinting with dead reckoning to create a low‐cost, infrastructure‐less indoor navigation solution. The framework adapted and enhanced three machine learning techniques that take inputs from inertial sensors and Wi‐Fi fingerprinting to predict indoor location on a map in the presence of noise (e.g. due to incorrect sensor readings). The three supervised learning algorithms used to assist with indoor localization were based on KNN, linear regression (LR), and nonlinear regression with neural networks (NL‐NN). Regression‐based variants of these algorithms were used instead of the more traditional classification‐based variants, because a classification technique requires dividing the entire indoor map area into a fine‐grained grid for accurate localization, which creates a prohibitively large input space that is impractical to process on resource‐constrained mobile devices. For instance, an effort to run SurroundSense [149], which proposes an SVM‐based classification technique for indoor localization, in real time on a smartphone was unsuccessful because of the approach's large memory footprint and slow performance (close to a minute per prediction). In contrast, regression allows fast predictions with much lower resource demands, which is exactly what real‐time indoor localization on mobile devices requires. Figure 37.8(a) shows a detailed look at the paths predicted by the KNN‐based LearnLoc variant for different Wi‐Fi scan intervals. Not surprisingly, the lowest Wi‐Fi scan interval (1 s) results in the highest accuracy, but also incurs a very high energy consumption overhead because scanning is performed very frequently (as can be seen from the high density of green dots that represent Wi‐Fi scan instances in Figure 37.8(a) for the 1 s interval case).
As the Wi‐Fi scan interval increases, the traced paths deviate notably from the actual path, and the estimation errors grow. A scan interval of 4 s was chosen for all three LearnLoc variants to balance energy consumption on a smartphone against localization accuracy. Figure 37.8(b) summarizes the paths traced by the three LearnLoc variants and the Footpath [150] inertial navigation (Inertial_Nav) technique. The path traced by the Inertial_Nav technique greatly deviates from the actual path due to error accumulation over time; the sequence alignment algorithm in Inertial_Nav aims to overcome this error with periodic recalibration, but is not always successful in doing so. For the LearnLoc variants, the green points in the figure indicate instances where a Wi‐Fi scan was performed. The KNN variant performs best, with an average error of 2.23 m, and the accuracy can be improved further by choosing scan intervals smaller than 4 s. LearnLoc is one of the very few techniques to explore trade‐offs between energy consumption and accuracy during indoor localization, and to consider realistic resource constraints when devising algorithms meant for execution on resource‐constrained mobile devices. A more recent work, CNNLoc [151], improves upon LearnLoc by deploying a more sophisticated convolutional neural network (CNN) model on smartphones.
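To make the regression-based approach concrete, the sketch below implements a minimal KNN regressor over Wi‐Fi RSSI fingerprints: the estimated position is the mean of the positions of the k stored fingerprints closest to the observation in signal space. The fingerprint database, RSSI values, and coordinates here are invented for illustration and are not taken from the LearnLoc paper:

```python
import math

# Hypothetical fingerprint database: each entry maps a vector of RSSI
# readings (dBm, from three access points) to a known (x, y) position
# in meters. All values are illustrative.
fingerprints = [
    ([-45, -70, -80], (0.0, 0.0)),
    ([-50, -60, -75], (2.0, 0.0)),
    ([-60, -50, -70], (4.0, 1.0)),
    ([-70, -45, -60], (6.0, 2.0)),
    ([-80, -55, -50], (8.0, 2.0)),
]

def knn_regress(rssi, db, k=3):
    """Estimate (x, y) as the mean position of the k nearest fingerprints,
    using Euclidean distance in RSSI space (regression, not classification)."""
    ranked = sorted(db, key=lambda entry: math.dist(rssi, entry[0]))
    nearest = ranked[:k]
    x = sum(p[0] for _, p in nearest) / k
    y = sum(p[1] for _, p in nearest) / k
    return (x, y)

pos = knn_regress([-55, -55, -72], fingerprints)
```

Because the output is a continuous coordinate rather than a grid-cell label, the memory and compute cost scales with the number of stored fingerprints rather than with the resolution of a classification grid, which is what makes this style of estimator practical on a smartphone.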
An indoor localization system proposed in [152] depends neither on a centrally established database of signals nor on a pre‐supplied building map. It combines inertial sensor data (from the accelerometer and compass) with RSSI measurements from Wi‐Fi and GSM cellular radios, divides the building area into a regular grid, and applies a SLAM technique to correct any observed drift. Apple's WiFiSLAM system [153] utilizes a similar combination of signals and sensors for indoor localization. SignalSLAM [154] extends these efforts by combining readings from many more sources: time‐stamped Wi‐Fi and Bluetooth RSS, 4G LTE Reference Signal Received Power (RSRP), magnetic field magnitude, near‐field communication (NFC) readings at specific landmarks, and dead reckoning based on inertial data. The location of a mobile user is resolved with a modified version of GraphSLAM optimization [155], which estimates the user's poses from a collection of absolute‐location and pairwise constraints that incorporate multi‐modal signal similarity.
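The pose-graph idea behind GraphSLAM can be sketched in one dimension: a walker's poses are estimated jointly so as to best satisfy noisy odometry (dead-reckoning) constraints together with an absolute anchor and a loop-closure-style constraint. This is a deliberately minimal sketch with invented numbers; real GraphSLAM implementations optimize 2‑D/3‑D poses over many constraint types (including the multi-modal signal-similarity constraints described above) with sparse least-squares solvers rather than plain gradient descent:

```python
# Toy 1-D pose-graph optimization: poses x0..x3 are adjusted to minimize the
# sum of squared residuals of (a) noisy odometry steps, (b) an absolute
# anchor x0 = 0, and (c) a "loop closure"-style constraint on x3 - x0.
odometry = [1.2, 1.1, 0.9]   # measured step lengths (drift-corrupted)
closure = 3.0                # independent measurement of x3 - x0

def cost_gradient(x):
    g = [0.0] * len(x)
    g[0] += 2 * x[0]                         # anchor residual: x0 - 0
    for i, u in enumerate(odometry):         # odometry residuals
        r = x[i + 1] - x[i] - u
        g[i + 1] += 2 * r
        g[i] -= 2 * r
    r = x[-1] - x[0] - closure               # loop-closure residual
    g[-1] += 2 * r
    g[0] -= 2 * r
    return g

# The cost is a convex quadratic, so simple gradient descent converges.
x = [0.0, 0.0, 0.0, 0.0]
for _ in range(5000):
    g = cost_gradient(x)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
```

After optimization the final pose lands between the dead-reckoned total of 3.2 m and the loop-closure measurement of 3.0 m, with the residual error spread across all of the poses instead of accumulating at the end of the path, which is exactly the drift-correction effect the surveyed systems rely on.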
Figure 37.8 (a) Paths traced for various Wi‐Fi scan intervals for LearnLoc using K‐nearest neighbor (KNN) along the Clark L2 South path; green dots represent an instance of a Wi‐Fi scan along the path; (b) paths traced by indoor localization techniques along the Clark L2 North building benchmark path [148].
Source: Reproduced with permission of IEEE.
37.5.6.3 Techniques Fusing RF Signals with Other Signals
Many techniques propose to combine RF signal data with readings from other sources beyond inertial sensors. SurroundSense [149] utilizes fingerprints of a location based on RF (GSM, Wi‐Fi) signals as well as ambient sound, light, color, and the layout‐induced user movement (detected by an accelerometer). Cameras, microphones, and accelerometers on a Wi‐Fi‐enabled Nokia N95 phone were used to sense the fingerprint information. The sensed values are recorded, pre‐processed, and transmitted to a remote SurroundSense server. The goal of pre‐processing on the phone is to reduce the data volume that needs to be transmitted. Once the sensor values arrive at the server, they are separated by the type of sensor data (sound, color, light, Wi‐Fi, accelerometer) and distributed to different fingerprinting modules. These modules perform a set of appropriate operations, including color clustering, light extraction, and feature selection. The individual fingerprints from each module are logically inserted into a common data structure, called the ambience fingerprint, which is forwarded to a fingerprint matching module for localization. Support vector machines (SVMs), color clustering, and other simple methods were used for location classification.
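A heavily simplified sketch of the ambience-fingerprint idea follows: each modality is reduced to a short feature vector, the vectors are combined into one fingerprint, and the stored fingerprint closest to the observation wins. The feature encodings, scaling factors, place names, and the nearest-neighbor matcher are all invented stand-ins; SurroundSense's actual pipeline runs per-modality modules such as SVMs and color clustering on a remote server:

```python
import math

def ambience_fingerprint(wifi_rssi, sound_level, light_level):
    """Combine one feature per modality into a single fingerprint vector.
    Crude per-modality scaling keeps any one sensor from dominating the
    distance computation. All encodings here are illustrative."""
    return [wifi_rssi / 100.0, sound_level / 100.0, light_level]

# Hypothetical fingerprint database of previously surveyed places.
database = {
    "coffee_shop": ambience_fingerprint(-55, 70, 0.4),
    "bookstore":   ambience_fingerprint(-70, 40, 0.8),
    "gym":         ambience_fingerprint(-60, 85, 0.9),
}

def classify(observed):
    """Return the stored place whose fingerprint is nearest to the observation."""
    return min(database, key=lambda name: math.dist(observed, database[name]))

place = classify(ambience_fingerprint(-58, 68, 0.45))
```

The key property this illustrates is that ambient modalities which are individually ambiguous (many rooms share a light level) become discriminative once they are fused into a single multi-modal fingerprint.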
The Acoustic Location Processing System (ALPS) [156] combines BLE transmitters with ultrasound signals to improve localization accuracy and to help users configure indoor localization systems with minimal effort. ALPS consists of time‐synchronized beacons that transmit ultrasonic chirps, inaudible to humans but detectable by most modern smartphones. The phone uses the TDoA of the chirps to measure distances. ALPS uses BLE on each node to send the relevant timing information, allowing the entire ultrasonic bandwidth to be used exclusively for ranging. The platform requires a user to place three or more beacons in an environment and then walk through a calibration sequence with a mobile device, touching key points in the environment (e.g. the floor and the corners of the room). This process automatically computes the room geometry as well as the precise