
have distinctive iris patterns. Even the two irises of a single individual are unique from each other. Iris recognition is an automated method of identifying the unique, intricate patterns of an individual’s iris using mathematical pattern-recognition techniques.

       WORKING OF IRIS READERS

      1. Scan an individual’s eyes with subtle infrared illumination to obtain a detailed pattern of the iris.

      2. Isolate the iris pattern from the rest of the image, analyze it, and map it onto a system of coordinates.

      3. The coordinates are then encoded as digital data, producing an iris signature (template).

      Even if disclosed, such encrypted iris signatures cannot be restored or reproduced. For verification, the user simply looks at the infrared camera. Iris recognition yields fast matching and is extremely resistant to false matches.
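      To make the matching step concrete, the following is a minimal sketch, assuming iris templates are represented as fixed-length binary codes compared with a normalized Hamming distance; the code length, the occlusion mask handling, and the decision threshold are illustrative assumptions rather than the method of any particular iris reader.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes,
    counting only bits that are valid (unoccluded) in both templates."""
    valid = mask.astype(bool)
    disagreements = np.logical_xor(code_a[valid], code_b[valid])
    return disagreements.mean()

def verify(probe_code, enrolled_code, mask, threshold=0.32):
    """Accept the claimed identity if the normalized Hamming distance
    falls below the decision threshold (illustrative value)."""
    return hamming_distance(probe_code, enrolled_code, mask) < threshold

# Example with random 2048-bit codes (real codes come from the iris scan).
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048)
probe = enrolled.copy()
probe[:100] ^= 1                      # simulate small acquisition noise
mask = np.ones(2048, dtype=bool)      # all bits valid in this toy example
print(verify(probe, enrolled, mask))  # True: distance 100/2048 is about 0.05
```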

      1.2.3 Facial Recognition

      Facial recognition is a non-intrusive technique that captures physical traits without contact with, or cooperation from, the subject, which makes it well suited to recognition systems. Every face can be represented as a linear combination of the singular vectors of a set of faces; thus, Principal Component Analysis (PCA) can be used for its implementation. The Eigen Face Approach in PCA is useful because it limits the dimensionality of the data set, thereby improving computational efficiency [6].
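      As a minimal sketch of the Eigen Face idea, the snippet below applies scikit-learn’s PCA to flattened face images and recognizes a probe face by comparing its component weights; the image sizes, the number of components, and the random placeholder data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assume face_images is an (n_faces, height, width) array of grayscale faces.
n_faces, height, width = 200, 64, 64
face_images = np.random.rand(n_faces, height, width)   # placeholder data

# Flatten each face into a vector so PCA can treat pixels as features.
X = face_images.reshape(n_faces, height * width)

# Keep a small number of principal components ("eigenfaces").
pca = PCA(n_components=50, whiten=True)
weights = pca.fit_transform(X)          # each face as 50 eigenface weights
eigenfaces = pca.components_.reshape(50, height, width)

# A new face is recognized by projecting it and comparing weight vectors.
probe = X[0:1]
probe_weights = pca.transform(probe)
nearest = np.argmin(np.linalg.norm(weights - probe_weights, axis=1))
print(nearest)   # index of the closest enrolled face (0 here)
```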

      Facial recognition technology identifies up to 80 factors on a human face to pick out unique features. These factors are nodal points that measure variables of a person’s face, such as the length or width of the nose, the distance between the eyes, the depth of the eye sockets, and the shape and size of the mouth. Measuring such detailed factors is complicated by issues such as aging faces. To address this, systems focus on the features that remain relatively unchanged no matter how old we get. The framework works by capturing information for the nodal points on a digitized picture of a person’s face and storing the resulting information as a face print [7]. A face print is like a fingerprint, but for the face. It accurately identifies minute differences, even between identical twins. It creates 3D models of the face and analyzes data from different angles, overcoming many complexities associated with facial recognition technology. The face print is then used as the basis for comparison with information captured from faces in an image or video.
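      The nodal-point measurements described above can be sketched as distances between detected facial landmarks, normalized so that the resulting face print is independent of image scale; the landmark names, coordinates, and matching tolerance below are hypothetical and only illustrate the idea.

```python
import numpy as np

def face_print(landmarks: dict) -> np.ndarray:
    """Build a simple 'face print' from distances between nodal points.
    `landmarks` maps point names to (x, y) coordinates from a face detector."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    eye_distance = np.linalg.norm(p["left_eye"] - p["right_eye"])
    nose_length  = np.linalg.norm(p["nose_bridge"] - p["nose_tip"])
    mouth_width  = np.linalg.norm(p["mouth_left"] - p["mouth_right"])
    # Normalize by eye distance so the print is independent of image scale.
    return np.array([nose_length, mouth_width]) / eye_distance

def same_person(print_a, print_b, tolerance=0.05):
    """Treat two face prints as matching if they differ by less than a
    small tolerance (value chosen only for illustration)."""
    return np.linalg.norm(print_a - print_b) < tolerance

landmarks = {
    "left_eye": (30, 40), "right_eye": (70, 40),
    "nose_bridge": (50, 40), "nose_tip": (50, 60),
    "mouth_left": (38, 75), "mouth_right": (62, 75),
}
print(face_print(landmarks))   # e.g. [0.5 0.6]
```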

      1.2.4 Voice Recognition

      Voice Recognition is an automated technique for identifying or verifying the identity of an individual on the basis of voice. Voice biometrics creates a voiceprint for every individual, which is a numerical representation of the speaker’s vocal tract [8]. This ensures correct identification regardless of the language spoken, the content of the speech, or the wellbeing of the individual.

       WORKING OF VOICE RECOGNITION:

      1. Create a voiceprint or “template” of a person’s speech.

      2. Only when a user opts in or enrolls is a template created, encrypted, and stored for future voice verification.

      3. Ordinarily, the enrollment process is passive, meaning a template can be created in the background during a user’s normal interaction with an application or agent (see the sketch after this list).
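      The enrollment and verification steps above can be sketched as follows, assuming MFCC features from the librosa library are averaged into a crude voiceprint and compared with cosine similarity; a production system would use far richer speaker embeddings, encrypt the stored templates, and tune the decision threshold, all of which are glossed over here.

```python
import numpy as np
import librosa   # assumed available for MFCC extraction

def voiceprint(audio_path: str, sr: int = 16000) -> np.ndarray:
    """Create a crude voiceprint: the average MFCC vector over an utterance."""
    signal, sr = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def enroll(audio_path: str, store: dict, user_id: str) -> None:
    """Step 2: create the template when the user opts in and store it
    (a real system would encrypt it before storage)."""
    store[user_id] = voiceprint(audio_path)

def verify(audio_path: str, store: dict, user_id: str, threshold=0.85) -> bool:
    """Compare a new utterance against the stored template using cosine
    similarity; the threshold is an illustrative value."""
    probe, template = voiceprint(audio_path), store[user_id]
    similarity = probe @ template / (np.linalg.norm(probe) * np.linalg.norm(template))
    return similarity > threshold

templates = {}
# enroll("alice_enrollment.wav", templates, "alice")     # hypothetical files
# print(verify("alice_login.wav", templates, "alice"))
```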

      The use of voice biometrics for identification is growing in popularity because of improvements in accuracy, driven to a great extent by the evolution of AI, and heightened customer expectations for easy and fast access to information [9].

      Large amounts of data can diminish the efficiency of data mining and may not provide significant input to the model. Non-essential attributes add noise to the data and expand the size of the model. Moreover, model building and scoring consume additional time and system resources, which affects model precision.

      Likewise, huge data sets may contain groups of attributes that are correlated and may measure the same hidden factor, which can skew the logic of the algorithm. The computational cost of algorithmic processing increases with the dimensionality of the processing space, posing challenges for data mining algorithms. The impacts of noise, correlation, and high dimensionality can be minimized by dimension reduction using feature selection and feature extraction [10].

      1.3.1 Feature Selection

      Feature Selection selects the most relevant attributes by identifying the most pertinent characteristics and eliminating redundant information. A small feature vector reduces computational complexity, which is essential for online person recognition. Selecting effective features also helps increase precision [11]. Traditionally, high-dimensional feature vectors can be reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).

      The importance (relevance) of a feature set S for class c is defined as the mean value of the mutual information between each individual feature fi and the class c, as follows:

\[ D(S, c) = \frac{1}{|S|} \sum_{f_i \in S} I(f_i; c) \]

      The redundancy of all features in set S is the mean value of the mutual information between each pair of features fi and fj:

\[ R(S) = \frac{1}{|S|^{2}} \sum_{f_i, f_j \in S} I(f_i; f_j) \]
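      A minimal sketch of how these two quantities could be estimated with scikit-learn’s mutual information utilities is shown below; the example data set, the candidate feature subset, and the use of a regression-style estimator for feature-to-feature mutual information are assumptions made only for illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = load_breast_cancer(return_X_y=True)
subset = [0, 1, 2, 3]                      # candidate feature set S

# Relevance D(S, c): mean mutual information between each feature and the class.
relevance = mutual_info_classif(X[:, subset], y, random_state=0).mean()

# Redundancy R(S): mean mutual information over every pair of features in S.
pairwise = [
    mutual_info_regression(X[:, [i]], X[:, j], random_state=0)[0]
    for i in subset for j in subset
]
redundancy = np.mean(pairwise)

# mRMR-style criterion: prefer subsets with high relevance and low redundancy.
print(relevance - redundancy)
```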

      1.3.2 Feature Extraction

      Feature extraction helps in data visualization by reducing a complex data set to two or three dimensions. It can also improve the speed and effectiveness of supervised learning. It has applications in data compression, data decomposition and projection, latent semantic analysis, and pattern recognition.

      The data is projected onto the largest Eigen Vectors in order to reduce the dimensionality.

      Let V = the matrix whose columns are the largest Eigen Vectors, and

      D = the original data, with one column per observation.

      Then, the projected data D′ is derived as D′ = VᵀD.

      If only N Eigen Vectors are kept and e1, ..., eN represent the corresponding Eigen Values, the proportion of variance retained after projecting the original d-dimensional data can be determined as:

\[ s = \frac{\sum_{i=1}^{N} e_i}{\sum_{i=1}^{d} e_i} \]
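      The projection D′ = VᵀD and the retained-variance ratio can be sketched in NumPy as follows; the synthetic data and the choice of N are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_obs, N = 5, 200, 2                 # original dims, observations, kept dims

# D: original data with one column per observation (d x n_obs), zero-centered.
D = rng.normal(size=(d, n_obs)) * np.array([5, 3, 1, 0.5, 0.1])[:, None]
D = D - D.mean(axis=1, keepdims=True)

# Eigen-decompose the covariance matrix and sort eigenvalues in descending order.
cov = np.cov(D)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# V: columns are the N largest eigenvectors; project the data: D' = V^T D.
V = eigvecs[:, :N]
D_proj = V.T @ D                        # shape (N, n_obs)

# Fraction of variance retained after keeping only N eigenvectors.
retained = eigvals[:N].sum() / eigvals.sum()
print(D_proj.shape, retained)
```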

      1.3.3 Face Marking

      Facial landmarking
