Cyber Security and Network Security. Group of Authors
Figure 1.12 Group access control modifier by manager.
1.5 Performance Analysis
1.5.1 Load Balancer
Load balancers provide low latency to users, especially during periods of high usage, by distributing load across multiple compute instances. Combining them with autoscaling groups also enables the architecture to scale up and down as requirements change. Without load balancers, higher traffic would overload a fixed number of compute instances, leading to higher packet loss and higher latency; it would also distribute load unevenly across the web servers, eventually crashing them. With the load-balanced architecture, a user may face added latency of roughly 1–5 ms because the load balancer operates at layer 7; and since instances are provisioned on demand automatically, the design also saves costs to a great extent.
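The distribution behavior described above can be sketched in a few lines of Python. This is a minimal round-robin illustration, not the layer-7 balancer from the architecture: the instance names are placeholders, and a real balancer would also weigh health checks, connection counts, and session affinity.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Spreads incoming requests evenly across a pool of compute instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        self._rotation = cycle(self.instances)  # endless round-robin iterator

    def route(self, request):
        # Pick the next instance in rotation for this request.
        instance = next(self._rotation)
        return instance, request

# Hypothetical instance names; in practice these come from the autoscaling group.
balancer = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
targets = [balancer.route(f"req-{i}")[0] for i in range(6)]
print(targets)  # each instance receives an equal share of the six requests
```

Under round-robin, no single server accumulates a disproportionate share of traffic, which is exactly the unbalanced-load failure mode the text warns about.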
1.5.2 Lambda (For Compression of Data)
As the screenshots show, Lambda compresses a 3-MB .jpg file to 9.8 KB in a couple of seconds. In our proposed model, if each user accounts for 51.2 MB of data transfer per day, then about 1,000 users upload roughly 50 GB of data per day, which amounts to 1,500 GB of object-based data by the end of a month. Figure 1.13 shows the data before compression (raw data), and Figure 1.14 shows it after compression (compressed data). Given the huge volume of data to be handled, compressing it while keeping it intact is highly efficient, reducing the cost of long-term storage and of egress traffic. With the method implemented in our proposed model, the 50 GB of data per day can be compressed to as little as 125 MB per day, a roughly 400-fold reduction in size, saving costs to a great extent and making our proposed model far more economical.
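The savings above follow from simple arithmetic, which can be checked directly; the second half of the sketch uses zlib on artificially redundant bytes purely to illustrate why redundant data compresses so well (real image compression uses lossy codecs, not zlib).

```python
import zlib

# Back-of-envelope figures from the text.
users_per_day = 1_000
mb_per_user = 51.2
daily_raw_mb = users_per_day * mb_per_user         # 51,200 MB ~ 50 GB/day
monthly_raw_gb = daily_raw_mb * 30 / 1024          # object data after a month
daily_compressed_mb = 125                          # observed post-compression size
ratio = daily_raw_mb / daily_compressed_mb         # fold reduction
print(round(monthly_raw_gb), round(ratio))         # 1500 410

# The same idea on real bytes: highly redundant data shrinks dramatically.
raw = b"sensor-reading:42;" * 10_000
packed = zlib.compress(raw, level=9)
print(len(raw), len(packed))  # compressed output is a tiny fraction of the input
```

The 400-fold figure is in the same ballpark as the screenshot example (3 MB down to 9.8 KB is roughly 300:1), so the two numbers are mutually consistent.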
Figure 1.13 Before compression of data (raw data).
Figure 1.14 After compression of data (compressed data).
1.5.3 Availability Zone
A user faces high latency when accessing the application from a location far from an availability zone. With multiple availability zones, however, users can be served from the zone closest to their location, since the compute instances can be cloned across zones. In this way, latency drops to at most 1–2 ms.
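The zone-selection decision reduces to picking the zone with the lowest measured round-trip time. A minimal sketch, with hypothetical latency measurements standing in for real probes:

```python
# Hypothetical measured round-trip times (ms) from one client to each zone.
az_latency = {"zone-a": 42.0, "zone-b": 1.8, "zone-c": 15.5}

def closest_zone(latencies):
    # Route the client to whichever availability zone answers fastest.
    return min(latencies, key=latencies.get)

print(closest_zone(az_latency))  # zone-b
```

In practice the measurement is done by DNS-level latency-based routing rather than by the client itself, but the selection rule is the same.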
1.5.4 Data in Transit (Encryption)
We encrypted and decrypted data blocks of varying sizes, from 0.5 to 20 MB, in experiments performed on the ECB and CBC modes. In ECB mode, the DES algorithm takes 4 s for a 20-MB data block. In CBC mode, the time taken by DES on a 20-MB block is slightly higher than in ECB: because of its block-chaining nature, more processing time is required. The average difference between CBC and ECB is 0.059896 s.
In both ECB and CBC modes, the AES algorithm takes approximately 16 s to encrypt and decrypt the same 20-MB block.
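Timings like these come from a simple best-of-N wall-clock harness. Since no cipher library is guaranteed to be installed, the sketch below times zlib.crc32 as a stand-in transform purely to show the methodology; to reproduce the DES/AES experiment, the stand-in would be replaced with the encrypt call of a crypto library such as PyCryptodome.

```python
import time
import zlib

def time_transform(transform, payload, repeats=3):
    """Return the best-of-N wall-clock time (seconds) for one pass over payload."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        transform(payload)
        best = min(best, time.perf_counter() - start)
    return best

# Block sizes mirroring the experiment's 0.5–20 MB range.
for size_mb in (0.5, 20):
    block = b"\x00" * int(size_mb * 1024 * 1024)
    t = time_transform(zlib.crc32, block)
    print(f"{size_mb:>4} MB -> {t * 1000:.3f} ms")
```

Taking the best of several repeats filters out scheduler noise, which matters when the CBC-vs-ECB gap being measured is only ~0.06 s.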
1.5.5 Data at Rest (Encryption)
The AES encryption algorithm provides both security and speed. It is the most efficient symmetric algorithm because AES-256 encryption admits 2^256 possible keys. For perspective, 2^32 is about 4.3 billion, and the count grows exponentially from there. Assume an attacker can test 2^50 keys per second (approximately one quadrillion keys per second, a very generous assumption). One year is approximately 31,557,600 seconds, so one such supercomputer checks about 2^75 keys per year, and even one billion of them running for the age of the universe (about 2^34 years) would cover less than 0.01% of the possible keys. Thus, it is practically impossible to brute-force an AES-256 key. In addition, we also use SHA-512 for extra protection of the data. In the real world, a CPU such as the Intel Xeon L5630 has four cores, each of which can process 610 MB/s of AES-256 data, i.e., around 2,440 MB/s in total, which is enough to encrypt and decrypt traffic on a 10-gigabit link. SHA-512 on a 1.83-GHz Intel Core 2 under Windows Vista in 32-bit mode runs at 99 MiB/s, or 17.7 cycles per byte. Thus, the cloud does not get overwhelmed when encrypting huge amounts of data.
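The brute-force argument can be verified numerically. Python's arbitrary-precision integers make the exponent arithmetic exact:

```python
# Exponent arithmetic behind the "AES-256 cannot be brute-forced" claim.
total_keys = 2 ** 256
keys_per_second = 2 ** 50        # ~1 quadrillion guesses/s per supercomputer
seconds_per_year = 31_557_600    # ~2**25, so one machine tests ~2**75 keys/year
machines = 10 ** 9               # one billion such supercomputers

keys_per_year = keys_per_second * seconds_per_year * machines
years_needed = total_keys / keys_per_year
print(f"{years_needed:.3e} years")  # on the order of 10**45 years
```

Even this fleet of a billion quadrillion-guess-per-second machines needs ~10^45 years, vastly longer than the ~2^34-year age of the universe, which is what makes the "less than 0.01% of the key space" statement such an understatement.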
1.6 Future Research Direction
In our proposed model, data is encrypted, so no one can tamper with it; however, we still need to think about the metadata of the encrypted data sent from the client/user to the server (cloud). Here, we can introduce an error-detection scheme such as a checksum. A checksum is a numerical value computed over the bytes of the encrypted data: the sending client calculates the value and transmits it along with the data, and the cloud server runs the same algorithm to check whether the value matches. If such a scheme is introduced in the future, we need not depend on a VPC. Apart from that, machine learning algorithms can also be used to verify the authenticity of the user.
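The client-computes, server-verifies flow described above can be sketched with a cryptographic digest standing in for the checksum (SHA-256 here, chosen for illustration; the text leaves the exact algorithm open):

```python
import hashlib

def attach_digest(ciphertext: bytes) -> bytes:
    """Client side: append a SHA-256 digest so the server can detect corruption."""
    return ciphertext + hashlib.sha256(ciphertext).digest()

def verify_digest(message: bytes) -> bool:
    """Server side: recompute the digest over the payload and compare."""
    payload, digest = message[:-32], message[-32:]
    return hashlib.sha256(payload).digest() == digest

# Placeholder bytes standing in for real ciphertext.
msg = attach_digest(b"\x8f\x02...encrypted-bytes...")
print(verify_digest(msg))                      # True: message arrived intact
tampered = bytes([msg[0] ^ 0xFF]) + msg[1:]    # flip bits in the first byte
print(verify_digest(tampered))                 # False: mismatch is detected
```

A plain checksum like CRC-32 detects accidental corruption but not deliberate tampering; a keyed construction such as HMAC would be the natural next step if the goal is integrity against an active attacker.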
Blowfish is a symmetric encryption algorithm in which data blocks, or cipher blocks, are divided into fixed lengths during encryption and decryption. We can consider Blowfish for future research because it produced remarkable results in the ECB and CBC modes, showing the lowest latency among AES, DES, and itself: just around 3 s in both modes, whereas AES showed a whopping latency of around 16 s. Notably, Blowfish outperformed the other encryption algorithms despite its key being up to 448 bits long [15].
We can also reduce Lambda execution time by placing dependency files in a separate library so that deployment packages unpack faster. Splitting the deployment package so that the function's code sits in its own file, rather than in one large file with many class files, reduces Lambda execution time, since otherwise Lambda has to load the entire large file on every invocation.
1.7 Conclusion
In our proposed model, we have suggested a system in which data entry, data modification, and data management are all done through a client-side application, through which the data is encrypted and sent over to the server-side application. From data encryption in transit to data encryption at rest, everything is managed by the application itself. In the suggested system design, we have implemented our concept as an enterprise application used for communication between multiple levels of users. We have implemented a role-based/identity-based access control concept, under which different authorizations are allotted and can be customized by higher-level roles. Compared with existing systems, our system design is robust, scalable, and durable. The load is balanced between multiple availability zones, read-replicas are deployed, and autoscaling groups are