The sequential development of the computing environment can be arranged chronologically as shown in Figure 1.1. The IBM System/360 entered the global market in 1964. This model and the other members of the same family attracted attention from the business community because peripherals were interchangeable and a common instruction set was implemented across all systems of the family [1]. The scaling down of mainframe systems and further improvements over time led to smaller, free-standing machines, the so-called minicomputers; for example, DEC’s PDP-8 minicomputer was introduced in 1965 and Xerox’s Alto in 1973 [2].
The microcomputer era began in the early 1970s with the release of the first microprocessor (MP), the Intel 4004, in 1971, followed by the Intel 8008 in 1972. The first personal home computer, the Micral, was created by André Truong Trong Thi [2] and was based on the Intel 8008. The Mark-8 and the TV Typewriter were among the first build-it-yourself projects for microcomputer hobbyists. The MITS Altair 8800 microcomputer kit, advertised in several scientific and hobby magazines in 1975, is credited with having popularized the microcomputer and embodied the underlying idea of the home computer. The first programming language for the machine was Microsoft’s founding product, Altair BASIC. Subsequently, Apple, Commodore, Atari and others entered the personal home computer market. IBM introduced its first personal computer, commonly known as the IBM PC, and Microsoft engineered its operating system (OS), which became a de facto standard adopted by numerous PC makers. Successive waves of improvement followed, with each advance stirring up the market, and the creation of the graphical user interface (GUI) prompted the next stage of development.
While ways were being sought to significantly improve interaction among numerous personal computers, another milestone appeared in the business sector: the Internet. The Advanced Research Projects Agency (ARPA)1 conceived the Internet as an experimental venture, in which each connecting point is known as a node. With the support of the U.S. Department of Defense, a communication framework was built such that if any node failed, the rest of the network remained connected. In the long run this effort produced the ARPANET, to which nearly 200 institutions were connected. TCP/IP was introduced in 1983, and the network was switched over to it, joining entire subnets to the ARPANET; the Internet thereby became known as a network of networks. With the development of the World Wide Web (WWW) by the British computer scientist Sir Timothy John Berners-Lee in 1989, the web achieved its definitive breakthrough. Berners-Lee proposed an information management system for CERN (European Organization for Nuclear Research)2 based on hyperlinks. Eventually end users needed web browsers, and the WWW became ubiquitous when the Mosaic browser was introduced to the market.
Figure 1.1: Evolution of cloud computing.
Today, the entire information technology sector is putting effort into improving the quality of web applications by increasing bandwidth and by adopting innovative approaches to building programs. User-interactive websites can easily be developed with Java, PHP or AJAX. These advances have led to a wide range of multimedia websites and interactive applications for the business sector.
Meanwhile, in the 1990s, the idea of grid computing was introduced in academia. Carl Kesselman and Ian Foster published their book The Grid: Blueprint for a New Computing Infrastructure, drawing on the analogy of the electric grid. The idea of grid computing can be related to a day-to-day example: when a device is plugged into a power outlet, we are unaware of how electric power is generated and how it reaches the outlet; we simply use it. This is the essence of virtualization. We do not know the underlying architecture or the mechanism behind the scenes, nor how the resources are made available, yet we use them all the same. Electric power is, in effect, virtualized: virtualization conceals a gigantic distribution grid and the power generation stations behind it. The same idea can be adapted to computing, where distinct distributed components, such as storage, data management and software assets, are integrated [3]. Innovations such as cluster, grid and now cloud computing have all focused on enabling access to an enormous amount of computing resources in a fully virtualized fashion, pooling those resources into a single unified system. These resources are provided to users, organizations or customers on a “pay-per-use” or “pay-as-you-go” basis (payment based on utilization).
In 1997, the term “cloud computing” was introduced in academia by Ramnath Chellappa, who defined it as a “computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone.” In 1999, Salesforce started delivering applications to its clients through simple websites. The applications themselves were provisioned and distributed over the web; in this manner, utility-based computing came into real-world use. Amazon set its own milestone by creating Amazon Web Services (AWS) in 2002, delivering storage, computation and other services. Amazon allows clients to integrate its vast online content with their own websites, and its web services and computing facilities have expanded steadily on demand. In 2006, Amazon launched its Elastic Compute Cloud (Amazon EC2)3 as a commercial web service that allows small enterprises and individuals to lease infrastructure (compute resources, storage, memory) on which they can deploy and run their own applications. With the introduction of Amazon’s storage service (Amazon S3), a “pay-per-use” model was also implemented. Google App Engine,4 Force.com, Eucalyptus,5 Windows Azure,6 Aneka7 and many more of their kind are now capturing the cloud business.
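To make the “pay-per-use” idea concrete, the short Python sketch below leases a virtual machine on EC2 and stores an object in S3 through the boto3 SDK. It is illustrative only and not part of the book’s own examples: the AMI ID, instance type, bucket and file names are placeholders, and running it requires valid AWS credentials and an existing bucket.

```python
# Minimal pay-per-use sketch with the AWS SDK for Python (boto3).
# All identifiers below (AMI ID, bucket, file names) are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Lease compute capacity on demand; billing applies only while the instance runs.
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])

# Store an object in S3; billing is per GB stored and per request.
s3.upload_file("report.txt", "example-bucket", "reports/report.txt")
```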
The next sections discuss cluster, grid and mobile computing.
1.2 Cluster Computing
A computer cluster can be characterized as a set of coupled computers cooperating in such a way that the whole collection can be viewed as a single system image (SSI). Computer clusters arose from the convergence of several computing trends, including the availability of high-speed networks, low-cost microprocessors, and software for high-performance computing.
According to Sadashiv and Kumar [4], a cluster can be defined as a collection of parallel or distributed computers interconnected by high-speed networks such as SCI, Myrinet, Gigabit Ethernet and InfiniBand. The machines work together to execute data-intensive and compute-intensive tasks that would not be feasible for a single computer to execute alone. Clusters are mostly used for load balancing (distributing tasks over the different interconnected computers), for high availability of the required data, and for raw compute power. High availability is achieved by maintaining redundant nodes, which are used to deliver the required service when system components fail, as sketched below.
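As an illustration only, the following Python sketch mimics how cluster middleware might balance tasks across nodes and skip a failed node; the node names, health map and dispatch logic are hypothetical stand-ins for a real scheduler or job queue.

```python
from itertools import cycle

# Hypothetical worker nodes and their health status (True = alive).
nodes = {"node01": True, "node02": False, "node03": True}  # node02 has failed
tasks = [f"task-{i}" for i in range(6)]

def healthy_nodes():
    """Return the nodes currently able to accept work."""
    return [name for name, alive in nodes.items() if alive]

assignments = {}
round_robin = cycle(healthy_nodes())       # simple load-balancing policy
for task in tasks:
    node = next(round_robin)               # failed nodes are skipped entirely
    assignments.setdefault(node, []).append(task)

for node, assigned in assignments.items():
    print(f"{node} runs {assigned}")
```

In a real cluster this policy would live in the resource manager (for example, a batch scheduler), and node health would be detected through heartbeats rather than a hard-coded map.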
System performance is thereby enhanced, because even if one node fails to complete its task, a backup node is ready to take it over and carry it out without any snags [5]. When numerous computers are connected in a computer cluster, they can share the computational workload as a single virtual computer. From the client’s perspective they are many machines, yet they work as a single virtual machine. The client’s request is received and distributed among all the independent computers