style="font-size:15px;">      essh @ kubernetes-master: ~ / consul $ docker run -d –name = consul_follower_2 -e CONSUL_BIND_INTERFACE = eth0 consul agent -dev -join = 172.17.0.4

      babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb

      essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members

      Node Address Status Type Build Protocol DC Segment

      53cd8748f031 172.17.0.5:8301 left server 1.6.1 2 dc1 <all>

      8ec88680bc63 172.17.0.5:8301 alive server 1.6.1 2 dc1 <all>

      babd31d7c564 172.17.0.6:8301 alive server 1.6.1 2 dc1 <all>

      essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1

      true

      essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1

      [

      {

      "LockIndex": 0,

      "Key": "group1 / key1",

      "Flags": 0,

      "Value": "dmFsdWUx",

      "CreateIndex": 277,

      "ModifyIndex": 277

      }

      ]
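
      The Value field in the response is Base64-encoded. To make sure it contains exactly the value we wrote, it can be decoded with the standard base64 utility (a minimal check, not part of the original listing):

      essh@kubernetes-master:~/consul$ echo dmFsdWUx | base64 -d

      value1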

      essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

      Along with determining the location of containers, it is also necessary to provide authorization; key-value stores are used for this:

      dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

      * --cluster-store – the key-value store from which the daemon reads cluster data;

      * --cluster-advertise – the address of this host that is advertised (saved) to the store.
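
      The same parameters can also be set in the Docker daemon configuration file instead of on the command line. A minimal sketch, assuming the legacy external key-value store support in the Docker daemon and the Consul address from the example above:

      essh@kubernetes-master:~$ cat /etc/docker/daemon.json

      {
      "cluster-store": "consul://192.168.1.6:8500",
      "cluster-advertise": "eth0:2376"
      }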

      docker network create --driver overlay --subnet 192.168.10.0/24 demo-network

      docker network ls
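
      Once the overlay network is created, containers started on different hosts can be attached to it and will find each other by name through the built-in DNS (a minimal sketch; the container name web and the images are chosen only for the example):

      docker run -d --name web --network demo-network nginx

      docker run --rm --network demo-network alpine ping -c 1 web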

      Simple clustering

      In this article, we will not consider how to create a cluster manually, but will use two tools: Docker Swarm and Google Kubernetes – the most popular and most common solutions. Docker Swarm is simpler: it is part of Docker and therefore has the largest audience (subjectively), while Kubernetes provides much more capability, more tool integrations (for example, distributed storage for Volumes), support in popular clouds, and scales more easily to large projects (a higher level of abstraction, a component approach).

      Let's consider what a cluster is and what good it will bring us. A cluster is a distributed structure that abstracts independent servers into one logical entity and automates work on:

      * rescheduling containers (creating new ones) on other servers when a server crashes;

      * even distribution of containers across servers for fault tolerance;

      * creating a container on a server with suitable free resources;

      * restarting (re-creating) a container in case of its failure;

      * a unified management interface from a single point;

      * performing operations taking into account the parameters of servers (for example, the size and type of disk) and the characteristics of containers specified by the administrator (for example, related containers with a single mount point are placed on the same server);

      * unification of different servers, for example, running different OS, cloud and non-cloud.

      We will now move from looking at Docker Swarm to Kubernetes. Both of these systems are orchestration systems, both work with Docker containers (Kubernetes also supports RKT and Containerd), but the interaction between containers is fundamentally different because of an additional Kubernetes abstraction layer – the POD. Both Docker Swarm and Kubernetes manage containers based on IP addresses and distribute them to nodes, inside which everything works through localhost, proxied by a bridge; but unlike Docker Swarm, which exposes physical containers to the user, Kubernetes exposes logical ones – PODs. A logical Kubernetes container (POD) consists of physical containers whose networking goes through their ports, so the ports inside one POD must not be duplicated.
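
      To illustrate a POD made up of several physical containers sharing one network namespace, here is a minimal sketch of a manifest (the names demo-pod, web and sidecar, as well as the nginx and busybox images, are chosen only for the example and assume a working kubectl and cluster):

      essh@kubernetes-master:~$ cat << EOF | kubectl apply -f -
      apiVersion: v1
      kind: Pod
      metadata:
        name: demo-pod
      spec:
        containers:
        - name: web
          image: nginx
          ports:
          - containerPort: 80
        - name: sidecar
          image: busybox
          command: ["sh", "-c", "sleep 5 && wget -qO- http://localhost:80 && sleep 3600"]
      EOF

      The sidecar container reaches nginx via localhost precisely because all containers of one POD share a single network namespace, which is why their ports must not overlap.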

      Both orchestration systems use an Overlay Network between host nodes to emulate the presence of the managed units in a single local network space. This type of network is logical: it uses ordinary TCP/IP networks for transport and is designed to make the cluster nodes appear to be in a single network for cluster management and for exchanging information between its nodes, even when at the TCP/IP level they cannot be connected directly.

      The point is that when a developer describes a cluster, he can describe a network for only one node, while on deployment several instances of it are created and their number can change dynamically; in one network there cannot be three nodes with the same IP address and subnet (for example, 10.0.0.1), and it is wrong to require the developer to specify IP addresses, since it is not known which addresses are free and how many will be required. The overlay network takes over tracking the real IP addresses of nodes, which can be allocated randomly from the free ones and change as nodes in the cluster are re-created, and provides access to them via container IDs / PODs. With this approach, the user refers to specific entities rather than to dynamically changing IP addresses.

      Interaction is carried out using a balancer, which in Docker Swarm is not logically separated, while in Kubernetes it is created as a separate entity so that a specific implementation can be chosen, like other services. Such a balancer must be present in every cluster; within the Kubernetes ecosystem it is called a Service. It can be declared either separately, as a Service, or in a description together with the cluster, for example, as part of a Deployment. The service can be accessed by its IP address (see its description) or by its name, which is registered as a first-level domain in the built-in DNS server; for example, if the name of the service specified in the metadata is my_service, then the cluster can be accessed through it like this: curl my_service; … This is a fairly standard solution for the situation when the components of the system, along with their IP addresses, change over time (are re-created, new ones are added, old ones are deleted): traffic is sent through a proxy server, the IP or DNS addresses for the external network remain constant, while the internal ones can change, leaving their reconciliation to the proxy server.
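
      As an illustration, a Service can be attached to a Deployment and then resolved by name from inside the cluster (a minimal sketch; the names my-deployment and my-service and the nginx image are hypothetical, and a hyphen is used because a Kubernetes service name must be a valid DNS label):

      essh@kubernetes-master:~$ kubectl create deployment my-deployment --image=nginx

      essh@kubernetes-master:~$ kubectl expose deployment my-deployment --name=my-service --port=80

      essh@kubernetes-master:~$ kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://my-service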

      Both orchestration systems use the Ingress overlay network to provide access to themselves from the external network through a balancer, which maps the internal network to the external one based on the Linux kernel IP address mapping tables (iptables), separating them and allowing information to be exchanged even if there are identical IP addresses in the internal and external networks. And here, to maintain the connection between these potentially conflicting networks at the IP level, the overlay Ingress network is used. Kubernetes additionally provides the ability to create a logical entity – an Ingress controller – which allows you to configure a LoadBalancer or NodePort service depending on the traffic content at a level above HTTP, for example, routing based on address paths (an application router) or encrypting TLS / HTTPS traffic, as GCP and AWS do.
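
      As an illustration of routing based on address paths, here is a minimal sketch of an Ingress manifest (the host, the service names api-service and web-service and their ports are hypothetical, and an ingress controller, for example nginx-ingress, is assumed to be installed in the cluster):

      essh@kubernetes-master:~$ cat << EOF | kubectl apply -f -
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: demo-ingress
      spec:
        rules:
        - host: demo.example.com
          http:
            paths:
            - path: /api
              pathType: Prefix
              backend:
                service:
                  name: api-service
                  port:
                    number: 80
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: web-service
                  port:
                    number: 80
      EOF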

      Kubernetes is the result of the evolution of internal Google projects: first Borg, then Omega; based on the experience gained from those experiments, a fairly scalable architecture has developed. Let's highlight the main types of
