docker build .
fi
docker run -d --name myapp -p 80:80 myimage bash
… It is clear that the general parameters, the image name and the container name should be moved into variables, that we should check that the Dockerfile is present and valid, and only then remove the container, and so on. To grasp the real scale, without even going into container interaction, cloning (scaling) of such groups and the like, I will just mention that a docker run command can easily exceed one or two dozen lines: a dozen forwarded ports, mounted folders, memory and CPU limits, links to other containers, and a few more specific parameters (a sketch of such an invocation is given after the compose example below). This is clearly not good, yet splitting the application into many containers is hard in this form, because there is no map of how the containers interact. The question also arises: isn't this a lot of work just to let a user start or rebuild a container? Often the system administrator's answer boils down to giving access only to a select few. But there is a solution here as well: Docker Compose, a tool for working with a group of containers:
# docker-compose.yml
version: '3'
services:
  myapp:
    container_name: myapp
    image: myimage
    ports:
      - 80:80
    build: .
… To start it, run docker-compose up -d, and to rebuild from scratch, docker-compose down; docker-compose up -d. Moreover, when the configuration changes and a full rebuild is not needed, it will simply be updated.
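For comparison, here is a hedged sketch of what a single docker run invocation can grow into once ports, volumes, limits and links pile up; all names, paths and values are illustrative and not taken from the book:

docker run -d --name myapp \
  -p 80:80 -p 443:443 \
  -v /srv/myapp/config:/etc/myapp:ro \
  -v /srv/myapp/data:/var/lib/myapp \
  -e APP_ENV=production \
  -e DB_HOST=db \
  --memory 512m --cpus 1.5 \
  --link mysql:db \
  --restart unless-stopped \
  myimage

With Docker Compose, all of this lives in a version-controlled file instead of someone's shell history.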
Now that we have simplified the management of a single container, let's work with a group. Here, only the config itself changes for us:
# docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql
  nginx:
    image: nginx
    ports:
      - 80:80
  myapp:
    container_name: myapp
    build: .
    depends_on:
      - mysql
    image: myimage
    links:
      - mysql:db
      - nginx
… Here we see the whole picture at once: the containers are connected into one network, in which the application can reach mysql and nginx via the db and nginx hostnames respectively, and the myapp container will be created only after the mysql database has been brought up, even if that takes some time.
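A quick way to convince yourself that the aliases work is to run a command inside the application container. This is a hedged sketch that assumes the stack above is already running (docker-compose up -d) and that the myapp image contains the ping and getent utilities:

docker-compose exec myapp ping -c 1 db        # mysql is reachable under its link alias db
docker-compose exec myapp getent hosts nginx  # nginx is reachable under its service name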
Service Discovery
With the growth of a cluster, the probability of nodes failing increases, and detecting what has happened by hand becomes harder; Service Discovery systems are designed to automate the detection of newly appeared services and of their disappearance. But for the cluster to be able to detect its state, given that the system is decentralized, the nodes must be able to exchange messages with each other and elect a leader; examples are Consul, etcd and ZooKeeper. We will consider Consul because of the following features: the whole program is a single file, it is extremely easy to use and configure, it has a high-level interface (ZooKeeper does not have one; it is believed that third-party applications implementing it should appear over time), and it is written in a language that is undemanding of machine resources (Consul in Go, ZooKeeper in Java); we will neglect the fact that it is not supported by some other systems, such as ClickHouse (which supports ZooKeeper by default).
Let's check the distribution of information between the nodes using the distributed key-value storage: if we add a record on one node, it should propagate to the other nodes, and there should not be a hard-coded master node. Since Consul consists of a single executable file, download it from the official website at https://www.consul.io/downloads.html on each node:
wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip
unzip consul.zip
rm -f consul.zip
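After unpacking, one can check the binary and, optionally, put it on the PATH so that it can be called simply as consul; both steps are an assumption on my part, the book does not show them:

./consul version
sudo mv consul /usr/local/bin/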
Now you need to start one node, for now as the master, with consul agent -server -ui, and the others as slaves with the same consul agent -server -ui command. After that, we stop the Consul running in master mode and start it again as an equal; as a result, the Consul nodes will re-elect a temporary leader, and in case of its failure they will re-elect again. Let's check the work of our cluster with consul members:
consul members;
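For reference, here is a hedged sketch of how the three agents could be started and joined into one cluster; the node names, IP addresses and data directories are hypothetical, and the -bootstrap-expect and -join flags are standard Consul agent options rather than something shown in the book:

# on the first node (waits for three servers before electing a leader)
consul agent -server -ui -bootstrap-expect=3 -node=node1 -bind=192.168.0.1 -data-dir=/tmp/consul
# on the second and third nodes, join the first one
consul agent -server -ui -node=node2 -bind=192.168.0.2 -data-dir=/tmp/consul -join=192.168.0.1
consul agent -server -ui -node=node3 -bind=192.168.0.3 -data-dir=/tmp/consul -join=192.168.0.1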
Now let's check the distribution of information in our storage:
curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
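The GET response comes back as JSON with the value base64-encoded in the Value field; to see the plain value, Consul's KV API accepts the raw query parameter (the host placeholder is left as in the commands above):

curl -s .....:8500/v1/kv/group1/key1?raw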
Let's set up service monitoring; for more details see the documentation at https://www.consul.io/docs/agent/options.html#telemetry and, for example, https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b
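As a rough illustration, the telemetry section of the agent configuration that a statsd-based setup like the one in the article above relies on might look as follows; the address is a placeholder and this snippet is my assumption, not a configuration taken from the book:

{
  "telemetry": {
    "statsd_address": "127.0.0.1:9125"
  }
}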
In order not to configure anything, we will use the container in development mode with an already configured IP address of 172.17.0.2:
essh@kubernetes-master:~$ mkdir consul && cd $_
essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
e7c96db7181b: Pull complete
3404d2df15cb: Pull complete
1b2797650ac6: Pull complete
42eaf145982e: Pull complete
cef844389e8c: Pull complete
bc7449359c58: Pull complete
Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181
Status: Downloaded newer image for consul:latest
c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1
essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq '.[] | .NetworkSettings.Networks.bridge.IPAddress'
"172.17.0.4"
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev