IT Cloud. Eugeny Shtoltc


= bitrix_12345;

      NAME_CLUSTER=bitrix;

      gcloud projects create $NAME_CLUSTER --name $NAME_CLUSTER;

      gcloud config set project $NAME_CLUSTER;

      gcloud projects list;

      You can see the project in the admin panel by expanding the drop-down list in the header and opening the All projects tab. A project that is no longer needed can be deleted with:

      gcloud projects delete NAME_PROJECT;

      A few subtleties: the --zone key is required and goes at the end, the disk should not be smaller than 10Gb, and the available machine types are listed at https://cloud.google.com/compute/docs/machine-types. If we have only one replica, a minimal configuration suitable for testing is created by default:

      gcloud container clusters create $NAME_CLUSTER --zone europe-north1-a

      If there is more than one, a standard configuration is created, whose parameters we will edit:

      $ gcloud container clusters create mycluster \
      --machine-type=n1-standard-1 --disk-size=10GB --image-type ubuntu \
      --scopes compute-rw,gke-default \
      --machine-type=custom-1-1024 \
      --cluster-version=1.11 --enable-autoupgrade \
      --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=2 \
      --zone europe-north1-a

      The --enable-autorepair key enables monitoring of node availability: if a node fails, it will be recreated. The key requires a Kubernetes version of at least 1.11, and at the time of writing the default version was 1.10, so the version has to be set explicitly with a key, for example --cluster-version=1.11.4-gke.12. But you can also pin only the major version, --cluster-version=1.11, and enable automatic upgrades with --enable-autoupgrade. We will also enable automatic adjustment of the number of nodes if there are not enough resources: --num-nodes=1 --min-nodes=1 --max-nodes=2 --enable-autoscaling.
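
      If the cluster has already been created without these keys, auto-repair and auto-upgrade can also be switched on for an existing node pool. A minimal sketch, assuming the default pool name default-pool and the cluster name and zone from the examples in this chapter:

      # enable node auto-repair and auto-upgrade on the existing default pool
      $ gcloud container node-pools update default-pool --cluster mycluster \
      --zone europe-north1-a --enable-autorepair
      $ gcloud container node-pools update default-pool --cluster mycluster \
      --zone europe-north1-a --enable-autoupgrade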

      Now let's talk about virtual cores and RAM. By default, the n1-standard-1 machine is used, which has one virtual core and 3.75 GB of RAM, in three copies, which together gives three virtual cores and 11.25 GB of RAM. It is important that the cluster has at least two virtual processor cores in total; otherwise, formally, according to the limits of the Kubernetes system containers, there will not be enough for full operation (some containers, for example the system ones, may not start). I will take two nodes with one core each, so the total number of cores will be two. The situation with RAM is similar: 1 GB (1024 MB) of RAM was enough for me to raise a container with NGINX, but not to raise a container with LAMP (Apache MySQL PHP): there was no longer room for the system service kube-dns-548976df6c-mlljx, which is responsible for DNS in the pods. Although it is not vital and will not even be useful to us, next time a more important service might fail to start instead of it. It is important to note that a cluster with 1 GB nodes did come up normally for me and everything was fine, so the total of 2 GB turned out to be a borderline value. I set 1280 MB (1.25 GB) per node, taking into account that the amount of RAM must be a multiple of 256 MB (0.25 GB) and be at least 1 GB per core. As a result, the cluster has 2 cores and 2.5 GB instead of 3 cores and 11.25 GB, which is a significant optimization of resources and price on a paid account.
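
      As a sketch of the calculation above: a custom machine type is written as custom-<vCPUs>-<RAM in MB>, so two single-core nodes with 1280 MB each give the cluster 2 cores and 2.5 GB in total. The cluster name here is only an example:

      # two nodes of one vCPU and 1280 MB (1.25 GB) each: 2 vCPUs and 2.5 GB in total
      $ gcloud container clusters create mycluster \
      --zone europe-north1-a \
      --machine-type=custom-1-1280 \
      --disk-size=10GB \
      --num-nodes=2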

      Now we need to connect to the cluster. We already have the key on the server in ${HOME}/.kube/config, and now we just need to log in:

      $ gcloud container clusters get-credentials b --zone europe-north1-a --project essch

      $ kubectl port-forward nginxlamp-74c8b5b7f-d2rsg 8080:8080

      Forwarding from 127.0.0.1:8080 -> 8080

      Forwarding from [::1]:8080 -> 8080

      $ google-chrome http://localhost:8080 # this won't work in Google Shell

      $ kubectl expose deployment nginxlamp --type="LoadBalancer" --port=8080
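
      The external address issued by the balancer can be watched through the created service; the service name below matches the Deployment name, since kubectl expose names the service that way by default:

      $ kubectl get service nginxlamp
      # the EXTERNAL-IP column shows <pending> until GCP allocates an address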

      To use kubectl locally, you need to install gcloud and then use it to install kubectl with the gcloud components install kubectl command, but let's not complicate the first steps for now.
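
      For reference, a minimal sketch of such a local setup, assuming the Google Cloud SDK is already installed on the workstation; the cluster name, zone and project are taken from the examples above:

      $ gcloud components install kubectl
      $ gcloud container clusters get-credentials mycluster --zone europe-north1-a --project essch
      $ kubectl config current-context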

      In the Services section of the admin panel, the POD will be available not only through the front-end load-balancer service, but also through the internal balancer of the Deployment. Although the service created this way will be preserved after re-creation, a config is more maintainable and explicit.
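
      As an illustration of the declarative approach, the objects created imperatively above can be exported to manifests and kept under version control; the nginxlamp names here follow the expose example and are assumptions:

      # save the generated objects as YAML manifests
      $ kubectl get deployment nginxlamp -o yaml > deployment.yaml
      $ kubectl get service nginxlamp -o yaml > service.yaml
      # the same objects can later be re-created from the files
      $ kubectl apply -f deployment.yaml -f service.yaml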

      It is also possible to have the number of nodes adjusted automatically depending on the load, for example on the number of containers with declared resource requests, using the keys --enable-autoscaling --min-nodes=1 --max-nodes=2.
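
      The same keys can be applied to an already running cluster; a sketch, assuming the cluster, pool and zone names used earlier in this chapter:

      $ gcloud container clusters update mycluster --zone europe-north1-a \
      --node-pool default-pool \
      --enable-autoscaling --min-nodes 1 --max-nodes 2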

      Simple cluster in GCP

      There are two ways to create a cluster: through the Google Cloud Platform graphical interface or through its API with the gcloud command. Let's see how this can be done through the UI. Next to the menu, click on the drop-down list and create a separate project. In the Kubernetes Engine section, choose to create a cluster. Let's set a name, 2 vCPUs, the europe-north1 zone (the data center in Finland is the closest to St. Petersburg) and the latest version of Kubernetes. After creating the cluster, click on Connect and select Cloud Shell. To create the cluster through the API, click the button in the upper right corner to open the console panel and enter in it:

      gcloud container clusters create mycluster --zone europe-north1-a

      After a while (it took me two and a half minutes), three virtual machines will be raised, the operating system installed on them and the disks mounted. Let's check:

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

      NAME LOCATION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

      mycluster europe-north1-a 35.228.37.100 n1-standard-1 1.10.9-gke.5 3 RUNNING

      esschtolts@cloudshell:~ (essch)$ gcloud compute instances list

      NAME MACHINE_TYPE EXTERNAL_IP STATUS

      gke-mycluster-default-pool-43710ef9-0168 n1-standard-1 35.228.73.217 RUNNING

      gke-mycluster-default-pool-43710ef9-39ck n1-standard-1 35.228.75.47 RUNNING

      gke-mycluster-default-pool-43710ef9-g76k n1-standard-1 35.228.117.209 RUNNING
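
      If you need to get onto one of these nodes directly, you can SSH into it by the instance name from the list above (a sketch; any of the three nodes will do):

      $ gcloud compute ssh gke-mycluster-default-pool-43710ef9-0168 --zone europe-north1-a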

      Now let's connect to the cluster itself:

      esschtolts@cloudshell:~ (essch)$ gcloud projects list

      PROJECT_ID NAME PROJECT_NUMBER

      agile-aleph-203917 My First Project 546748042692

      essch app 283762935665

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters get-credentials mycluster \
      --zone europe-north1-a \
      --project essch

      Fetching cluster endpoint and auth data.

      kubeconfig entry generated for mycluster.
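
      As a quick check that kubectl is now pointed at the new cluster, the three nodes listed by gcloud above should also be visible through it:

      $ kubectl get nodes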

      We don't have any pods in the cluster yet:

      esschtolts@cloudshell:~ (essch)$ kubectl get pods

      No resources found.
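
      The cluster is empty so far, so as a first sketch we can start a Deployment ourselves; the name and image below are placeholders for illustration, not the ones used further in the book:

      $ kubectl create deployment nginx --image=nginx
      $ kubectl get pods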
