IT Cloud. Eugeny Shtoltc
essh@kubernetes-master:~/node-cluster$ cat Kubernetes/outputs.tf
output "endpoint" {
value = google_container_cluster.node-ks.endpoint
sensitive = true
}
output "name" {
value = google_container_cluster.node-ks.name
sensitive = true
}
output "cluster_ca_certificate" {
value = base64decode(google_container_cluster.node-ks.master_auth.0.cluster_ca_certificate)
}
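The cluster CA certificate comes out of the GKE API base64-encoded, which is why the output wraps it in base64decode(). A quick sketch of what that function does, using the shell's base64 utility on a made-up string (the certificate value here is not real):

```shell
# encode a sample string the way the API stores the certificate,
# then decode it back, as Terraform's base64decode() would
encoded=$(printf '%s' 'sample-ca-certificate' | base64)
printf '%s' "$encoded" | base64 --decode
```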
essh@kubernetes-master:~/node-cluster$ cat main.tf
// module "kubernetes" {
// source = "ESSch / kubernetes / google"
// version = "0.0.2"
//
// project_name = "node-cluster-243923"
// region = "europe-west2"
//}
provider "google" {
credentials = file("./kubernetes_key.json")
project = "node-cluster-243923"
region = "europe-west2"
}
module "Kubernetes" {
source = "./Kubernetes"
project_name = "node-cluster-243923"
region = "europe-west2"
}
module "nodejs" {
source = "./nodejs"
endpoint = module.Kubernetes.endpoint
cluster_ca_certificate = module.Kubernetes.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster$ cat nodejs/variable.tf
variable "endpoint" {}
variable "cluster_ca_certificate" {}
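Empty variable blocks are enough for the module to accept the values passed in from main.tf, but types and descriptions document the contract. A possible sketch (the types and descriptions here are an assumption, not from the listing above):

```hcl
variable "endpoint" {
  type        = string
  description = "Kubernetes master endpoint, passed from the Kubernetes module"
}

variable "cluster_ca_certificate" {
  type        = string
  description = "Decoded cluster CA certificate for TLS verification"
}
```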
To check that traffic is balanced across all nodes, start NGINX, replacing the standard page with the hostname. We'll replace it with a simple command call and restart the server. To see how the server is started, look at its call in the Dockerfile: CMD ["nginx", "-g", "daemon off;"], which is equivalent to running nginx -g 'daemon off;' at the command line. Notice that the Dockerfile does not use BASH as the launching environment but starts the server process itself: if BASH were the entry point, the shell would stay alive after a server crash, keeping the container from crashing and being re-created. But for our experiments, BASH is fine:
essh@kubernetes-master:~/node-cluster$ sudo docker run -it nginx:1.17.0 which nginx
/usr/sbin/nginx
essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm -p 8333:80 nginx:1.17.0 /bin/bash -c "echo \$HOSTNAME > /usr/share/nginx/html/index2.html && /usr/sbin/nginx -g 'daemon off;'"
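Note the escaped \$HOSTNAME in that command: without the backslash, the host shell would substitute its own hostname before docker ever ran, instead of letting bash inside the container expand it. A minimal sketch of the difference (host-machine is a made-up stand-in value, not a real host):

```shell
# assumed value standing in for the host's real hostname
HOSTNAME=host-machine
echo "unescaped: $HOSTNAME"   # expanded by this shell immediately
echo "escaped: \$HOSTNAME"    # passed on literally, expanded later
```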
Now let's create our PODs in triplicate with NGINX, which Kubernetes will by default try to distribute across different servers. Let's also add a service as a balancer:
essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf
terraform {
required_version = ">= 0.12.0"
}
data "google_client_config" "default" {}
provider "kubernetes" {
host = var.endpoint
token = data.google_client_config.default.access_token
cluster_ca_certificate = var.cluster_ca_certificate
load_config_file = false
}
essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf
resource "kubernetes_deployment" "nodejs" {
metadata {
name = "terraform-nodejs"
labels = {
app = "NodeJS"
}
}
spec {
replicas = 3
selector {
match_labels = {
app = "NodeJS"
}
}
template {
metadata {
labels = {
app = "NodeJS"
}
}
spec {
container {
image = "nginx:1.17.0"
name = "node-js"
command = ["/bin/bash"]
args = ["-c", "echo $HOSTNAME > /usr/share/nginx/html/index.html && /usr/sbin/nginx -g 'daemon off;'"]
}
}
}
}
}
resource "kubernetes_service" "nodejs" {
metadata {
name = "terraform-nodejs"
}
spec {
selector = {
app = kubernetes_deployment.nodejs.metadata.0.labels.app
}
port {
port = 80
target_port = var.target_port
}
type = "LoadBalancer"
}
}
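The service's target_port references var.target_port, which the variable.tf listing above does not declare. A minimal declaration that would make the plan work, assuming the NGINX container listens on its default port 80:

```hcl
variable "target_port" {
  default = 80
}
```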