
{

      container {

      image = "Nginx: 1.7.9"

      name = "Nginx"

      port {

      container_port = 80

      }

      }

      }

      }

      Commands:

      terraform init # downloading dependencies according to configs, checking them

      terraform validate # syntax check

      terraform plan # show in detail how the infrastructure will be changed and why, for example,

      whether only a service's metadata will be updated or the service itself will be re-created, which is often unacceptable for databases.

      terraform apply # applying changes
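
      To be sure that exactly the reviewed changes get applied, the plan can be saved to a file and that same file passed to apply; a minimal sketch (the file name tfplan is arbitrary):

      terraform plan -out=tfplan # save the reviewed execution plan to a file
      terraform show tfplan # inspect the saved plan once more
      terraform apply tfplan # apply exactly the saved plan without recalculating it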

      The common part for all providers is the core.
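
      The core is versioned separately from the provider plugins, and both can be pinned in the configuration; a minimal sketch in Terraform 0.12 syntax (the version constraints and region are illustrative):

      terraform {
        required_version = ">= 0.12" # constraint on the core itself
      }

      provider "aws" {
        version = "~> 2.0" # provider plugin constraint; in newer Terraform this moves to required_providers
        region  = "us-west-2"
      }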

      $ which aws

      $ aws configure # https://www.youtube.com/watch?v=IxA1IPypzHs

      $ cat aws.tf

      # https://www.terraform.io/docs/providers/aws/r/instance.html

      resource "aws_instance" "ec2instance" {

      ami = "$ {var.ami}"

      instance_type = "t2.micro"

      }

      resource "aws_security_group" "instance_gc" {

      …

      }
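
      The "${var.ami}" reference assumes the variable is declared elsewhere in the configuration; a minimal sketch of such a declaration (the file name variables.tf and the AMI ID are illustrative):

      $ cat variables.tf

      variable "ami" {
        description = "AMI ID for the EC2 instance"
        default     = "ami-0c55b159cbfafe1f0" # example value, substitute an AMI valid in your region
      }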

      $ cat run.sh

      export AWS_ACCESS_KEY_ID="anaccesskey"

      export AWS_SECRET_ACCESS_KEY="asecretkey"

      export AWS_DEFAULT_REGION="us-west-2"

      terraform plan

      terraform apply
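
      The credentials here come from environment variables; the same settings could instead be written in the provider block, though keys are better kept out of version control. A sketch of the equivalent configuration:

      provider "aws" {
        region     = "us-west-2"
        access_key = "anaccesskey" # better supplied via environment variables or a shared credentials file
        secret_key = "asecretkey"
      }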

      $ cat gce.tf # https://www.terraform.io/docs/providers/google/index.html#

      # Google Cloud Platform Provider

      provider "google" {

      credentials = "$ {file (" account.json ")}"

      project = "phalcon"

      region = "us-central1"

      }

      # https://www.terraform.io/docs/providers/google/r/app_engine_application.html

      resource "google_project" "my_project" {

      name = "My Project"

      project_id = "your-project-id"

      org_id = "1234567"

      }

      resource "google_app_engine_application" "app" {

      project = "$ {google_project.my_project.project_id}"

      location_id = "us-central"

      }
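
      To see where the created App Engine application is reachable, its exported attributes can be printed with an output; a sketch assuming the default_hostname attribute documented for this resource:

      output "app_hostname" {
        value = "${google_app_engine_application.app.default_hostname}"
      }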

      # google_compute_instance

      resource "google_compute_instance" "default" {

      name = "test"

      machine_type = "n1-standard-1"

      zone = "us-central1-a"

      tags = ["foo", "bar"]

      boot_disk {

      initialize_params {

      image = "debian-cloud / debian-9"

      }

      }

      // Local SSD disk

      scratch_disk {

      }

      network_interface {

      network = "default"

      access_config {

      // Ephemeral IP

      }

      }

      metadata = {

      foo = "bar"

      }

      metadata_startup_script = "echo hi > /test.txt"

      service_account {

      scopes = ["userinfo-email", "compute-ro", "storage-ro"]

      }

      }
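
      The ephemeral IP assigned through access_config can be shown after apply with an output; a sketch using indexed attribute access:

      output "instance_external_ip" {
        value = "${google_compute_instance.default.network_interface[0].access_config[0].nat_ip}"
      }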

      Extensibility is provided through the external data source, whose program can be a Bash or Python script:

      data "external" "python3" {

      program = ["Python3"]

      }
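
      The external data source passes a JSON query to the program on stdin and expects a JSON object of strings on stdout; a minimal sketch with a hypothetical ip.sh script that reports the host's first IP address:

      $ cat ip.sh

      #!/bin/bash
      # print the host's first IP address as a JSON object for Terraform
      echo "{\"ip\": \"$(hostname -I | awk '{print $1}')\"}"

      data "external" "host_ip" {
        program = ["bash", "${path.module}/ip.sh"]
      }

      # the value is then available as "${data.external.host_ip.result.ip}"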

      Building a cluster of machines with Terraform

      Clustering with Terraform is covered in the section Building Infrastructure in GCP. Now let's pay more attention to the cluster itself rather than to the tools for creating it. Through the GCE admin panel I will create a node-cluster project (the current project is displayed in the interface header). I downloaded a key for Kubernetes via IAM and administration -> Service accounts -> Create a service account; when creating it, I selected the Owner role and saved the key into the project folder as kubernetes_key.json:

      essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.json ./kubernetes_key.json
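
      The same service account and key can also be created from the command line instead of the web console; a sketch with gcloud, where the service account name kubernetes is an assumption and the project ID follows the key file name above:

      essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts create kubernetes --display-name "kubernetes"
      essh@kubernetes-master:~/node-cluster$ gcloud projects add-iam-policy-binding node-cluster-243923 --member serviceAccount:kubernetes@node-cluster-243923.iam.gserviceaccount.com --role roles/owner
      essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts keys create kubernetes_key.json --iam-account kubernetes@node-cluster-243923.iam.gserviceaccount.com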

      Downloaded terraform:

      essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip > /dev/null 2>/dev/null

      essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip

      Archive: terraform_0.12.2_linux_amd64.zip

      inflating: terraform

      essh@kubernetes-master:~/node-cluster$ ./terraform version

      Terraform v0.12.2

      Added the GCE provider and started downloading its plugins (the provider "drivers"):

      essh@kubernetes-master:~/node-cluster$ cat main.tf

      provider "google" {

      credentials = "$ {file (" kubernetes_key.json ")}"

      project
