HashiCorp Vault and Consul on AWS with Terraform


Note

This post is based on vault-guides/operations/provision-vault/quick-start/terraform-aws/, and not much new material is added. The guide printed as output from terraform apply is good in general, but I felt a more detailed walkthrough would be helpful. So, here you go.


repo:Vault-Consul-on-AWS-with-Terraform


We'll be using network_aws, vault_aws, and consul_aws modules.
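
For orientation, the root configuration wires these modules together roughly as below. This is only a minimal sketch: the module source paths and the input/output names (vpc_id, subnet_ids, etc.) are assumptions for illustration, not copied from the repo.

# main.tf (sketch -- module sources and attribute names are assumptions)
module "network_aws" {
  source = "github.com/hashicorp-modules/network-aws"
  name   = "vault-cluster"
}

module "consul_aws" {
  source     = "github.com/hashicorp-modules/consul-aws"
  vpc_id     = "${module.network_aws.vpc_id}"
  subnet_ids = "${module.network_aws.subnet_public_ids}"
}

module "vault_aws" {
  source     = "github.com/hashicorp-modules/vault-aws"
  vpc_id     = "${module.network_aws.vpc_id}"
  subnet_ids = "${module.network_aws.subnet_public_ids}"
}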


If you'd like to run Vault and Consul in containers instead, please check out these:

  1. Docker Compose - Hashicorp's Vault and Consul Part A (install vault, unsealing, static secrets, and policies)
  2. Docker Compose - Hashicorp's Vault and Consul Part B (EaaS, dynamic secrets, leases, and revocation)
  3. Docker Compose - Hashicorp's Vault and Consul Part C (Consul)
  4. Docker & Kubernetes : HashiCorp's Vault and Consul on minikube
  5. Docker & Kubernetes : HashiCorp's Vault and Consul - Auto-unseal using Transit Secrets Engine





Setup

At the end, we'll have the following:

  1. Instances:

    vault-cluster-instances.png

  2. Two load balancers:

    two-LBs.png

  3. Vault LB Listeners:

    vault-listeners.png

  4. Consul LB Listeners:

    consul-listeners.png

  5. Targets in Vault LB Listeners:

    vault-tg.png

  6. Targets in Consul LB Listeners:

    consul-tg.png

  7. Auto scaling groups:

    vault-asg.png

The provider is defined in variables.tf:

variable "provider"     { default = "aws" }

The AWS provider offers several ways to provide credentials for authentication. Among them, we'll use environment variables:

$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_DEFAULT_REGION="us-east-1"

The number of instances (ASG) is dependent on the number of subnets:

desired_capacity     = "${var.count != -1 ? var.count : length(var.subnet_ids)}"
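
In other words, when count is left at its default, one instance is launched per subnet; setting count to a positive number overrides that. A minimal sketch of the pieces involved (the declarations below are illustrative, with assumed defaults):

# variables.tf (sketch -- defaults are assumptions)
variable "count"      { default = -1 }     # -1 means "one instance per subnet"
variable "subnet_ids" { type = "list" }

# With 3 subnet IDs and count = -1:
#   desired_capacity = length(var.subnet_ids) = 3
# With count = 5:
#   desired_capacity = 5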





Terraform init

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration.

If no arguments are given, the configuration in the current working directory is initialized.

$ terraform init
Initializing modules...
- module.network_aws
- module.consul_aws
- module.vault_aws
- module.network_aws.consul_auto_join_instance_role
- module.network_aws.ssh_keypair_aws
- module.network_aws.bastion_consul_client_sg
- module.network_aws.ssh_keypair_aws.tls_private_key
- module.consul_aws.consul_auto_join_instance_role
- module.consul_aws.consul_server_sg
- module.consul_aws.consul_lb_aws
- module.consul_aws.consul_server_sg.consul_client_ports_aws
- module.vault_aws.consul_auto_join_instance_role
- module.vault_aws.vault_server_sg
- module.vault_aws.consul_client_sg
- module.vault_aws.vault_lb_aws

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.11"
* provider.null: version = "~> 2.1"
* provider.random: version = "~> 2.1"
* provider.template: version = "~> 2.1"
* provider.tls: version = "~> 2.0"

Terraform has been successfully initialized!
...
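
As the output above suggests, it is a good idea to pin the provider versions so that a later terraform init doesn't pull in a breaking major release. A minimal sketch (Terraform 0.11-style provider blocks; the region value is just an example):

provider "aws" {
  version = "~> 2.11"
  region  = "us-east-1"
}

provider "template" {
  version = "~> 2.1"
}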





Terraform plan

The terraform plan command is used to create an execution plan. Terraform performs a refresh and then determines what actions are necessary to achieve the desired state specified in the configuration files.

The terraform plan command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.base_install: Refreshing state...
data.template_file.consul_install: Refreshing state...
data.template_file.vault_install: Refreshing state...
data.template_file.bastion: Refreshing state...
data.template_file.vault: Refreshing state...
data.template_file.vault_init: Refreshing state...
data.template_file.bastion_init: Refreshing state...
data.aws_elb_service_account.vault_lb_access_logs: Refreshing state...
data.aws_ami.base: Refreshing state...
data.aws_iam_policy_document.consul: Refreshing state...
data.aws_availability_zones.main: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
data.aws_elb_service_account.consul_lb_access_logs: Refreshing state...
data.aws_iam_policy_document.consul: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
data.aws_iam_policy_document.assume_role: Refreshing state...
data.aws_iam_policy_document.consul: Refreshing state...
...





Terraform apply

The terraform apply command is used to apply the changes required to reach the desired state of the configuration:

$ terraform apply
...
Outputs:

bastion_ips_public = [
    18.207.107.201
]
...
bastion_username = ec2-user
...
consul_lb_dns = consul-lb-e16b91cb-2137033131.us-east-1.elb.amazonaws.com
...
vault_lb_dns = vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com
...

Here is additional output from the terraform apply command:

# ------------------------------------------------------------------------------
# vault-cluster Network
# ------------------------------------------------------------------------------
A private RSA key has been generated and downloaded locally. The file
permissions have been changed to 0600 so the key can be used immediately for
SSH or scp.

If you're not running Terraform locally (e.g. in TFE or Jenkins) but are using
remote state and need the private key locally for SSH, run the below command to
download.

  $ echo "$(terraform output private_key_pem)" \
      > vault-cluster-346cee5b.key.pem \
      && chmod 0600 vault-cluster-346cee5b.key.pem

Run the below command to add this private key to the list maintained by
ssh-agent so you're not prompted for it when using SSH or scp to connect to
hosts with your public key.

  $ ssh-add vault-cluster-346cee5b.key.pem

The public part of the key loaded into the agent ("public_key_openssh" output)
has been placed on the target system in ~/.ssh/authorized_keys.
  
  $ ssh -A -i vault-cluster-346cee5b.key.pem ec2-user@18.207.107.201

To force the generation of a new key, the private key instance can be "tainted"
using the below command if the private key was not overridden.

  $ terraform taint -module=network_aws.ssh_keypair_aws.tls_private_key \
      tls_private_key.key

# ------------------------------------------------------------------------------
# Local HTTP API Requests
# ------------------------------------------------------------------------------

If you're making HTTP API requests outside the Bastion (locally), set
the below env vars.

The `vault_public` and `consul_public` variables must be set to true for
requests to work.

`vault_public`: 1
`consul_public`: 1

  $ export VAULT_ADDR=http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com:8200
  $ export CONSUL_ADDR=http://consul-lb-e16b91cb-2137033131.us-east-1.elb.amazonaws.com:8500

# ------------------------------------------------------------------------------
# Vault Cluster
# ------------------------------------------------------------------------------

Once on the Bastion host, you can use Consul's DNS functionality to seamlessly
SSH into other Consul or Nomad nodes if they exist.

  $ ssh -A ec2-user@consul.service.consul

  # Vault must be initialized & unsealed for this command to work
  $ ssh -A ec2-user@vault.service.consul

# ------------------------------------------------------------------------------
# vault-cluster Vault Dev Guide Setup
# ------------------------------------------------------------------------------

If you're following the "Dev Guide" with the provided defaults, Vault is
running in -dev mode and using the in-memory storage backend.

The Root token for your Vault -dev instance has been set to "root" and placed in
`/srv/vault/.vault-token`, the `VAULT_TOKEN` environment variable has already
been set by default.

  $ echo ${VAULT_TOKEN} # Vault Token being used to authenticate to Vault
  $ sudo cat /srv/vault/.vault-token # Vault Token has also been placed here

If you're using a storage backend other than in-mem (-dev mode), you will need
to initialize Vault using steps 2 & 3 below.

# ------------------------------------------------------------------------------
# vault-cluster Vault Quick Start/Best Practices Guide Setup
# ------------------------------------------------------------------------------

If you're following the "Quick Start Guide" or "Best Practices" guide, you won't
be able to start interacting with Vault from the Bastion host yet as the Vault
server has not been initialized & unsealed. Follow the below steps to set this
up.

1.) SSH into one of the Vault servers registered with Consul, you can use the
below command to accomplish this automatically (we'll use Consul DNS moving
forward once Vault is unsealed).

  $ ssh -A ec2-user@$(curl http://127.0.0.1:8500/v1/agent/members | jq -M -r \
      '[.[] | select(.Name | contains ("vault-cluster-vault")) | .Addr][0]')

2.) Initialize Vault

  $ vault operator init

3.) Unseal Vault using the "Unseal Keys" output from the `vault init` command
and check the seal status.

  $ vault operator unseal <UNSEAL_KEY_1>
  $ vault operator unseal <UNSEAL_KEY_2>
  $ vault operator unseal <UNSEAL_KEY_3>
  $ vault status

Repeat steps 1.) and 3.) to unseal the other "standby" Vault servers as well to
achieve high availability.

4.) Logout of the Vault server (ctrl+d) and check Vault's seal status from the
Bastion host to verify you can interact with the Vault cluster from the Bastion
host Vault CLI.

  $ vault status

# ------------------------------------------------------------------------------
# vault-cluster Vault Getting Started Instructions
# ------------------------------------------------------------------------------

You can interact with Vault using any of the
CLI (https://www.vaultproject.io/docs/commands/index.html) or
API (https://www.vaultproject.io/api/index.html) commands.

Vault UI: http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com (Public)

The Vault nodes are in a public subnet with UI & SSH access open from the
internet. WARNING - DO NOT DO THIS IN PRODUCTION!

To start interacting with Vault, set your Vault token to authenticate requests.

If using the "Vault Dev Guide", Vault is running in -dev mode & this has been set
to "root" for you. Otherwise we will use the "Initial Root Token" that was output
from the `vault operator init` command.

  $ echo ${VAULT_ADDR} # Address you will be using to interact with Vault
  $ echo ${VAULT_TOKEN} # Vault Token being used to authenticate to Vault
  $ export VAULT_TOKEN= # If Vault token has not been set

Use the CLI to write and read a generic secret.

  $ vault kv put secret/cli foo=bar
  $ vault kv get secret/cli

Use the HTTP API with Consul DNS to write and read a generic secret with
Vault's KV secret engine.

If you're making HTTP API requests to Vault from the Bastion host,
the below env var has been set for you.

  $ export VAULT_ADDR=http://vault.service.vault:8200

  $ curl \
      -H "X-Vault-Token: ${VAULT_TOKEN}" \
      -X POST \
      -d '{"data": {"foo":"bar"}}' \
      ${VAULT_ADDR}/v1/secret/data/api | jq '.' # Write a KV secret
  $ curl \
      -H "X-Vault-Token: ${VAULT_TOKEN}" \
      ${VAULT_ADDR}/v1/secret/data/api | jq '.' # Read a KV secret

# ------------------------------------------------------------------------------
# vault-cluster Consul
# ------------------------------------------------------------------------------

You can now interact with Consul using any of the CLI
(https://www.consul.io/docs/commands/index.html) or
API (https://www.consul.io/api/index.html) commands.

Consul UI: consul-lb-e16b91cb-2137033131.us-east-1.elb.amazonaws.com (Public)

The Consul nodes are in a public subnet with UI & SSH access open from the
internet. WARNING - DO NOT DO THIS IN PRODUCTION!

Use the CLI to retrieve the Consul members, write a key/value, and read
that key/value.

  $ consul members # Retrieve Consul members
  $ consul kv put cli bar=baz # Write a key/value
  $ consul kv get cli # Read a key/value

Use the HTTP API to retrieve the Consul members, write a key/value,
and read that key/value.

If you're making HTTP API requests to Consul from the Bastion host,
the below env var has been set for you.

  $ export CONSUL_ADDR=http://127.0.0.1:8500

  $ curl \
      -X GET \
      ${CONSUL_ADDR}/v1/agent/members | jq '.' # Retrieve Consul members
  $ curl \
      -X PUT \
      -d 'bar=baz' \
      ${CONSUL_ADDR}/v1/kv/api | jq '.' # Write a KV
  $ curl \
      -X GET \
      ${CONSUL_ADDR}/v1/kv/api | jq '.' # Read a KV

Our Vault nodes are in a public subnet with UI & SSH access open from the internet, which is not recommended for production.

It does, however, let us reach the Vault UI directly at http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com:

initialize-vault.png

We need the private key locally to SSH into the Bastion. Let's run the command below in the Terraform root directory to download it:

$ echo "$(terraform output private_key_pem)" \
      > vault-cluster-346cee5b.key.pem \
      && chmod 0600 vault-cluster-346cee5b.key.pem

We can log on to the Bastion using the downloaded key:

$ ssh -i vault-cluster-346cee5b.key.pem ec2-user@18.207.107.201
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ export vault_public=1
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ export consul_public=1
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ echo $vault_public
1
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ export VAULT_ADDR=http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com:8200
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ export CONSUL_ADDR=http://consul-lb-e16b91cb-2137033131.us-east-1.elb.amazonaws.com:8500
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 .ssh]$ ssh -A ec2-user@consul.service.consul
The authenticity of host 'consul.service.consul (10.139.1.121)' can't be established.
ECDSA key fingerprint is 87:f1:3a:53:f0:f1:21:e6:c6:0d:0f:63:64:13:8a:bf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'consul.service.consul,10.139.1.121' (ECDSA) to the list of known hosts.

[ec2-user@vault-cluster-consul-i-077ea838d52e59b01 ~]$ ssh -A ec2-user@vault.service.consul
ssh: Could not resolve hostname vault.service.consul: Name or service not known

The name vault.service.consul doesn't resolve yet: Vault has not been initialized and unsealed, so Consul has no healthy vault service instances to return for that DNS query. We'll fix that in the next section.





Vault cluster setup - Initialization

Now that we're on the Bastion, we're ready to set up the Vault cluster.

Note that we won't be able to start interacting with Vault from the Bastion host yet as the Vault server has not been initialized & unsealed.

Follow the steps below to set this up.

  1. SSH into one of the Vault servers registered with Consul. We can use the command below to do this automatically (we'll use Consul DNS going forward once Vault is unsealed):

    $ ssh -A ec2-user@$(curl http://127.0.0.1:8500/v1/agent/members | jq -M -r \
          '[.[] | select(.Name | contains ("vault-cluster-vault")) | .Addr][0]')
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2698    0  2698    0     0   621k      0 --:--:-- --:--:-- --:--:--  658k
    The authenticity of host '10.139.2.85 (10.139.2.85)' can't be established.
    ECDSA key fingerprint is fc:fc:17:8a:96:e3:dc:db:5d:e3:18:33:b3:d0:ae:7a.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.139.2.85' (ECDSA) to the list of known hosts.
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ 
    

  2. Initialize Vault:

    $ vault operator init
    Unseal Key 1: V/knpsNkYT9mt/D000UQ8TTdyUcAIZXffuwUlN9/mgGA
    Unseal Key 2: U1YIP0kuXVDQmNbsSyRNGAs6q6eurQsHIR/bBfdSQ2xi
    Unseal Key 3: FM39o3AM4ohdIJy0T8LfwlXJJBJcouDIKWAPI2/WoU3H
    Unseal Key 4: abr3JNbTSvAbE6NP9UTewDgZQ3eSFjGoAhQgx/Kzan7P
    Unseal Key 5: vXKiY08FCjUBJz1Fo6TpbqBll7xg776y+VtmlTxSPOPw
    
    Initial Root Token: 7qaONQY0KF35OsAbKSnqYJFj
    
    Vault initialized with 5 key shares and a key threshold of 3. Please securely
    distribute the key shares printed above. When the Vault is re-sealed,
    restarted, or stopped, you must supply at least 3 of these keys to unseal it
    before it can start servicing requests.
    
    Vault does not store the generated master key. Without at least 3 key to
    reconstruct the master key, Vault will remain permanently sealed!
    
    It is possible to generate new unseal keys, provided you have a quorum of
    existing unseal keys shares. See "vault operator rekey" for more information.
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ 
    

  3. Vault starts in a sealed state and is unsealed by providing the unseal keys. By default, Vault uses a technique known as Shamir's secret sharing to split the master key into 5 shares, any 3 of which are required to reconstruct it.

    Unseal Vault using the "Unseal Keys" output from the vault operator init command and check the seal status:

    # vault operator unseal <UNSEAL_KEY_1>
    # vault operator unseal <UNSEAL_KEY_2>
    # vault operator unseal <UNSEAL_KEY_3>
    
    
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ vault operator unseal V/knpsNkYT9mt/D000UQ8TTdyUcAIZXffuwUlN9/mgGA
    Key                Value
    ---                -----
    Seal Type          shamir
    Initialized        true
    Sealed             true
    Total Shares       5
    Threshold          3
    Unseal Progress    1/3
    Unseal Nonce       7da61724-24b4-7f29-e15c-55d63931f00d
    Version            0.11.3
    HA Enabled         true
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ 
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ vault operator unseal U1YIP0kuXVDQmNbsSyRNGAs6q6eurQsHIR/bBfdSQ2xi
    Key                Value
    ---                -----
    Seal Type          shamir
    Initialized        true
    Sealed             true
    Total Shares       5
    Threshold          3
    Unseal Progress    2/3
    Unseal Nonce       7da61724-24b4-7f29-e15c-55d63931f00d
    Version            0.11.3
    HA Enabled         true
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ 
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ vault operator unseal FM39o3AM4ohdIJy0T8LfwlXJJBJcouDIKWAPI2/WoU3H
    Key                    Value
    ---                    -----
    Seal Type              shamir
    Initialized            true
    Sealed                 false
    Total Shares           5
    Threshold              3
    Version                0.11.3
    Cluster Name           vault-cluster
    Cluster ID             24315b0e-eacc-a223-46af-80be7cf26110
    HA Enabled             true
    HA Cluster             n/a
    HA Mode                standby
    Active Node Address    <none>
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ 
    
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$ vault status
    Key             Value
    ---             -----
    Seal Type       shamir
    Initialized     true
    Sealed          false
    Total Shares    5
    Threshold       3
    Version         0.11.3
    Cluster Name    vault-cluster
    Cluster ID      24315b0e-eacc-a223-46af-80be7cf26110
    HA Enabled      true
    HA Cluster      https://10.139.2.85:8201
    HA Mode         active
    [ec2-user@vault-cluster-vault-i-0dcc43087309e7b72 ~]$
    

    Once Vault retrieves the encryption key, it is able to decrypt the data in the storage backend, and enters the unsealed state. Once unsealed, Vault loads all of the configured audit devices, auth methods, and secrets engines.

    Repeat steps 1.) and 3.) to unseal the other "standby" Vault servers as well to achieve high availability.


  4. Log out of the Vault server (Ctrl+D) and check Vault's seal status from the Bastion host to verify that you can interact with the Vault cluster using the Bastion's Vault CLI.

    [ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ vault status
    Key             Value
    ---             -----
    Seal Type       shamir
    Initialized     true
    Sealed          false
    Total Shares    5
    Threshold       3
    Version         0.11.3
    Cluster Name    vault-cluster
    Cluster ID      24315b0e-eacc-a223-46af-80be7cf26110
    HA Enabled      true
    HA Cluster      https://10.139.2.85:8201
    HA Mode         active
    




Interacting with Vault

Browse to the Vault UI at http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com (Public) and sign in with the Initial Root Token that was output by the vault operator init command:

initialize-vault.png

Click "Sign In":

logged_in_secrets_engines.png

Still on the Bastion, use the CLI to write and read a generic secret:

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ vault kv put secret/cli foo=bar
Error making API request.

URL: GET http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com:8200/v1/sys/internal/ui/mounts/secret/cli
Code: 500. Errors:

* missing client token

The request fails because VAULT_TOKEN has not been set on the Bastion yet. Check the environment and set the token to the Initial Root Token from vault operator init:

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ echo ${VAULT_ADDR}
http://vault-lb-f6f3c499-1158884718.us-east-1.elb.amazonaws.com:8200

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ echo ${VAULT_TOKEN}

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ export VAULT_TOKEN=7qaONQY0KF35OsAbKSnqYJFj
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ echo ${VAULT_TOKEN}
7qaONQY0KF35OsAbKSnqYJFj
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ vault kv put secret/cli foo=bar
Success! Data written to: secret/cli

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ vault kv get secret/cli
=== Data ===
Key    Value
---    -----
foo    bar

Let's use the HTTP API with Consul DNS to write and read a generic secret with Vault's KV secrets engine. Since we're making HTTP API requests to Vault from the Bastion host, we point VAULT_ADDR at Vault's Consul DNS name:

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ export VAULT_ADDR=http://vault.service.consul:8200

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ curl \
      -H "X-Vault-Token: ${VAULT_TOKEN}" \
      -X POST \
      -d '{"data": {"foo":"bar"}}' \
      ${VAULT_ADDR}/v1/secret/data/api | jq '.' # Write a KV secret
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23    0     0  100    23      0   1706 --:--:-- --:--:-- --:--:--  1916

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ curl \
      -H "X-Vault-Token: ${VAULT_TOKEN}" \
      ${VAULT_ADDR}/v1/secret/data/api | jq '.' # Read a KV secret
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   186  100   186    0     0  29561      0 --:--:-- --:--:-- --:--:-- 31000
{
  "request_id": "19a64c35-01b0-1b76-88d1-16b6f0bc9ac8",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "data": {
      "foo": "bar"
    }
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}
[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ 




Interacting with Consul

We can also interact with Consul through its UI at consul-lb-e16b91cb-2137033131.us-east-1.elb.amazonaws.com (Public):

consul-services-with-two-vaults-standby.png
consul-services-with-two-vaults-standby-detail.png

Note that we have unsealed only one Vault server so far.

Note also that the Consul nodes are in a public subnet with UI & SSH access open from the internet, which we do not want for production.

Use the CLI to retrieve the Consul members, write a key/value, and read that key/value.

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ consul members # Retrieve Consul members
Node                                         Address            Status  Type    Build  Protocol  DC             Segment
vault-cluster-consul-i-0598ca7d1f3052fa0     10.139.3.76:8301   alive   server  1.2.3  2         vault-cluster  <all>
vault-cluster-consul-i-070da8ab1d381e5a1     10.139.2.230:8301  alive   server  1.2.3  2         vault-cluster  <all>
vault-cluster-consul-i-077ea838d52e59b01     10.139.1.121:8301  alive   server  1.2.3  2         vault-cluster  <all>
vault-cluster-bastion-1-i-005f5eb055bd04392  10.139.1.205:8301  alive   client  1.2.3  2         vault-cluster  <default>
vault-cluster-vault-i-00b8555fac7c3317d      10.139.1.177:8301  alive   client  1.2.3  2         vault-cluster  <default>
vault-cluster-vault-i-0af23b81ba5e1f0fe      10.139.3.66:8301   alive   client  1.2.3  2         vault-cluster  <default>
vault-cluster-vault-i-0dcc43087309e7b72      10.139.2.85:8301   alive   client  1.2.3  2         vault-cluster  <default>

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ consul kv put cli bar=baz # Write a key/value
Success! Data written to: cli

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ consul kv get cli # Read a key/value
bar=baz

Use the HTTP API to retrieve the Consul members, write a key/value, and read that key/value.

Since we're making HTTP API requests to Consul from the Bastion host, we can point CONSUL_ADDR at the local Consul agent:


[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ export CONSUL_ADDR=http://127.0.0.1:8500

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ curl \
      -X GET \
      ${CONSUL_ADDR}/v1/agent/members | jq '.' # Retrieve Consul members
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2698    0  2698    0     0  3654k      0 --:--:-- --:--:-- --:--:-- 2634k
[
  {
    "Name": "vault-cluster-vault-i-0dcc43087309e7b72",
    "Addr": "10.139.2.85",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "id": "f954fd18-0a15-8c1b-3c04-6bb45263ddea",
      "role": "node",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-consul-i-077ea838d52e59b01",
    "Addr": "10.139.1.121",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "expect": "3",
      "id": "b5534150-c7c1-cd11-9c4f-07681ff31f6d",
      "port": "8300",
      "raft_vsn": "3",
      "role": "consul",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2",
      "wan_join_port": "8302"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-vault-i-00b8555fac7c3317d",
    "Addr": "10.139.1.177",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "id": "f8f98aef-ec21-bbe0-88b1-d90b09d9bbce",
      "role": "node",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-consul-i-070da8ab1d381e5a1",
    "Addr": "10.139.2.230",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "expect": "3",
      "id": "812efe9b-c50b-d9a5-3096-99860aa5ba4a",
      "port": "8300",
      "raft_vsn": "3",
      "role": "consul",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2",
      "wan_join_port": "8302"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-bastion-1-i-005f5eb055bd04392",
    "Addr": "10.139.1.205",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "id": "07a1bc60-834d-c70a-157a-7a6c0fe97bd3",
      "role": "node",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-vault-i-0af23b81ba5e1f0fe",
    "Addr": "10.139.3.66",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "id": "c4fb20c0-22e9-5485-6dcc-70049b803960",
      "role": "node",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  },
  {
    "Name": "vault-cluster-consul-i-0598ca7d1f3052fa0",
    "Addr": "10.139.3.76",
    "Port": 8301,
    "Tags": {
      "build": "1.2.3:48d287ef",
      "dc": "vault-cluster",
      "expect": "3",
      "id": "ccb6e4f4-aa4d-cb91-81b3-7af1d5380acd",
      "port": "8300",
      "raft_vsn": "3",
      "role": "consul",
      "segment": "",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2",
      "wan_join_port": "8302"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  }
]

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ curl \
      -X PUT \
      -d 'bar=baz' \
      ${CONSUL_ADDR}/v1/kv/api | jq '.' # Write a KV
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    11  100     4  100     7    631   1105 --:--:-- --:--:-- --:--:--  1166
true

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ curl \
      -X GET \
      ${CONSUL_ADDR}/v1/kv/api | jq '.' # Read a KV
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   100  100   100    0     0  39525      0 --:--:-- --:--:-- --:--:-- 50000
[
  {
    "LockIndex": 0,
    "Key": "api",
    "Flags": 0,
    "Value": "YmFyPWJheg==",
    "CreateIndex": 3042,
    "ModifyIndex": 3042
  }
]




Unseal the other two Vault servers

It is important to note that only unsealed servers act as a standby. If a server is still in the sealed state, then it cannot act as a standby as it would be unable to serve any requests should the active server fail.

By spreading traffic across performance standby nodes, clients can scale read-only IOPS horizontally to handle extremely high traffic workloads.

If a request that causes a storage write comes into a performance standby node, the request is forwarded to the active server. If the request is read-only, it is serviced locally on the performance standby.

Just like traditional HA standbys, if the active node is sealed, fails, or loses network connectivity, a performance standby can take over and become the active instance.


As we saw in the previous section, two Vault servers are still sealed. Let's unseal them so that they can serve as standbys.

On the Bastion, log on to the second Vault server:

[ec2-user@vault-cluster-bastion-1-i-005f5eb055bd04392 ~]$ ssh -A ec2-user@$(curl http://127.0.0.1:8500/v1/agent/members | jq -M -r \
       '[.[] | select(.Name | contains ("vault-cluster-vault")) | .Addr][1]')

Then, unseal:

[ec2-user@vault-cluster-vault-i-00b8555fac7c3317d ~]$ vault operator unseal V/knpsNkYT9mt/D000UQ8TTdyUcAIZXffuwUlN9/mgGA
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    1/3
Unseal Nonce       143cf709-1afc-3fb6-28eb-f6289411c696
Version            0.11.3
HA Enabled         true
[ec2-user@vault-cluster-vault-i-00b8555fac7c3317d ~]$ vault operator unseal U1YIP0kuXVDQmNbsSyRNGAs6q6eurQsHIR/bBfdSQ2xi
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    2/3
Unseal Nonce       143cf709-1afc-3fb6-28eb-f6289411c696
Version            0.11.3
HA Enabled         true
[ec2-user@vault-cluster-vault-i-00b8555fac7c3317d ~]$ vault operator unseal FM39o3AM4ohdIJy0T8LfwlXJJBJcouDIKWAPI2/WoU3H
Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           5
Threshold              3
Version                0.11.3
Cluster Name           vault-cluster
Cluster ID             24315b0e-eacc-a223-46af-80be7cf26110
HA Enabled             true
HA Cluster             https://10.139.2.85:8201
HA Mode                standby
Active Node Address    http://10.139.2.85:8200

We can see that we now have two unsealed Vault servers:

unseal-2nd.png

Do the same for the last one. Log in to the Vault server and then unseal it:

[ec2-user@vault-cluster-vault-i-00b8555fac7c3317d ~]$ ssh -A ec2-user@10.139.3.66
The authenticity of host '10.139.3.66 (10.139.3.66)' can't be established.
ECDSA key fingerprint is a9:5c:20:4b:8b:02:29:0b:f3:cd:41:15:a0:79:40:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.139.3.66' (ECDSA) to the list of known hosts.
Last login: Fri May 24 20:49:13 2019 from ip-10-139-1-205.ec2.internal

[ec2-user@vault-cluster-vault-i-0af23b81ba5e1f0fe ~]$ vault operator unseal U1YIP0kuXVDQmNbsSyRNGAs6q6eurQsHIR/bBfdSQ2xi
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    1/3
Unseal Nonce       418cde46-ae02-ad7c-b696-5cb59e7e4e6e
Version            0.11.3
HA Enabled         true

[ec2-user@vault-cluster-vault-i-0af23b81ba5e1f0fe ~]$ vault operator unseal FM39o3AM4ohdIJy0T8LfwlXJJBJcouDIKWAPI2/WoU3H
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    2/3
Unseal Nonce       418cde46-ae02-ad7c-b696-5cb59e7e4e6e
Version            0.11.3
HA Enabled         true

[ec2-user@vault-cluster-vault-i-0af23b81ba5e1f0fe ~]$ vault operator unseal V/knpsNkYT9mt/D000UQ8TTdyUcAIZXffuwUlN9/mgGA
Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           5
Threshold              3
Version                0.11.3
Cluster Name           vault-cluster
Cluster ID             24315b0e-eacc-a223-46af-80be7cf26110
HA Enabled             true
HA Cluster             https://10.139.2.85:8201
HA Mode                standby
Active Node Address    http://10.139.2.85:8200

Now all our Vault servers have been unsealed!

unseal-the-3rd.png
all-services.png






Terraform destroy

The terraform destroy command is used to destroy the Terraform-managed infrastructure:

$ terraform destroy
module.network_aws.aws_vpc.main: Destruction complete after 1s
module.consul_aws.module.consul_auto_join_instance_role.aws_iam_role.consul: Destruction complete after 1s

Destroy complete! Resources: 105 destroyed.


Terraform

  • Introduction to Terraform with AWS elb & nginx
  • Terraform Tutorial - terraform format(tf) and interpolation(variables)
  • Terraform Tutorial - user_data
  • Terraform Tutorial - variables
  • Terraform Tutorial - creating multiple instances (count, list type and element() function)
  • Terraform 12 Tutorial - Loops with count, for_each, and for
  • Terraform Tutorial - State (terraform.tfstate) & terraform import
  • Terraform Tutorial - Output variables
  • Terraform Tutorial - Destroy
  • Terraform Tutorial - Modules
  • Terraform Tutorial - Creating AWS S3 bucket / SQS queue resources and notifying bucket event to queue
  • Terraform Tutorial - AWS ASG and Modules
  • Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server I
  • Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server II
  • Terraform Tutorial - Docker nginx container with ALB and dynamic autoscaling
  • Terraform Tutorial - AWS ECS using Fargate : Part I
  • Hashicorp Vault
  • HashiCorp Vault Agent
  • HashiCorp Vault and Consul on AWS with Terraform
  • Ansible with Terraform
  • AWS IAM user, group, role, and policies - part 1
  • AWS IAM user, group, role, and policies - part 2
  • Delegate Access Across AWS Accounts Using IAM Roles
  • AWS KMS
  • terraform import & terraformer import
  • Terraform commands cheat sheet
  • Terraform Cloud
  • Terraform 14
  • Creating Private TLS Certs


  • Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization

    YouTubeMy YouTube channel

    Sponsor Open Source development activities and free contents for everyone.

    Thank you.

    - K Hong







    Terraform



    Introduction to Terraform with AWS elb & nginx

    Terraform Tutorial - terraform format(tf) and interpolation(variables)

    Terraform Tutorial - user_data

    Terraform Tutorial - variables

    Terraform 12 Tutorial - Loops with count, for_each, and for

    Terraform Tutorial - creating multiple instances (count, list type and element() function)

    Terraform Tutorial - State (terraform.tfstate) & terraform import

    Terraform Tutorial - Output variables

    Terraform Tutorial - Destroy

    Terraform Tutorial - Modules

    Terraform Tutorial - Creating AWS S3 bucket / SQS queue resources and notifying bucket event to queue

    Terraform Tutorial - AWS ASG and Modules

    Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server I

    Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server II

    Terraform Tutorial - Docker nginx container with ALB and dynamic autoscaling

    Terraform Tutorial - AWS ECS using Fargate : Part I

    Hashicorp Vault

    HashiCorp Vault Agent

    HashiCorp Vault and Consul on AWS with Terraform

    Ansible with Terraform

    AWS IAM user, group, role, and policies - part 1

    AWS IAM user, group, role, and policies - part 2

    Delegate Access Across AWS Accounts Using IAM Roles

    AWS KMS

    terraform import & terraformer import

    Terraform commands cheat sheet

    Terraform Cloud

    Terraform 14

    Creating Private TLS Certs




    Sponsor Open Source development activities and free contents for everyone.

    Thank you.

    - K Hong







    DevOps



    Phases of Continuous Integration

    Software development methodology

    Introduction to DevOps

    Samples of Continuous Integration (CI) / Continuous Delivery (CD) - Use cases

    Artifact repository and repository management

    Linux - General, shell programming, processes & signals ...

    RabbitMQ...

    MariaDB

    New Relic APM with NodeJS : simple agent setup on AWS instance

    Nagios on CentOS 7 with Nagios Remote Plugin Executor (NRPE)

    Nagios - The industry standard in IT infrastructure monitoring on Ubuntu

    Zabbix 3 install on Ubuntu 14.04 & adding hosts / items / graphs

    Datadog - Monitoring with PagerDuty/HipChat and APM

    Install and Configure Mesos Cluster

    Cassandra on a Single-Node Cluster

    Container Orchestration : Docker Swarm vs Kubernetes vs Apache Mesos

    OpenStack install on Ubuntu 16.04 server - DevStack

    AWS EC2 Container Service (ECS) & EC2 Container Registry (ECR) | Docker Registry

    CI/CD with CircleCI - Heroku deploy

    Introduction to Terraform with AWS elb & nginx

    Docker & Kubernetes

    Kubernetes I - Running Kubernetes Locally via Minikube

    Kubernetes II - kops on AWS

    Kubernetes III - kubeadm on AWS

    AWS : EKS (Elastic Container Service for Kubernetes)

    CI/CD Github actions

    CI/CD Gitlab



    DevOps / Sys Admin Q & A



    (1A) - Linux Commands

    (1B) - Linux Commands

    (2) - Networks

    (2B) - Networks

    (3) - Linux Systems

    (4) - Scripting (Ruby/Shell)

    (5) - Configuration Management

    (6) - AWS VPC setup (public/private subnets with NAT)

    (6B) - AWS VPC Peering

    (7) - Web server

    (8) - Database

    (9) - Linux System / Application Monitoring, Performance Tuning, Profiling Methods & Tools

    (10) - Trouble Shooting: Load, Throughput, Response time and Leaks

    (11) - SSH key pairs, SSL Certificate, and SSL Handshake

    (12) - Why is the database slow?

    (13) - Is my web site down?

    (14) - Is my server down?

    (15) - Why is the server sluggish?

    (16A) - Serving multiple domains using Virtual Hosts - Apache

    (16B) - Serving multiple domains using server block - Nginx

    (16C) - Reverse proxy servers and load balancers - Nginx

    (17) - Linux startup process

    (18) - phpMyAdmin with Nginx virtual host as a subdomain

    (19) - How to SSH login without password?

    (20) - Log Rotation

    (21) - Monitoring Metrics

    (22) - lsof

    (23) - Wireshark introduction

    (24) - User account management

    (25) - Domain Name System (DNS)

    (26) - NGINX SSL/TLS, Caching, and Session

    (27) - Troubleshooting 5xx server errors

    (28) - Linux Systemd: journalctl

    (29) - Linux Systemd: FirewallD

    (30) - Linux: SELinux

    (31) - Linux: Samba

    (0) - Linux Sys Admin's Day to Day tasks





    Ansible 2.0



    What is Ansible?

    Quick Preview - Setting up web servers with Nginx, configure environments, and deploy an App

    SSH connection & running commands

    Ansible: Playbook for Tomcat 9 on Ubuntu 18.04 systemd with AWS

    Modules

    Playbooks

    Handlers

    Roles

    Playbook for LAMP HAProxy

    Installing Nginx on a Docker container

    AWS : Creating an ec2 instance & adding keys to authorized_keys

    AWS : Auto Scaling via AMI

    AWS : creating an ELB & registers an EC2 instance from the ELB

    Deploying Wordpress micro-services with Docker containers on Vagrant box via Ansible

    Setting up Apache web server

    Deploying a Go app to Minikube

    Ansible with Terraform





    Jenkins



    Install

    Configuration - Manage Jenkins - security setup

    Adding job and build

    Scheduling jobs

    Managing_plugins

    Git/GitHub plugins, SSH keys configuration, and Fork/Clone

    JDK & Maven setup

    Build configuration for GitHub Java application with Maven

    Build Action for GitHub Java application with Maven - Console Output, Updating Maven

    Commit to changes to GitHub & new test results - Build Failure

    Commit to changes to GitHub & new test results - Successful Build

    Adding code coverage and metrics

    Jenkins on EC2 - creating an EC2 account, ssh to EC2, and install Apache server

    Jenkins on EC2 - setting up Jenkins account, plugins, and Configure System (JAVA_HOME, MAVEN_HOME, notification email)

    Jenkins on EC2 - Creating a Maven project

    Jenkins on EC2 - Configuring GitHub Hook and Notification service to Jenkins server for any changes to the repository

    Jenkins on EC2 - Line Coverage with JaCoCo plugin

    Setting up Master and Slave nodes

    Jenkins Build Pipeline & Dependency Graph Plugins

    Jenkins Build Flow Plugin

    Pipeline Jenkinsfile with Classic / Blue Ocean

    Jenkins Setting up Slave nodes on AWS

    Jenkins Q & A





    Puppet



    Puppet with Amazon AWS I - Puppet accounts

    Puppet with Amazon AWS II (ssh & puppetmaster/puppet install)

    Puppet with Amazon AWS III - Puppet running Hello World

    Puppet Code Basics - Terminology

    Puppet with Amazon AWS on CentOS 7 (I) - Master setup on EC2

    Puppet with Amazon AWS on CentOS 7 (II) - Configuring a Puppet Master Server with Passenger and Apache

    Puppet master /agent ubuntu 14.04 install on EC2 nodes

    Puppet master post install tasks - master's names and certificates setup,

    Puppet agent post install tasks - configure agent, hostnames, and sign request

    EC2 Puppet master/agent basic tasks - main manifest with a file resource/module and immediate execution on an agent node

    Setting up puppet master and agent with simple scripts on EC2 / remote install from desktop

    EC2 Puppet - Install lamp with a manifest ('puppet apply')

    EC2 Puppet - Install lamp with a module

    Puppet variable scope

    Puppet packages, services, and files

    Puppet packages, services, and files II with nginx Puppet templates

    Puppet creating and managing user accounts with SSH access

    Puppet Locking user accounts & deploying sudoers file

    Puppet exec resource

    Puppet classes and modules

    Puppet Forge modules

    Puppet Express

    Puppet Express 2

    Puppet 4 : Changes

    Puppet --configprint

    Puppet with Docker

    Puppet 6.0.2 install on Ubuntu 18.04





    Chef



    What is Chef?

    Chef install on Ubuntu 14.04 - Local Workstation via omnibus installer

    Setting up Hosted Chef server

    VirtualBox via Vagrant with Chef client provision

    Creating and using cookbooks on a VirtualBox node

    Chef server install on Ubuntu 14.04

    Chef workstation setup on EC2 Ubuntu 14.04

    Chef Client Node - Knife Bootstrapping a node on EC2 ubuntu 14.04





    Docker & K8s



    Docker install on Amazon Linux AMI

    Docker install on EC2 Ubuntu 14.04

    Docker container vs Virtual Machine

    Docker install on Ubuntu 14.04

    Docker Hello World Application

    Nginx image - share/copy files, Dockerfile

    Working with Docker images : brief introduction

    Docker image and container via docker commands (search, pull, run, ps, restart, attach, and rm)

    More on docker run command (docker run -it, docker run --rm, etc.)

    Docker Networks - Bridge Driver Network

    Docker Persistent Storage

    File sharing between host and container (docker run -d -p -v)

    Linking containers and volume for datastore

    Dockerfile - Build Docker images automatically I - FROM, MAINTAINER, and build context

    Dockerfile - Build Docker images automatically II - revisiting FROM, MAINTAINER, build context, and caching

    Dockerfile - Build Docker images automatically III - RUN

    Dockerfile - Build Docker images automatically IV - CMD

    Dockerfile - Build Docker images automatically V - WORKDIR, ENV, ADD, and ENTRYPOINT

    Docker - Apache Tomcat

    Docker - NodeJS

    Docker - NodeJS with hostname

    Docker Compose - NodeJS with MongoDB

    Docker - Prometheus and Grafana with Docker-compose

    Docker - StatsD/Graphite/Grafana

    Docker - Deploying a Java EE JBoss/WildFly Application on AWS Elastic Beanstalk Using Docker Containers

    Docker : NodeJS with GCP Kubernetes Engine

    Docker : Jenkins Multibranch Pipeline with Jenkinsfile and Github

    Docker : Jenkins Master and Slave

    Docker - ELK : ElasticSearch, Logstash, and Kibana

    Docker - ELK 7.6 : Elasticsearch on Centos 7 Docker - ELK 7.6 : Filebeat on Centos 7

    Docker - ELK 7.6 : Logstash on Centos 7

    Docker - ELK 7.6 : Kibana on Centos 7 Part 1

    Docker - ELK 7.6 : Kibana on Centos 7 Part 2

    Docker - ELK 7.6 : Elastic Stack with Docker Compose

    Docker - Deploy Elastic Cloud on Kubernetes (ECK) via Elasticsearch operator on minikube

    Docker - Deploy Elastic Stack via Helm on minikube

    Docker Compose - A gentle introduction with WordPress

    Docker Compose - MySQL

    MEAN Stack app on Docker containers : micro services

    Docker Compose - Hashicorp's Vault and Consul Part A (install vault, unsealing, static secrets, and policies)

    Docker Compose - Hashicorp's Vault and Consul Part B (EaaS, dynamic secrets, leases, and revocation)

    Docker Compose - Hashicorp's Vault and Consul Part C (Consul)

    Docker Compose with two containers - Flask REST API service container and an Apache server container

    Docker compose : Nginx reverse proxy with multiple containers

    Docker compose : Nginx reverse proxy with multiple containers

    Docker & Kubernetes : Envoy - Getting started

    Docker & Kubernetes : Envoy - Front Proxy

    Docker & Kubernetes : Ambassador - Envoy API Gateway on Kubernetes

    Docker Packer

    Docker Cheat Sheet

    Docker Q & A

    Kubernetes Q & A - Part I

    Kubernetes Q & A - Part II

    Docker - Run a React app in a docker

    Docker - Run a React app in a docker II (snapshot app with nginx)

    Docker - NodeJS and MySQL app with React in a docker

    Docker - Step by Step NodeJS and MySQL app with React - I

    Installing LAMP via puppet on Docker

    Docker install via Puppet

    Nginx Docker install via Ansible

    Apache Hadoop CDH 5.8 Install with QuickStarts Docker

    Docker - Deploying Flask app to ECS

    Docker Compose - Deploying WordPress to AWS

    Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI EC2 type)

    Docker - ECS Fargate

    Docker - AWS ECS service discovery with Flask and Redis

    Docker & Kubernetes: minikube version: v1.31.2, 2023

    Docker & Kubernetes 1 : minikube

    Docker & Kubernetes 2 : minikube Django with Postgres - persistent volume

    Docker & Kubernetes 3 : minikube Django with Redis and Celery

    Docker & Kubernetes 4 : Django with RDS via AWS Kops

    Docker & Kubernetes : Kops on AWS

    Docker & Kubernetes : Ingress controller on AWS with Kops

    Docker & Kubernetes : HashiCorp's Vault and Consul on minikube

    Docker & Kubernetes : HashiCorp's Vault and Consul - Auto-unseal using Transit Secrets Engine

    Docker & Kubernetes : Persistent Volumes & Persistent Volumes Claims - hostPath and annotations

    Docker & Kubernetes : Persistent Volumes - Dynamic volume provisioning

    Docker & Kubernetes : DaemonSet

    Docker & Kubernetes : Secrets

    Docker & Kubernetes : kubectl command

    Docker & Kubernetes : Assign a Kubernetes Pod to a particular node in a Kubernetes cluster

    Docker & Kubernetes : Configure a Pod to Use a ConfigMap

    AWS : EKS (Elastic Container Service for Kubernetes)

    Docker & Kubernetes : Run a React app in a minikube

    Docker & Kubernetes : Minikube install on AWS EC2

    Docker & Kubernetes : Cassandra with a StatefulSet

    Docker & Kubernetes : Terraform and AWS EKS

    Docker & Kubernetes : Pods and Service definitions

    Docker & Kubernetes : Headless service and discovering pods

    Docker & Kubernetes : Service IP and the Service Type

    Docker & Kubernetes : Kubernetes DNS with Pods and Services

    Docker & Kubernetes - Scaling and Updating application

    Docker & Kubernetes : Horizontal pod autoscaler on minikubes

    Docker & Kubernetes : NodePort vs LoadBalancer vs Ingress

    Docker & Kubernetes : Load Testing with Locust on GCP Kubernetes

    Docker & Kubernetes : From a monolithic app to micro services on GCP Kubernetes

    Docker & Kubernetes : Rolling updates

    Docker & Kubernetes : Deployments to GKE (Rolling update, Canary and Blue-green deployments)

    Docker & Kubernetes : Slack Chat Bot with NodeJS on GCP Kubernetes

    Docker & Kubernetes : Continuous Delivery with Jenkins Multibranch Pipeline for Dev, Canary, and Production Environments on GCP Kubernetes

    Docker & Kubernetes - MongoDB with StatefulSets on GCP Kubernetes Engine

    Docker & Kubernetes : Nginx Ingress Controller on minikube

    Docker & Kubernetes : Setting up Ingress with NGINX Controller on Minikube (Mac)

    Docker & Kubernetes : Nginx Ingress Controller for Dashboard service on Minikube

    Docker & Kubernetes : Nginx Ingress Controller on GCP Kubernetes

    Docker & Kubernetes : Kubernetes Ingress with AWS ALB Ingress Controller in EKS

    Docker & Kubernetes : MongoDB / MongoExpress on Minikube

    Docker & Kubernetes : Setting up a private cluster on GCP Kubernetes

    Docker & Kubernetes : Kubernetes Namespaces (default, kube-public, kube-system) and switching namespaces (kubens)

    Docker & Kubernetes : StatefulSets on minikube

    Docker & Kubernetes : StatefulSets on minikube

    Docker & Kubernetes : RBAC

    Docker & Kubernetes Service Account, RBAC, and IAM

    Docker & Kubernetes - Kubernetes Service Account, RBAC, IAM with EKS ALB, Part 1

    Docker & Kubernetes : Helm Chart

    Docker & Kubernetes : My first Helm deploy

    Docker & Kubernetes : Readiness and Liveness Probes

    Docker & Kubernetes : Helm chart repository with Github pages

    Docker & Kubernetes : Deploying WordPress and MariaDB with Ingress to Minikube using Helm Chart

    Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 2 Chart

    Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 3 Chart

    Docker & Kubernetes : Helm Chart for Node/Express and MySQL with Ingress

    Docker & Kubernetes : Docker_Helm_Chart_Node_Expess_MySQL_Ingress.php

    Docker & Kubernetes: Deploy Prometheus and Grafana using Helm and Prometheus Operator - Monitoring Kubernetes node resources out of the box

    Docker & Kubernetes : Deploy Prometheus and Grafana using kube-prometheus-stack Helm Chart

    Docker & Kubernetes : Istio (service mesh) sidecar proxy on GCP Kubernetes

    Docker & Kubernetes : Istio on EKS

    Docker & Kubernetes : Istio on Minikube with AWS EC2 for Bookinfo Application

    Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part I)

    Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part II - Prometheus, Grafana, pin a service, split traffic, and inject faults)

    Docker & Kubernetes : Helm Package Manager with MySQL on GCP Kubernetes Engine

    Docker & Kubernetes : Deploying Memcached on Kubernetes Engine

    Docker & Kubernetes : EKS Control Plane (API server) Metrics with Prometheus

    Docker & Kubernetes : Spinnaker on EKS with Halyard

    Docker & Kubernetes : Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine

    Docker & Kubernetes: Multi-node Local Kubernetes cluster - Kubeadm-dind(docker-in-docker)

    Docker & Kubernetes: Multi-node Local Kubernetes cluster - Kubeadm-kind(k8s-in-docker)

    Docker & Kubernetes : nodeSelector, nodeAffinity, taints/tolerations, pod affinity and anti-affinity - Assigning Pods to Nodes

    Docker & Kubernetes : Jenkins-X on EKS

    Docker & Kubernetes : ArgoCD App of Apps with Heml on Kubernetes

    Docker & Kubernetes : ArgoCD on Kubernetes cluster

    Docker & Kubernetes : GitOps with ArgoCD for Continuous Delivery to Kubernetes clusters (minikube) - guestbook





    Elasticsearch search engine, Logstash, and Kibana



    Elasticsearch, search engine

    Logstash with Elasticsearch

    Logstash, Elasticsearch, and Kibana 4

    Elasticsearch with Redis broker and Logstash Shipper and Indexer

    Samples of ELK architecture

    Elasticsearch indexing performance



    Vagrant



    VirtualBox & Vagrant install on Ubuntu 14.04

    Creating a VirtualBox using Vagrant

    Provisioning

    Networking - Port Forwarding

    Vagrant Share

    Vagrant Rebuild & Teardown

    Vagrant & Ansible





    Big Data & Hadoop Tutorials



    Hadoop 2.6 - Installing on Ubuntu 14.04 (Single-Node Cluster)

    Hadoop 2.6.5 - Installing on Ubuntu 16.04 (Single-Node Cluster)

    Hadoop - Running MapReduce Job

    Hadoop - Ecosystem

    CDH5.3 Install on four EC2 instances (1 Name node and 3 Datanodes) using Cloudera Manager 5

    CDH5 APIs

    QuickStart VMs for CDH 5.3

    QuickStart VMs for CDH 5.3 II - Testing with wordcount

    QuickStart VMs for CDH 5.3 II - Hive DB query

    Scheduled start and stop CDH services

    CDH 5.8 Install with QuickStarts Docker

    Zookeeper & Kafka Install

    Zookeeper & Kafka - single node single broker

    Zookeeper & Kafka - Single node and multiple brokers

    OLTP vs OLAP

    Apache Hadoop Tutorial I with CDH - Overview

    Apache Hadoop Tutorial II with CDH - MapReduce Word Count

    Apache Hadoop Tutorial III with CDH - MapReduce Word Count 2

    Apache Hadoop (CDH 5) Hive Introduction

    CDH5 - Hive Upgrade from 1.2 to 1.3

    Apache Hive 2.1.0 install on Ubuntu 16.04

    Apache HBase in Pseudo-Distributed mode

    Creating HBase table with HBase shell and HUE

    Apache Hadoop : Hue 3.11 install on Ubuntu 16.04

    Creating HBase table with Java API

    HBase - Map, Persistent, Sparse, Sorted, Distributed and Multidimensional

    Flume with CDH5: a single-node Flume deployment (telnet example)

    Apache Hadoop (CDH 5) Flume with VirtualBox : syslog example via NettyAvroRpcClient

    List of Apache Hadoop hdfs commands

    Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 1

    Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 2

    Apache Hadoop : Creating Card Java Project with Eclipse using Cloudera VM UnoExample for CDH5 - local run

    Apache Hadoop : Creating Wordcount Maven Project with Eclipse

    Wordcount MapReduce with Oozie workflow with Hue browser - CDH 5.3 Hadoop cluster using VirtualBox and QuickStart VM

    Spark 1.2 using VirtualBox and QuickStart VM - wordcount

    Spark Programming Model : Resilient Distributed Dataset (RDD) with CDH

    Apache Spark 2.0.2 with PySpark (Spark Python API) Shell

    Apache Spark 2.0.2 tutorial with PySpark : RDD

    Apache Spark 2.0.0 tutorial with PySpark : Analyzing Neuroimaging Data with Thunder

    Apache Spark Streaming with Kafka and Cassandra

    Apache Spark 1.2 with PySpark (Spark Python API) Wordcount using CDH5

    Apache Spark 1.2 Streaming

    Apache Drill with ZooKeeper install on Ubuntu 16.04 - Embedded & Distributed

    Apache Drill - Query File System, JSON, and Parquet

    Apache Drill - HBase query

    Apache Drill - Hive query

    Apache Drill - MongoDB query





    Redis In-Memory Database



    Redis vs Memcached

    Redis 3.0.1 Install

    Setting up multiple server instances on a Linux host

    Redis with Python

    ELK : Elasticsearch with Redis broker and Logstash Shipper and Indexer



    GCP (Google Cloud Platform)



    GCP: Creating an Instance

    GCP: gcloud compute command-line tool

    GCP: Deploying Containers

    GCP: Kubernetes Quickstart

    GCP: Deploying a containerized web application via Kubernetes

    GCP: Django Deploy via Kubernetes I (local)

    GCP: Django Deploy via Kubernetes II (GKE)





    AWS (Amazon Web Services)



    AWS : EKS (Elastic Container Service for Kubernetes)

    AWS : Creating a snapshot (cloning an image)

    AWS : Attaching Amazon EBS volume to an instance

    AWS : Adding swap space to an attached volume via mkswap and swapon

    AWS : Creating an EC2 instance and attaching Amazon EBS volume to the instance using Python boto module with User data

    AWS : Creating an instance to a new region by copying an AMI

    AWS : S3 (Simple Storage Service) 1

    AWS : S3 (Simple Storage Service) 2 - Creating and Deleting a Bucket

    AWS : S3 (Simple Storage Service) 3 - Bucket Versioning

    AWS : S3 (Simple Storage Service) 4 - Uploading a large file

    AWS : S3 (Simple Storage Service) 5 - Uploading folders/files recursively

    AWS : S3 (Simple Storage Service) 6 - Bucket Policy for File/Folder View/Download

    AWS : S3 (Simple Storage Service) 7 - How to Copy or Move Objects from one region to another

    AWS : S3 (Simple Storage Service) 8 - Archiving S3 Data to Glacier

    AWS : Creating a CloudFront distribution with an Amazon S3 origin

    AWS : Creating VPC with CloudFormation

    WAF (Web Application Firewall) with preconfigured CloudFormation template and Web ACL for CloudFront distribution

    AWS : CloudWatch & Logs with Lambda Function / S3

    AWS : Lambda Serverless Computing with EC2, CloudWatch Alarm, SNS

    AWS : Lambda and SNS - cross account

    AWS : CLI (Command Line Interface)

    AWS : CLI (ECS with ALB & autoscaling)

    AWS : ECS with cloudformation and json task definition

    AWS : AWS Application Load Balancer (ALB) and ECS with Flask app

    AWS : Load Balancing with HAProxy (High Availability Proxy)

    AWS : VirtualBox on EC2

    AWS : NTP setup on EC2

    AWS: jq with AWS

    AWS : AWS & OpenSSL : Creating / Installing a Server SSL Certificate

    AWS : OpenVPN Access Server 2 Install

    AWS : VPC (Virtual Private Cloud) 1 - netmask, subnets, default gateway, and CIDR

    AWS : VPC (Virtual Private Cloud) 2 - VPC Wizard

    AWS : VPC (Virtual Private Cloud) 3 - VPC Wizard with NAT

    AWS : DevOps / Sys Admin Q & A (VI) - AWS VPC setup (public/private subnets with NAT)

    AWS : OpenVPN Protocols : PPTP, L2TP/IPsec, and OpenVPN

    AWS : Autoscaling group (ASG)

    AWS : Setting up Autoscaling Alarms and Notifications via CLI and Cloudformation

    AWS : Adding a SSH User Account on Linux Instance

    AWS : Windows Servers - Remote Desktop Connections using RDP

    AWS : Scheduled stopping and starting an instance - python & cron

    AWS : Detecting stopped instance and sending an alert email using Mandrill smtp

    AWS : Elastic Beanstalk with NodeJS

    AWS : Elastic Beanstalk Inplace/Rolling Blue/Green Deploy

    AWS : Identity and Access Management (IAM) Roles for Amazon EC2

    AWS : Identity and Access Management (IAM) Policies, sts AssumeRole, and delegate access across AWS accounts

    AWS : Identity and Access Management (IAM) sts assume role via AWS CLI v2

    AWS : Creating IAM Roles and associating them with EC2 Instances in CloudFormation

    AWS Identity and Access Management (IAM) Roles, SSO(Single Sign On), SAML(Security Assertion Markup Language), IdP(identity provider), STS(Security Token Service), and ADFS(Active Directory Federation Services)

    AWS : Amazon Route 53

    AWS : Amazon Route 53 - DNS (Domain Name Server) setup

    AWS : Amazon Route 53 - subdomain setup and virtual host on Nginx

    AWS Amazon Route 53 : Private Hosted Zone

    AWS : SNS (Simple Notification Service) example with ELB and CloudWatch

    AWS : Lambda with AWS CloudTrail

    AWS : SQS (Simple Queue Service) with NodeJS and AWS SDK

    AWS : Redshift data warehouse

    AWS : CloudFormation - templates, change sets, and CLI

    AWS : CloudFormation Bootstrap UserData/Metadata

    AWS : CloudFormation - Creating an ASG with rolling update

    AWS : Cloudformation Cross-stack reference

    AWS : OpsWorks

    AWS : Network Load Balancer (NLB) with Autoscaling group (ASG)

    AWS CodeDeploy : Deploy an Application from GitHub

    AWS EC2 Container Service (ECS)

    AWS EC2 Container Service (ECS) II

    AWS Hello World Lambda Function

    AWS Lambda Function Q & A

    AWS Node.js Lambda Function & API Gateway

    AWS API Gateway endpoint invoking Lambda function

    AWS API Gateway invoking Lambda function with Terraform

    AWS API Gateway invoking Lambda function with Terraform - Lambda Container

    Amazon Kinesis Streams

    Kinesis Data Firehose with Lambda and ElasticSearch

    Amazon DynamoDB

    Amazon DynamoDB with Lambda and CloudWatch

    Loading DynamoDB stream to AWS Elasticsearch service with Lambda

    Amazon ML (Machine Learning)

    Simple Systems Manager (SSM)

    AWS : RDS Connecting to a DB Instance Running the SQL Server Database Engine

    AWS : RDS Importing and Exporting SQL Server Data

    AWS : RDS PostgreSQL & pgAdmin III

    AWS : RDS PostgreSQL 2 - Creating/Deleting a Table

    AWS : MySQL Replication : Master-slave

    AWS : MySQL backup & restore

    AWS RDS : Cross-Region Read Replicas for MySQL and Snapshots for PostgreSQL

    AWS : Restoring Postgres on EC2 instance from S3 backup

    AWS : Q & A

    AWS : Security

    AWS : Security groups vs. network ACLs

    AWS : Scaling-Up

    AWS : Networking

    AWS : Single Sign-on (SSO) with Okta

    AWS : JIT (Just-in-Time) with Okta





    Powershell 4 Tutorial



    Powershell : Introduction

    Powershell : Help System

    Powershell : Running commands

    Powershell : Providers

    Powershell : Pipeline

    Powershell : Objects

    Powershell : Remote Control

    Windows Management Instrumentation (WMI)

    How to Enable Multiple RDP Sessions in Windows 2012 Server

    How to install and configure FTP server on IIS 8 in Windows 2012 Server

    How to Run Exe as a Service on Windows 2012 Server

    SQL Inner, Left, Right, and Outer Joins





    Git/GitHub Tutorial



    One page express tutorial for GIT and GitHub

    Installation

    add/status/log

    commit and diff

    git commit --amend

    Deleting and Renaming files

    Undoing Things : File Checkout & Unstaging

    Reverting commit

    Soft Reset - (git reset --soft <SHA key>)

    Mixed Reset - Default

    Hard Reset - (git reset --hard <SHA key>)

    Creating & switching Branches

    Fast-forward merge

    Rebase & Three-way merge

    Merge conflicts with a simple example

    GitHub Account and SSH

    Uploading to GitHub

    GUI

    Branching & Merging

    Merging conflicts

    GIT on Ubuntu and OS X - Focused on Branching

    Setting up a remote repository / pushing local project and cloning the remote repo

    Fork vs Clone, Origin vs Upstream

    Git/GitHub Terminologies

    Git/GitHub via SourceTree II : Branching & Merging

    Git/GitHub via SourceTree III : Git Work Flow

    Git/GitHub via SourceTree IV : Git Reset

    Git wiki - quick command reference






    Subversion

    Subversion Install On Ubuntu 14.04

    Subversion creating and accessing I

    Subversion creating and accessing II







