Docker & Kubernetes - Helm Chart for Node/Express and MySQL with Ingress

Introduction

In this post, we'll use Helm v2 on Minikube to deploy a Node.js/Express "Hello, world" app with Ingress, and later we'll add MySQL to our Kubernetes cluster.

The code is available at https://github.com/Einsteinish/k8s-node-express-mysql-api-deploy-via-helm.





Node/Express app

We'll build a simple "Hello, world" API using Node.js and Express. Let's make a new api directory, initialize the package, and then install express:

$ npm install -g yarn 
/usr/local/bin/yarnpkg -> /usr/local/lib/node_modules/yarn/bin/yarn.js
/usr/local/bin/yarn -> /usr/local/lib/node_modules/yarn/bin/yarn.js
+ yarn@1.22.5

$ mkdir api 

$ cd api 

$ yarn init 
yarn init v1.22.5
question name (api): 
question version (1.0.0): 
question description: 
question entry point (index.js): 
question repository url: 
question author: 
question license (MIT): 
question private: 
success Saved package.json
Done in 25.57s.

$ yarn add express forever   
yarn add v1.22.5
...

app.js:

async function main() {
  const express  = require('express');
  const app      = express();
  const port     = 3000;
  app.get('/', (req, res) => res.json({hello: 'world'}));
  await app.listen(port);
  console.log(`Listening on port ${port}.`);
}
main();    

If we run node app and then connect to http://localhost:3000 we should see a {"hello":"world"} response:

$ node app
Listening on port 3000.
[Screenshot: {"hello":"world"} response at http://localhost:3000]
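
We can also check from the command line; the response body should match what the browser shows:

$ curl http://localhost:3000
{"hello":"world"}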

Dockerfile for the app

The following Dockerfile copies the contents of the api folder into the image and installs the dependencies. When a container is started from the image, the command forever app.js will run the app on the exposed port, 3000:

FROM node:dubnium-alpine
WORKDIR /var/www/node/api
COPY ./ ./
RUN npm install -g yarn forever --force && \
  yarn install --production --force
USER node
EXPOSE 3000
CMD ["forever", "app.js"]

.dockerignore:

node_modules
npm-debug.log    

Let's build the image:

$ docker build -t node-express-api .
...
Successfully built 550e3bcfc98e
Successfully tagged node-express-api:latest

$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
node-express-api                          latest              550e3bcfc98e        15 minutes ago      138MB
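
Before pushing, we can sanity-check the image by running a container and hitting the published port (the host-side port 3000 here is an arbitrary choice):

$ docker run --rm -d -p 3000:3000 --name api-test node-express-api
$ curl http://localhost:3000
{"hello":"world"}
$ docker stop api-test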

Then tag and push the image to a container repository.

$ docker tag node-express-api:latest dockerbogo/node-express-api:1.0.0
    
$ docker login -u dockerbogo
Password: 
Login Succeeded
    
$ docker push dockerbogo/node-express-api:1.0.0    

[Screenshot: the dockerbogo/node-express-api repository on Docker Hub]

Push the files into git:

$ git add api
$ git commit -m "initial commit"
$ git remote add origin https://github.com/Einsteinish/k8s-node-express-mysql-api-deploy-via-helm.git
$ git tag v1.0.0
$ git push origin master --tag    

So, the source is in here: https://github.com/Einsteinish/k8s-node-express-mysql-api-deploy-via-helm/releases/tag/v1.0.0







Minikube and Helm

We'll deploy our app to Minikube:

$ minikube start --vm=true
minikube v1.13.0 on Darwin 10.13.3
KUBECONFIG=/Users/kihyuckhong/.kube/config
Using the hyperkit driver based on existing profile
Starting control plane node minikube in cluster minikube
Updating the running hyperkit "minikube" VM ...
Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
Verifying Kubernetes components...
Verifying ingress addon...
Enabled addons: default-storageclass, ingress, storage-provisioner
Done! kubectl is now configured to use "minikube" by default

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/kihyuckhong/.minikube/ca.crt
    server: https://192.168.64.8:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/kihyuckhong/.minikube/profiles/minikube/client.crt
    client-key: /Users/kihyuckhong/.minikube/profiles/minikube/client.key

Kubernetes and its package manager, Helm, let us describe and version the state of a system in a Helm chart, and then deploy that chart to a local cluster (such as Minikube) or a cloud-based cluster.

Note that we'll use Helm v2:

[Image: Helm v2 vs Helm v3]
$ ls -l $(which helm)
lrwxr-xr-x  1 root  admin  30 Sep  6 14:37 /usr/local/bin/helm -> /usr/local/opt/helm@2/bin/helm

$ ls -l $(which helm3)
lrwxr-xr-x  1 kihyuckhong  admin  29 Sep  6 14:24 /usr/local/bin/helm3 -> ../Cellar/helm/3.3.1/bin/helm

$ helm version
Client: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}
Error: could not find tiller

We need to initialize Helm by adding the Helm server, Tiller, to our local cluster:

$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-258nf            1/1     Running   0          2m06s
etcd-minikube                      1/1     Running   0          2m10s
kube-apiserver-minikube            1/1     Running   0          2m10s
kube-controller-manager-minikube   1/1     Running   0          2m10s
kube-proxy-lzblz                   1/1     Running   0          2m10s
kube-scheduler-minikube            1/1     Running   0          2m10s
storage-provisioner                1/1     Running   0          2m10s

$ helm init
$HELM_HOME has been configured at /Users/kihyuckhong/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
...

$ kubectl get pods -n kube-system
NAME                               READY   STATUS              RESTARTS   AGE
coredns-f9fd979d6-258nf            1/1     Running   0          2m16s
etcd-minikube                      1/1     Running   0          2m20s
kube-apiserver-minikube            1/1     Running   0          2m20s
kube-controller-manager-minikube   1/1     Running   0          2m20s
kube-proxy-lzblz                   1/1     Running   0          2m16s
kube-scheduler-minikube            1/1     Running   0          2m20s
storage-provisioner                1/1     Running   0          2m20s
tiller-deploy-6fd89dcdc6-2tnrf     1/1     Running   0          85s

$ helm version
Client: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}




Create a Helm chart

Let's start with Helm's chart generator:

$ helm create api-deploy
Creating api-deploy

$ tree api-deploy
api-deploy
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

We need to define the values that Helm will pass to our templates, such as the Docker image tag and the number of replicas, in api-deploy/values.yaml:

replicaCount: 1

image:
  tag: latest    

The api-deploy/Chart.yaml file has some information about the Helm chart, such as its version, name, and description:

apiVersion: v1
version: 1.0.0
description: A Helm chart for the api.
name: api-deploy    

api-deploy/templates/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app.kubernetes.io/name: node-express-api
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Values.image.tag }}
    app.kubernetes.io/component: node-express-api
    app.kubernetes.io/part-of: node-express-api
    app.kubernetes.io/managed-by: helm2
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: node-express-api
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: node-express-api
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - name: node-express-api
        image: dockerbogo/node-express-api:{{ .Values.image.tag }}
        ports:
        - containerPort: 3000
          name: http
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 1
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 1
          periodSeconds: 5    

The Deployment defines a ReplicaSet. The number of pods created is set by spec.replicas, and the names of the resulting pods are based on the Deployment's name.

Note that template variables such as {{ .Values.image.tag }} allow us to add dynamic content to the manifest files. Note also that we're using our Docker image, dockerbogo/node-express-api, which exposes the API on port 3000.

Now we need to expose the API so that external traffic can get into our cluster, using a NodePort service type. This makes Kubernetes allocate a port on each node, and incoming traffic on that port is proxied to our API service. We need to populate api-deploy/templates/service.yaml with the following:

kind: Service
apiVersion: v1
metadata:
  name: api-service
  labels:
    app.kubernetes.io/name: node-express-api
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Values.image.tag }}
    app.kubernetes.io/component: node-express-api
    app.kubernetes.io/part-of: node-express-api
    app.kubernetes.io/managed-by: helm2
spec:
  selector:
    app.kubernetes.io/name: node-express-api
    app.kubernetes.io/instance: {{ .Release.Name }}
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: NodePort    

The spec.selector defines which pods this NodePort should target, and spec.ports says that incoming traffic on port 80 should target port 3000 on the matched pods. Note that Kubernetes will assign a virtual IP to the service, known as a ClusterIP, and allocate a high-number port that each node will proxy to the service. That is, once deployed we'll be able to access the API externally using an IP and port that k8s assigns.
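
Before installing anything, we can render the templates locally with helm template to verify the substitutions; the flag syntax is the same as for helm install (output trimmed):

$ helm template ./api-deploy --set replicaCount=2,image.tag=1.1.0 | grep -E 'replicas:|image:'
  replicas: 2
        image: dockerbogo/node-express-api:1.1.0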


Update git:

$ git add api-deploy
$ git commit -m "helm chart"
[master 391cf0c] helm chart
 5 files changed, 95 insertions(+)
 create mode 100644 api-deploy/.helmignore
 create mode 100644 api-deploy/Chart.yaml
 create mode 100644 api-deploy/templates/deployment.yaml
 create mode 100644 api-deploy/templates/service.yaml
 create mode 100644 api-deploy/values.yaml
$ git tag v1.1.0
$ git push origin master --tag     






Install the chart - deploy the app

Now that our deployment and service are defined, we can install the chart using Helm.

$ docker tag node-express-api:latest dockerbogo/node-express-api:1.1.0

$ docker push dockerbogo/node-express-api:1.1.0
The push refers to repository [docker.io/dockerbogo/node-express-api]    

Let's deploy it using the newly-created Docker image, 1.1.0, with 2 replicas:

$ helm install --name=node-express-api ./api-deploy --set replicaCount=2,image.tag=1.1.0    
NAME:   node-express-api
LAST DEPLOYED: Mon Sep  7 17:22:54 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
api-deployment  0/2    2           0          0s

==> v1/Pod(related)
NAME                            READY  STATUS             RESTARTS  AGE
api-deployment-c5ff48777-glmdk  0/1    ContainerCreating  0         0s
api-deployment-c5ff48777-w8885  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP   EXTERNAL-IP  PORT(S)       AGE
api-service  NodePort  10.96.69.62  <none>       80:31886/TCP  0s

$ kubectl get all 
NAME                                 READY   STATUS    RESTARTS   AGE
pod/api-deployment-c5ff48777-glmdk   1/1     Running   0          83s
pod/api-deployment-c5ff48777-w8885   1/1     Running   0          83s

NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/api-service   NodePort    10.96.69.62   <none>        80:31886/TCP   83s
service/kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        125m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment   2/2     2            2           83s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-c5ff48777   2         2         2       83s

$ kubectl rollout status deployment.v1.apps/api-deployment 
deployment "api-deployment" successfully rolled out


Connecting to the service

Kubernetes allocates a high-numbered port that our nodes will proxy to our service:

$ kubectl get svc
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
api-service   NodePort    10.96.69.62   <none>        80:31886/TCP   8m55s
kubernetes    ClusterIP   10.96.0.1     <none>        443/TCP        132m    

We can connect to the service via a browser by issuing the command minikube service api-service. The URL is our cluster's IP (minikube ip) combined with the proxy port that Kubernetes allocated:

$ minikube service api-service
|-----------|-------------|-------------|-------------------------|
| NAMESPACE |    NAME     | TARGET PORT |           URL           |
|-----------|-------------|-------------|-------------------------|
| default   | api-service |          80 | http://172.17.0.3:31886 |
|-----------|-------------|-------------|-------------------------|
Starting tunnel for service api-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE |    NAME     | TARGET PORT |          URL           |
|-----------|-------------|-------------|------------------------|
| default   | api-service |             | http://127.0.0.1:50666 |
|-----------|-------------|-------------|------------------------|
Opening service default/api-service in default browser...
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
...
[Screenshot: API response in the browser via the NodePort URL]
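
With a driver whose IP is reachable from the host (e.g. hyperkit), we can do the same from the shell; 31886 is the NodePort Kubernetes assigned above and will differ per deployment:

$ curl http://$(minikube ip):31886
{"hello":"world"}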





Ingress

In the previous section, we connected to our API on an ephemeral high-numbered port. That's not what we want; we should be able to reach our API on port 80.

We'll use an ingress resource to manage external access. An ingress resource defines rules that allow inbound connections to access the services within the cluster. Those rules require an ingress controller to be running inside the cluster.

Minikube ships with the ingress-nginx controller, which is the one we're going to use. We can enable it via the ingress addon:

$ kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-t6sqt            1/1     Running   0          5h25m
etcd-minikube                      1/1     Running   0          5h25m
kube-apiserver-minikube            1/1     Running   0          5h25m
kube-controller-manager-minikube   1/1     Running   0          5h25m
kube-proxy-vx5g5                   1/1     Running   0          5h25m
kube-scheduler-minikube            1/1     Running   0          5h25m
storage-provisioner                1/1     Running   1          5h25m
tiller-deploy-6fd89dcdc6-k4ts8     1/1     Running   0          151m
 
$ minikube addons enable ingress
Verifying ingress addon...
The 'ingress' addon is enabled

$ kubectl get pod -n kube-system
NAME                                       READY   STATUS      RESTARTS   AGE
coredns-f9fd979d6-t6sqt                    1/1     Running     0          5h25m
etcd-minikube                              1/1     Running     0          5h25m
ingress-nginx-admission-create-7d8xp       0/1     Completed   0          9s
ingress-nginx-admission-patch-k5wfj        0/1     Completed   0          9s
ingress-nginx-controller-789d9c4dc-w4jnv   0/1     Running     0          9s
kube-apiserver-minikube                    1/1     Running     0          5h25m
kube-controller-manager-minikube           1/1     Running     0          5h25m
kube-proxy-vx5g5                           1/1     Running     0          5h25m
kube-scheduler-minikube                    1/1     Running     0          5h25m
storage-provisioner                        1/1     Running     1          5h25m
tiller-deploy-6fd89dcdc6-k4ts8             1/1     Running     0          151m    

Note:
In this section, we'll also install an ingress controller as a chart dependency so that when we later deploy to a cloud provider, we can rest assured that the same controller is used in all our environments. However, I could not make that controller work and had to enable the ingress-nginx controller native to Minikube.


Still, we'll move on, and our Helm chart will declare nginx-ingress as a dependency.

Helm has two ways of handling dependencies: placing subcharts in the charts folder, or declaring them in a requirements.yaml file. We'll choose the latter in this post.

api-deploy/requirements.yaml:

dependencies:
- name: nginx-ingress
  version: 1.41.2
  repository: https://kubernetes-charts.storage.googleapis.com    

Note that the repository URL is the "stable" Helm repository, found by executing helm repo list:

$ helm repo list
NAME  	URL                                             
stable	https://kubernetes-charts.storage.googleapis.com
local 	http://127.0.0.1:8879/charts   

$ helm search stable/nginx
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                                                 
stable/nginx-ingress       	1.41.2       	v0.34.1    	An nginx Ingress controller that uses ConfigMap to store ...
stable/nginx-ldapauth-proxy	0.1.4        	1.13.5     	nginx proxy with ldapauth                                   
stable/nginx-lego          	0.3.1        	           	Chart for nginx-ingress-controller and kube-lego  

Now we need to define ingress rules, specifying where to route incoming traffic. Our routing rule is simple: route all HTTP traffic to our API service.

Note that the nginx-ingress controller provides capabilities such as routing traffic to different services based on path (layer-7 routing), adding sticky sessions, doing TLS termination, etc.


api-deploy/templates/ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  labels:
    app.kubernetes.io/name: node-express-api
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Values.image.tag }}
    app.kubernetes.io/component: node-express-api
    app.kubernetes.io/part-of: node-express-api
    app.kubernetes.io/managed-by: helm2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/hsts: "false"
spec:
  rules:
  - host: api.info
    http:
      paths:
      - path: /
        backend:
          serviceName: api-service
          servicePort: 80

The kubernetes.io/ingress.class annotation says explicitly that we'll be using the nginx controller. Note that we've disabled nginx.ingress.kubernetes.io/ssl-redirect and ingress.kubernetes.io/hsts because we don't want to redirect traffic to HTTPS or use HTTP Strict Transport Security (HSTS); we just want plain HTTP. Note also that the nginx controller requires annotation values to be strings, which is why false is quoted.

Regarding the rule specifications, we're simply sending all traffic to our API backend on port 80, per our service definition.

Now, let's update the chart's dependencies with helm dep update so that nginx-ingress is pulled in. The command updates charts/ based on the contents of requirements.yaml.

$ tree api-deploy/
api-deploy/
├── Chart.yaml
├── requirements.yaml
├── templates
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

$ kubectl get all
NAME                                 READY   STATUS    RESTARTS   AGE
pod/api-deployment-c5ff48777-49nss   1/1     Running   0          6m51s
pod/api-deployment-c5ff48777-bsl9d   1/1     Running   0          6m51s

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/api-service   NodePort    10.103.38.179   <none>        80:30735/TCP   6m51s
service/kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        6h39m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment   2/2     2            2           6m51s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-c5ff48777   2         2         2       6m51s
    
$ helm dep update ./api-deploy
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading nginx-ingress from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

$ tree api-deploy/
api-deploy/
├── Chart.yaml
├── charts
│   └── nginx-ingress-1.41.2.tgz
├── requirements.lock
├── requirements.yaml
├── templates
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

$ kubectl get all
NAME                                 READY   STATUS    RESTARTS   AGE
pod/api-deployment-c5ff48777-49nss   1/1     Running   0          10m
pod/api-deployment-c5ff48777-bsl9d   1/1     Running   0          10m

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/api-service   NodePort    10.103.38.179   <none>        80:30735/TCP   10m
service/kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        6h42m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment   2/2     2            2           10m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-c5ff48777   2         2         2       10m

List the releases:

$ helm list
NAME            	REVISION	UPDATED                 	STATUS  	CHART           	APP VERSION	NAMESPACE
node-express-api	1       	Mon Sep  7 17:22:54 2020	DEPLOYED	api-deploy-1.0.0	           	default      

Now, we want to upgrade our release using the helm upgrade command:

helm upgrade <release name> <chart path> 


The command will prompt Tiller to install the new dependency:

$ helm upgrade node-express-api ./api-deploy  
Release "node-express-api" has been upgraded.
LAST DEPLOYED: Mon Sep  7 23:42:41 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-08T06:42:42Z

==> v1/ClusterRoleBinding
NAME                            ROLE                                        AGE
node-express-api-nginx-ingress  ClusterRole/node-express-api-nginx-ingress  1s

==> v1/Deployment
NAME                                            READY  UP-TO-DATE  AVAILABLE  AGE
api-deployment                                  2/2    2           2          5m17s
node-express-api-nginx-ingress-controller       0/1    1           0          1s
node-express-api-nginx-ingress-default-backend  0/1    1           0          1s

==> v1/Ingress
NAME         CLASS   HOSTS  ADDRESS  PORTS  AGE
api-ingress  <none>  *              80     1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
api-deployment-c5ff48777-rmtv5                                   1/1    Running            0         5m17s
api-deployment-c5ff48777-t2fqv                                   1/1    Running            0         5m17s
node-express-api-nginx-ingress-controller-864d8ffcf9-nj4v8       0/1    ContainerCreating  0         1s
node-express-api-nginx-ingress-default-backend-6c7b8c7474-gf4k4  0/1    ContainerCreating  0         1s

==> v1/Role
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-08T06:42:42Z

==> v1/RoleBinding
NAME                            ROLE                                 AGE
node-express-api-nginx-ingress  Role/node-express-api-nginx-ingress  1s

==> v1/Service
NAME                                            TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                     AGE
api-service                                     NodePort      10.100.2.249  <none>       80:32174/TCP                5m17s
node-express-api-nginx-ingress-controller       LoadBalancer  10.100.90.89  <pending>    80:31980/TCP,443:32765/TCP  1s
node-express-api-nginx-ingress-default-backend  ClusterIP     10.96.246.51  <none>       80/TCP                      1s

==> v1/ServiceAccount
NAME                                    SECRETS  AGE
node-express-api-nginx-ingress          1        1s
node-express-api-nginx-ingress-backend  1        1s

$ kubectl get all 
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/api-deployment-c5ff48777-rmtv5                                    1/1     Running   0          6m58s
pod/api-deployment-c5ff48777-t2fqv                                    1/1     Running   0          6m58s
pod/node-express-api-nginx-ingress-controller-864d8ffcf9-nj4v8        1/1     Running   0          102s
pod/node-express-api-nginx-ingress-default-backend-6c7b8c7474-gf4k4   1/1     Running   0          102s

NAME                                                     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/api-service                                      NodePort       10.100.2.249   <none>        80:32174/TCP                 6m58s
service/kubernetes                                       ClusterIP      10.96.0.1      <none>        443/TCP                      8h
service/node-express-api-nginx-ingress-controller        LoadBalancer   10.100.90.89   <pending>     80:31980/TCP,443:32765/TCP   102s
service/node-express-api-nginx-ingress-default-backend   ClusterIP      10.96.246.51   <none>        80/TCP                       102s

NAME                                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment                                   2/2     2            2           6m58s
deployment.apps/node-express-api-nginx-ingress-controller        1/1     1            1           102s
deployment.apps/node-express-api-nginx-ingress-default-backend   1/1     1            1           102s

NAME                                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-c5ff48777                                    2         2         2       6m58s
replicaset.apps/node-express-api-nginx-ingress-controller-864d8ffcf9        1         1         1       102s
replicaset.apps/node-express-api-nginx-ingress-default-backend-6c7b8c7474   1         1         1       102s

Now we can see a few new services for the nginx controller. One is the nginx-ingress-controller itself, and the other is the nginx-ingress-default-backend. The default backend serves requests when no backend service can be reached or when an unknown path is requested.

$ minikube ip
192.168.64.8

$ cat /etc/hosts
...
192.168.64.8	api.info

Now, we can get the response on the browser using the hostname api.info defined in /etc/hosts:

[Screenshot: {"hello":"world"} response at http://api.info]
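
We can get the same response without touching /etc/hosts by passing the Host header explicitly:

$ curl -H "Host: api.info" http://$(minikube ip)/
{"hello":"world"}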

As mentioned earlier in this section, it appears we are not able to use the controller we deployed into the default namespace, so we still need the controller in the kube-system namespace:

$ kubectl get pod -n kube-system
NAME                                       READY   STATUS      RESTARTS   AGE
coredns-f9fd979d6-t6sqt                    1/1     Running     0          6h13m
etcd-minikube                              1/1     Running     0          6h13m
ingress-nginx-admission-create-7d8xp       0/1     Completed   0          48m
ingress-nginx-admission-patch-k5wfj        0/1     Completed   0          48m
ingress-nginx-controller-789d9c4dc-w4jnv   1/1     Running     0          48m
kube-apiserver-minikube                    1/1     Running     0          6h13m
kube-controller-manager-minikube           1/1     Running     0          6h13m
kube-proxy-vx5g5                           1/1     Running     0          6h13m
kube-scheduler-minikube                    1/1     Running     0          6h13m
storage-provisioner                        1/1     Running     1          6h13m
tiller-deploy-6fd89dcdc6-k4ts8             1/1     Running     0          3h19m    

The default ingress-nginx controller configuration watches Ingress objects in all namespaces (see the Installation Guide).
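
We can also confirm that the ingress resource picked up the cluster's address; the output should look similar to this (values taken from this deployment):

$ kubectl get ingress
NAME          CLASS    HOSTS      ADDRESS        PORTS   AGE
api-ingress   <none>   api.info   192.168.64.8   80      16h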



Push the new files into git:

$ git add api-deploy
$ git commit -m "adding ingress and its dependency"
$ git tag v2.0.0
$ git push origin master --tag    






MySQL

We'll add a MySQL dependency to our application. But deploying MySQL to a Kubernetes cluster by hand requires the daunting task of setting up all of the following:

  1. a stateful set to define a new pod
  2. a new service definition to allow communication with the database
  3. a persistent volume to manage non-volatile storage
  4. a persistent volume claim for the new pod

Here Helm does the heavy lifting for us.
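
Before wiring it in, we can see everything the stable/mysql chart makes configurable by inspecting its default values:

$ helm inspect values stable/mysql

Among them are mysqlDatabase, mysqlUser, existingSecret, and initializationFiles, which we'll set below.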

The following api-deploy/requirements.yaml declares MySQL as a dependency of our api chart:

dependencies:
- name: nginx-ingress
  version: 1.41.2
  repository: https://kubernetes-charts.storage.googleapis.com
- name: mysql
  version: 1.6.7
  repository: https://kubernetes-charts.storage.googleapis.com    

Here is our chart so far:

$ tree api-deploy/
api-deploy/
├── Chart.yaml
├── charts
│   └── nginx-ingress-1.41.2.tgz
├── requirements.lock
├── requirements.yaml
├── templates
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

Update our chart's dependencies:

$ helm dep update ./api-deploy 
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 2 charts
Downloading nginx-ingress from repo https://kubernetes-charts.storage.googleapis.com
Downloading mysql from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

$ tree api-deploy/ 
api-deploy/
├── Chart.yaml
├── charts
│   ├── mysql-1.6.7.tgz
│   └── nginx-ingress-1.41.2.tgz
├── requirements.lock
├── requirements.yaml
├── templates
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml

Create secrets using a kubectl create secret command:

$ echo -n "passw0rd" > mysql-password
$ echo -n "rootpassw0rd" > mysql-root-password
$ kubectl create secret generic api-db-pass --from-file=./mysql-root-password --from-file=./mysql-password
secret/api-db-pass created 

$ kubectl get secrets
NAME                                                 TYPE                                  DATA   AGE
api-db-pass                                          Opaque                                2      77s
default-token-s6tss                                  kubernetes.io/service-account-token   3      19h
...
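
We can double-check a stored value by decoding it; the data keys match the file names we used:

$ kubectl get secret api-db-pass -o jsonpath='{.data.mysql-password}' | base64 --decode
passw0rd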

api-deploy/values.yaml:

# api
replicaCount: 1
image:
  tag: latest

# mysql
mysql:
  mysqlDatabase: api_db
  mysqlUser: mysql-user
  existingSecret: api-db-pass
  initializationFiles:
    1-create-table-users.sql: |-
      CREATE TABLE users (
        userID INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
        firstName VARCHAR(255),
        lastName VARCHAR(255),
        createdOn TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP);
    2-add-dummy-users.sql: |-
      INSERT INTO users (firstName, lastName) VALUES ('Noam', 'Chomsky');
      INSERT INTO users (firstName, lastName) VALUES ('Elon', 'Musk');
      INSERT INTO users (firstName, lastName) VALUES ('Jeff', 'Bezos');
      INSERT INTO users (firstName, lastName) VALUES ('James', 'Watson');   

Upgrade:

$ helm list
NAME            	REVISION	UPDATED                 	STATUS  	CHART           	APP VERSION	NAMESPACE
node-express-api	1       	Tue Sep  8 20:01:07 2020	DEPLOYED	api-deploy-1.0.0	           	default 

$ helm upgrade node-express-api ./api-deploy 
Release "node-express-api" has been upgraded.
LAST DEPLOYED: Wed Sep  9 12:42:58 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-09T03:01:07Z

==> v1/ClusterRoleBinding
NAME                            ROLE                                        AGE
node-express-api-nginx-ingress  ClusterRole/node-express-api-nginx-ingress  16h

==> v1/ConfigMap
NAME                                   DATA  AGE
node-express-api-mysql-initialization  2     2s
node-express-api-mysql-test            1     2s

==> v1/Deployment
NAME                                            READY  UP-TO-DATE  AVAILABLE  AGE
api-deployment                                  2/2    2           2          16h
node-express-api-mysql                          0/1    1           0          2s
node-express-api-nginx-ingress-controller       1/1    1           1          16h
node-express-api-nginx-ingress-default-backend  1/1    1           1          16h

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
node-express-api-mysql  Bound   pvc-49df3888-5ebc-4763-9a3c-8332967cfb75  8Gi       RWO           standard      2s

==> v1/Pod(related)
NAME                                                             READY  STATUS    RESTARTS  AGE
api-deployment-c5ff48777-82lnn                                   1/1    Running   0         16h
api-deployment-c5ff48777-s7tf9                                   1/1    Running   0         16h
node-express-api-mysql-5c7cc9b6d4-hn75d                          0/1    Init:0/1  0         1s
node-express-api-nginx-ingress-controller-864d8ffcf9-tz9bh       1/1    Running   0         16h
node-express-api-nginx-ingress-default-backend-6c7b8c7474-7dsdc  1/1    Running   0         16h

==> v1/Role
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-09T03:01:07Z

==> v1/RoleBinding
NAME                            ROLE                                 AGE
node-express-api-nginx-ingress  Role/node-express-api-nginx-ingress  16h

==> v1/Service
NAME                                            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
api-service                                     NodePort      10.111.140.145  <none>       80:32308/TCP                16h
node-express-api-mysql                          ClusterIP     10.104.96.143   <none>       3306/TCP                    2s
node-express-api-nginx-ingress-controller       LoadBalancer  10.98.110.15    <pending>    80:30118/TCP,443:30714/TCP  16h
node-express-api-nginx-ingress-default-backend  ClusterIP     10.111.143.212  <none>       80/TCP                      16h

==> v1/ServiceAccount
NAME                                    SECRETS  AGE
node-express-api-nginx-ingress          1        16h
node-express-api-nginx-ingress-backend  1        16h

==> v1beta1/Ingress
NAME         CLASS   HOSTS     ADDRESS       PORTS  AGE
api-ingress  <none>  api.info  192.168.64.8  80     16h

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
api-deployment-c5ff48777-82lnn                                    1/1     Running   0          16h
api-deployment-c5ff48777-s7tf9                                    1/1     Running   0          16h
node-express-api-mysql-5c7cc9b6d4-hn75d                           1/1     Running   0          2m21s
node-express-api-nginx-ingress-controller-864d8ffcf9-tz9bh        1/1     Running   0          16h
node-express-api-nginx-ingress-default-backend-6c7b8c7474-7dsdc   1/1     Running   0          16h

We can go into the MySQL container, check the environment variables that were set, and see the tables created in api_db:

$ kubectl exec -it node-express-api-mysql-5c7cc9b6d4-hn75d -- bash
root@node-express-api-mysql-5c7cc9b6d4-hn75d:/# env |grep MYSQL
MYSQL_MAJOR=5.7
NODE_EXPRESS_API_MYSQL_PORT_3306_TCP=tcp://10.104.96.143:3306
NODE_EXPRESS_API_MYSQL_SERVICE_PORT_MYSQL=3306
NODE_EXPRESS_API_MYSQL_PORT_3306_TCP_PORT=3306
NODE_EXPRESS_API_MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_ROOT_PASSWORD=rootpassw0rd
MYSQL_PASSWORD=passw0rd
MYSQL_USER=mysql-user
MYSQL_VERSION=5.7.30-1debian10
NODE_EXPRESS_API_MYSQL_SERVICE_HOST=10.104.96.143
MYSQL_DATABASE=api_db
NODE_EXPRESS_API_MYSQL_PORT_3306_TCP_ADDR=10.104.96.143
NODE_EXPRESS_API_MYSQL_SERVICE_PORT=3306
NODE_EXPRESS_API_MYSQL_PORT=tcp://10.104.96.143:3306

root@node-express-api-mysql-5c7cc9b6d4-hn75d:/# mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} ${MYSQL_DATABASE}
mysql: [Warning] Using a password on the command line interface can be insecure.
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 237
Server version: 5.7.30 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> 
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| api_db             |
+--------------------+
2 rows in set (0.00 sec)

mysql> use api_db;
Database changed

mysql> show tables;
+------------------+
| Tables_in_api_db |
+------------------+
| users            |
+------------------+
1 row in set (0.00 sec)

mysql> select * from users;
+--------+-----------+----------+---------------------+
| userID | firstName | lastName | createdOn           |
+--------+-----------+----------+---------------------+
|      1 | Noam      | Chomsky  | 2020-09-09 19:44:14 |
|      2 | Elon      | Musk     | 2020-09-09 19:44:14 |
|      3 | Jeff      | Bezos    | 2020-09-09 19:44:14 |
|      4 | James     | Watson   | 2020-09-09 19:44:14 |
+--------+-----------+----------+---------------------+
4 rows in set (0.01 sec)

mysql> exit
Bye

root@node-express-api-mysql-5c7cc9b6d4-hn75d:/# exit
exit
$
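
Instead of exec'ing into the pod, we could also port-forward the MySQL service and query it with a local client (assuming a mysql client is installed on the host):

$ kubectl port-forward svc/node-express-api-mysql 3306:3306 &
$ mysql -h 127.0.0.1 -umysql-user -ppassw0rd api_db -e 'SELECT firstName, lastName FROM users;'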






Connecting MySQL with Node Express API

Now that the database is ready, we're going to connect our API to it.

We need to add a MySQL driver to our JavaScript code:

$ yarn add mysql2
yarn add v1.22.5
...

Let's update our api/app.js file so that it returns all the users when the /users route is hit:

async function main() {
  const mysql    = require('mysql2/promise');
  const express  = require('express');
  const app      = express();
  const port     = 3000;
  const connOpts = {
    host     : process.env.DB_HOST,
    user     : process.env.DB_USER,
    password : process.env.DB_PASSWORD,
    database : process.env.DB_DATABASE
  };
  const conn = await mysql.createConnection(connOpts);

  // Close the connection on ctrl+c.
  process.on('SIGINT', close);

  // Hello, world route.
  app.get('/', (req, res) => res.json({hello: 'world'}));

  // Get all users route.
  app.get('/users', getAllUsers);

  app.listen(port);
  console.log(`Listening on port ${port}.`);
  
  async function close() {
    await conn.end();
    process.exit(0);
  }

  async function getAllUsers(req, res) {
    const [users] = await conn.query(`
      SELECT  u.userID, u.firstName, u.lastName
      FROM    users u`);
    res.json(users);
  }
}

main();    

Note that we added lines of code to open and close the database connection, and a new GET /users route. The code gets the MySQL host, user, password, and database from environment variables, so our api-deploy/templates/deployment.yaml file needs to be updated to provide those variables to our node-express-api container.

The api-deploy/templates/deployment.yaml file now looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app.kubernetes.io/name: node-express-api
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Values.image.tag }}
    app.kubernetes.io/component: node-express-api
    app.kubernetes.io/part-of: node-express-api
    app.kubernetes.io/managed-by: helm2
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: node-express-api
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: node-express-api
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - name: node-express-api
        image: dockerbogo/node-express-api:{{ .Values.image.tag }}
        ports:
        - containerPort: 3000
          name: http
        readinessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 1
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 1
          periodSeconds: 5
        env:
        - name: DB_HOST
          value: {{ .Release.Name }}-mysql
        - name: DB_USER
          value: mysql-user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: api-db-pass
              key: mysql-password
        - name: DB_DATABASE
          value: api_db    

Build, tag, and push the new app image to Docker Hub:

$ docker build -t node-express-api .
...
Successfully built 11c5fa202757
Successfully tagged node-express-api:latest

$ docker images
REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
node-express-api                                          latest              11c5fa202757        23 seconds ago      138MB

$ docker tag node-express-api:latest dockerbogo/node-express-api:3.0.1
    
$ docker login -u dockerbogo
Password: 
Login Succeeded
    
$ docker push dockerbogo/node-express-api:3.0.1 
The push refers to repository [docker.io/dockerbogo/node-express-api]
...

We can check that the image is OK:

$ docker pull dockerbogo/node-express-api:3.0.1

$ docker run -it dockerbogo/node-express-api:3.0.1 sh

/var/www/node/api $ ls
Dockerfile    README.md     app.js        node_modules  package.json  yarn.lock

/var/www/node/api $ exit 
$


Upgrade our deployment using helm upgrade:

$ helm upgrade node-express-api ./api-deploy --set replicaCount=2,image.tag=3.0.1 
Release "node-express-api" has been upgraded.
LAST DEPLOYED: Wed Sep  9 17:23:21 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-09T03:01:07Z

==> v1/ClusterRoleBinding
NAME                            ROLE                                        AGE
node-express-api-nginx-ingress  ClusterRole/node-express-api-nginx-ingress  21h

==> v1/ConfigMap
NAME                                   DATA  AGE
node-express-api-mysql-initialization  2     4h40m
node-express-api-mysql-test            1     4h40m

==> v1/Deployment
NAME                                            READY  UP-TO-DATE  AVAILABLE  AGE
api-deployment                                  2/3    1           2          21h
node-express-api-mysql                          1/1    1           1          4h40m
node-express-api-nginx-ingress-controller       1/1    1           1          21h
node-express-api-nginx-ingress-default-backend  1/1    1           1          21h

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
node-express-api-mysql  Bound   pvc-49df3888-5ebc-4763-9a3c-8332967cfb75  8Gi       RWO           standard      4h40m

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
api-deployment-74cc69b6c4-sjshw                                  0/1    ContainerCreating  0         3s
api-deployment-c5ff48777-gknpv                                   0/1    ContainerCreating  0         4s
node-express-api-mysql-5c7cc9b6d4-hn75d                          1/1    Running            0         4h40m
node-express-api-nginx-ingress-controller-864d8ffcf9-tz9bh       1/1    Running            0         21h
node-express-api-nginx-ingress-default-backend-6c7b8c7474-7dsdc  1/1    Running            0         21h

==> v1/Role
NAME                            CREATED AT
node-express-api-nginx-ingress  2020-09-09T03:01:07Z

==> v1/RoleBinding
NAME                            ROLE                                 AGE
node-express-api-nginx-ingress  Role/node-express-api-nginx-ingress  21h

==> v1/Service
NAME                                            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
api-service                                     NodePort      10.111.140.145  <none>       80:32308/TCP                21h
node-express-api-mysql                          ClusterIP     10.104.96.143   <none>       3306/TCP                    4h40m
node-express-api-nginx-ingress-controller       LoadBalancer  10.98.110.15    <pending>    80:30118/TCP,443:30714/TCP  21h
node-express-api-nginx-ingress-default-backend  ClusterIP     10.111.143.212  <none>       80/TCP                      21h

==> v1/ServiceAccount
NAME                                    SECRETS  AGE
node-express-api-nginx-ingress          1        21h
node-express-api-nginx-ingress-backend  1        21h

==> v1beta1/Ingress
NAME         CLASS   HOSTS     ADDRESS       PORTS  AGE
api-ingress  <none>  api.info  192.168.64.8  80     21h

Check that the pods, services, and deployments are running:

$ kubectl get all
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/api-deployment-57f796fdbf-frlnv                                   1/1     Running   0          50m
pod/api-deployment-57f796fdbf-xjxdm                                   1/1     Running   0          50m
pod/node-express-api-mysql-5c7cc9b6d4-lwcdp                           1/1     Running   0          50m
pod/node-express-api-nginx-ingress-controller-864d8ffcf9-d4xf7        1/1     Running   0          50m
pod/node-express-api-nginx-ingress-default-backend-6c7b8c7474-s7mtn   1/1     Running   0          50m

NAME                                                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/api-service                                      NodePort       10.106.176.23   <none>        80:30422/TCP                 50m
service/kubernetes                                       ClusterIP      10.96.0.1       <none>        443/TCP                      44h
service/node-express-api-mysql                           ClusterIP      10.105.40.149   <none>        3306/TCP                     50m
service/node-express-api-nginx-ingress-controller        LoadBalancer   10.100.85.203   <pending>     80:30452/TCP,443:32122/TCP   50m
service/node-express-api-nginx-ingress-default-backend   ClusterIP      10.107.83.210   <none>        80/TCP                       50m

NAME                                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment                                   2/2     2            2           50m
deployment.apps/node-express-api-mysql                           1/1     1            1           50m
deployment.apps/node-express-api-nginx-ingress-controller        1/1     1            1           50m
deployment.apps/node-express-api-nginx-ingress-default-backend   1/1     1            1           50m

NAME                                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-57f796fdbf                                   2         2         2       50m
replicaset.apps/node-express-api-mysql-5c7cc9b6d4                           1         1         1       50m
replicaset.apps/node-express-api-nginx-ingress-controller-864d8ffcf9        1         1         1       50m
replicaset.apps/node-express-api-nginx-ingress-default-backend-6c7b8c7474   1         1         1       50m
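
We can verify that the database environment variables were injected into one of the new pods (pod name taken from the output above; the output should look similar to this):

$ kubectl exec api-deployment-57f796fdbf-frlnv -- env | grep DB_
DB_HOST=node-express-api-mysql
DB_USER=mysql-user
DB_PASSWORD=passw0rd
DB_DATABASE=api_db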


Everything appears to be OK, and we should be able to hit the new /users route via GET and get back the users we created earlier.

[Screenshot: the JSON user list at http://api.info/users]
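
From the command line, the new route returns the rows we seeded:

$ curl http://api.info/users
[{"userID":1,"firstName":"Noam","lastName":"Chomsky"},{"userID":2,"firstName":"Elon","lastName":"Musk"},{"userID":3,"firstName":"Jeff","lastName":"Bezos"},{"userID":4,"firstName":"James","lastName":"Watson"}]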

At this point, the code should look like this: https://github.com/Einsteinish/k8s-node-express-mysql-api-deploy-via-helm/releases/tag/v3.0.1


Push the files into git:

$ git add api api-deploy
$ git commit -m "added connection to MySQL with app"
$ git tag v3.0.1
$ git push origin master --tag    






Cleanup

Delete and purge:

$ helm list
NAME            	REVISION	UPDATED                 	STATUS  	CHART           	APP VERSION	NAMESPACE
node-express-api	3       	Thu Sep 10 12:13:22 2020	DEPLOYED	api-deploy-1.0.0	           	default  

$ helm del --purge node-express-api 
release "node-express-api" deleted







Docker & K8s

  1. Docker install on Amazon Linux AMI
  2. Docker install on EC2 Ubuntu 14.04
  3. Docker container vs Virtual Machine
  4. Docker install on Ubuntu 14.04
  5. Docker Hello World Application
  6. Nginx image - share/copy files, Dockerfile
  7. Working with Docker images : brief introduction
  8. Docker image and container via docker commands (search, pull, run, ps, restart, attach, and rm)
  9. More on docker run command (docker run -it, docker run --rm, etc.)
  10. Docker Networks - Bridge Driver Network
  11. Docker Persistent Storage
  12. File sharing between host and container (docker run -d -p -v)
  13. Linking containers and volume for datastore
  14. Dockerfile - Build Docker images automatically I - FROM, MAINTAINER, and build context
  15. Dockerfile - Build Docker images automatically II - revisiting FROM, MAINTAINER, build context, and caching
  16. Dockerfile - Build Docker images automatically III - RUN
  17. Dockerfile - Build Docker images automatically IV - CMD
  18. Dockerfile - Build Docker images automatically V - WORKDIR, ENV, ADD, and ENTRYPOINT
  19. Docker - Apache Tomcat
  20. Docker - NodeJS
  21. Docker - NodeJS with hostname
  22. Docker Compose - NodeJS with MongoDB
  23. Docker - Prometheus and Grafana with Docker-compose
  24. Docker - StatsD/Graphite/Grafana
  25. Docker - Deploying a Java EE JBoss/WildFly Application on AWS Elastic Beanstalk Using Docker Containers
  26. Docker : NodeJS with GCP Kubernetes Engine
  27. Docker : Jenkins Multibranch Pipeline with Jenkinsfile and Github
  28. Docker : Jenkins Master and Slave
  29. Docker - ELK : ElasticSearch, Logstash, and Kibana
  30. Docker - ELK 7.6 : Elasticsearch on Centos 7
  31. Docker - ELK 7.6 : Filebeat on Centos 7
  32. Docker - ELK 7.6 : Logstash on Centos 7
  33. Docker - ELK 7.6 : Kibana on Centos 7
  34. Docker - ELK 7.6 : Elastic Stack with Docker Compose
  35. Docker - Deploy Elastic Cloud on Kubernetes (ECK) via Elasticsearch operator on minikube
  36. Docker - Deploy Elastic Stack via Helm on minikube
  37. Docker Compose - A gentle introduction with WordPress
  38. Docker Compose - MySQL
  39. MEAN Stack app on Docker containers : micro services
  40. MEAN Stack app on Docker containers : micro services via docker-compose
  41. Docker Compose - Hashicorp's Vault and Consul Part A (install vault, unsealing, static secrets, and policies)
  42. Docker Compose - Hashicorp's Vault and Consul Part B (EaaS, dynamic secrets, leases, and revocation)
  43. Docker Compose - Hashicorp's Vault and Consul Part C (Consul)
  44. Docker Compose with two containers - Flask REST API service container and an Apache server container
  45. Docker compose : Nginx reverse proxy with multiple containers
  46. Docker & Kubernetes : Envoy - Getting started
  47. Docker & Kubernetes : Envoy - Front Proxy
  48. Docker & Kubernetes : Ambassador - Envoy API Gateway on Kubernetes
  49. Docker Packer
  50. Docker Cheat Sheet
  51. Docker Q & A #1
  52. Kubernetes Q & A - Part I
  53. Kubernetes Q & A - Part II
  54. Docker - Run a React app in a docker
  55. Docker - Run a React app in a docker II (snapshot app with nginx)
  56. Docker - NodeJS and MySQL app with React in a docker
  57. Docker - Step by Step NodeJS and MySQL app with React - I
  58. Installing LAMP via puppet on Docker
  59. Docker install via Puppet
  60. Nginx Docker install via Ansible
  61. Apache Hadoop CDH 5.8 Install with QuickStarts Docker
  62. Docker - Deploying Flask app to ECS
  63. Docker Compose - Deploying WordPress to AWS
  64. Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI EC2 type)
  65. Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI Fargate type)
  66. Docker - ECS Fargate
  67. Docker - AWS ECS service discovery with Flask and Redis
  68. Docker & Kubernetes : minikube
  69. Docker & Kubernetes 2 : minikube Django with Postgres - persistent volume
  70. Docker & Kubernetes 3 : minikube Django with Redis and Celery
  71. Docker & Kubernetes 4 : Django with RDS via AWS Kops
  72. Docker & Kubernetes : Kops on AWS
  73. Docker & Kubernetes : Ingress controller on AWS with Kops
  74. Docker & Kubernetes : HashiCorp's Vault and Consul on minikube
  75. Docker & Kubernetes : HashiCorp's Vault and Consul - Auto-unseal using Transit Secrets Engine
  76. Docker & Kubernetes : Persistent Volumes & Persistent Volumes Claims - hostPath and annotations
  77. Docker & Kubernetes : Persistent Volumes - Dynamic volume provisioning
  78. Docker & Kubernetes : DaemonSet
  79. Docker & Kubernetes : Secrets
  80. Docker & Kubernetes : kubectl command
  81. Docker & Kubernetes : Assign a Kubernetes Pod to a particular node in a Kubernetes cluster
  82. Docker & Kubernetes : Configure a Pod to Use a ConfigMap
  83. AWS : EKS (Elastic Container Service for Kubernetes)
  84. Docker & Kubernetes : Run a React app in a minikube
  85. Docker & Kubernetes : Minikube install on AWS EC2
  86. Docker & Kubernetes : Cassandra with a StatefulSet
  87. Docker & Kubernetes : Terraform and AWS EKS
  88. Docker & Kubernetes : Pods and Service definitions
  89. Docker & Kubernetes : Service IP and the Service Type
  90. Docker & Kubernetes : Kubernetes DNS with Pods and Services
  91. Docker & Kubernetes : Headless service and discovering pods
  92. Docker & Kubernetes : Scaling and Updating application
  93. Docker & Kubernetes : Horizontal pod autoscaler on minikubes
  94. Docker & Kubernetes : From a monolithic app to micro services on GCP Kubernetes
  95. Docker & Kubernetes : Rolling updates
  96. Docker & Kubernetes : Deployments to GKE (Rolling update, Canary and Blue-green deployments)
  97. Docker & Kubernetes : Slack Chat Bot with NodeJS on GCP Kubernetes
  98. Docker & Kubernetes : Continuous Delivery with Jenkins Multibranch Pipeline for Dev, Canary, and Production Environments on GCP Kubernetes
  99. Docker & Kubernetes : NodePort vs LoadBalancer vs Ingress
  100. Docker & Kubernetes : MongoDB / MongoExpress on Minikube
  101. Docker & Kubernetes : Load Testing with Locust on GCP Kubernetes
  102. Docker & Kubernetes : MongoDB with StatefulSets on GCP Kubernetes Engine
  103. Docker & Kubernetes : Nginx Ingress Controller on Minikube
  104. Docker & Kubernetes : Setting up Ingress with NGINX Controller on Minikube (Mac)
  105. Docker & Kubernetes : Nginx Ingress Controller for Dashboard service on Minikube
  106. Docker & Kubernetes : Nginx Ingress Controller on GCP Kubernetes
  107. Docker & Kubernetes : Kubernetes Ingress with AWS ALB Ingress Controller in EKS
  108. Docker & Kubernetes : Setting up a private cluster on GCP Kubernetes
  109. Docker & Kubernetes : Kubernetes Namespaces (default, kube-public, kube-system) and switching namespaces (kubens)
  110. Docker & Kubernetes : StatefulSets on minikube
  111. Docker & Kubernetes : RBAC
  112. Docker & Kubernetes Service Account, RBAC, and IAM
  113. Docker & Kubernetes - Kubernetes Service Account, RBAC, IAM with EKS ALB, Part 1
  114. Docker & Kubernetes : Helm Chart
  115. Docker & Kubernetes : My first Helm deploy
  116. Docker & Kubernetes : Readiness and Liveness Probes
  117. Docker & Kubernetes : Helm chart repository with Github pages
  118. Docker & Kubernetes : Deploying WordPress and MariaDB with Ingress to Minikube using Helm Chart
  119. Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 2 Chart
  120. Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 3 Chart
  121. Docker & Kubernetes : Helm Chart for Node/Express and MySQL with Ingress
  122. Docker & Kubernetes : Deploy Prometheus and Grafana using Helm and Prometheus Operator - Monitoring Kubernetes node resources out of the box
  123. Docker & Kubernetes : Deploy Prometheus and Grafana using kube-prometheus-stack Helm Chart
  124. Docker & Kubernetes : Istio (service mesh) sidecar proxy on GCP Kubernetes
  125. Docker & Kubernetes : Istio on EKS
  126. Docker & Kubernetes : Istio on Minikube with AWS EC2 for Bookinfo Application
  127. Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part I)
  128. Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part II - Prometheus, Grafana, pin a service, split traffic, and inject faults)
  129. Docker & Kubernetes : Helm Package Manager with MySQL on GCP Kubernetes Engine
  130. Docker & Kubernetes : Deploying Memcached on Kubernetes Engine
  131. Docker & Kubernetes : EKS Control Plane (API server) Metrics with Prometheus
  132. Docker & Kubernetes : Spinnaker on EKS with Halyard
  133. Docker & Kubernetes : Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine
  134. Docker & Kubernetes : Multi-node Local Kubernetes cluster : Kubeadm-dind (docker-in-docker)
  135. Docker & Kubernetes : Multi-node Local Kubernetes cluster : Kubeadm-kind (k8s-in-docker)
  136. Docker & Kubernetes : nodeSelector, nodeAffinity, taints/tolerations, pod affinity and anti-affinity - Assigning Pods to Nodes
  137. Docker & Kubernetes : Jenkins-X on EKS
  138. Docker & Kubernetes : ArgoCD App of Apps with Helm on Kubernetes
  139. Docker & Kubernetes : ArgoCD on Kubernetes cluster
  140. Docker & Kubernetes : GitOps with ArgoCD for Continuous Delivery to Kubernetes clusters (minikube) - guestbook




Sponsor Open Source development activities and free contents for everyone.

Thank you.

- K Hong







Contact

BogoToBogo
contactus@bogotobogo.com
Pacific Ave, San Francisco, CA 94115

Copyright © 2024, bogotobogo
Design: Web Master