Docker / Kubernetes - Scaling and Updating an Application

Note

This post starts from the basics of Kubernetes cluster orchestration. We'll learn major Kubernetes features and concepts, including how to scale and update an app via Minikube:

  1. Deploy a containerized application on a cluster.
  2. Scale the deployment.
  3. Update the containerized application with a new software version.
  4. Debug the containerized application.

This post is largely based on https://kubernetes.io/docs/tutorials/kubernetes-basics/.


Prerequisite: minikube / kubectl should be installed:

$ minikube version
minikube version: v1.9.2

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.9", GitCommit:"16236ce91790d4c75b79f6ce96841db1c843e7d2", GitTreeState:"clean", BuildDate:"2019-03-27T14:42:18Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

kubectl is configured, and we can see the versions of both the client and the server. The client version is the kubectl version; the server version is the Kubernetes version installed on the master.


Start the cluster by running the minikube start command:

$ minikube start
😄  minikube v1.9.2 on Darwin 10.13.3
    ▪ KUBECONFIG=/Users/kihyuckhong/.kube/config
✨  Using the hyperkit driver based on existing profile
👍  Starting control plane node m01 in cluster minikube
🔄  Restarting existing hyperkit VM for "minikube" ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

❗  /Users/kihyuckhong/bin/kubectl is v1.11.9, which may be incompatible with Kubernetes v1.18.0.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version

Minikube started a virtual machine for us, and a Kubernetes cluster is now running in that VM.







Creating a cluster

A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, we'll use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on our local machine and deploys a simple cluster containing only one node.

The Minikube CLI provides basic bootstrapping operations for working with our cluster, including start, stop, status, and delete.


Let's view the cluster details. We'll do that by running kubectl cluster-info:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.64.2:8443
KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To view the nodes in the cluster, run the kubectl get nodes command:

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    23h       v1.18.0   

This command shows all nodes that can be used to host our applications. Now we have only one node, and we can see that its status is ready (it is ready to accept applications for deployment).







Docker deploy

Before we deploy to the Kubernetes cluster, let's do it with plain Docker.

For the Deployment, we'll use a Node.js application packaged in a Docker container from the Hello Minikube tutorial, as shown in the picture below:

hello-minikube-nodejs-Dockerfile.png

Let's first run it locally:

$ ls
Dockerfile	server.js

$ node server.js

In another terminal:

$ curl -i localhost:8080
HTTP/1.1 200 OK
Date: Fri, 01 May 2020 17:27:06 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Hello World!

Now, let's run it within a container using Docker:

$ docker build -t echoserver .
...
Successfully tagged echoserver:latest

$ docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED              SIZE
echoserver                                                        latest              f42b5d3cbd29        About a minute ago   660MB


Running our image with -d starts the container in detached mode, leaving it running in the background. The -p flag maps a public (host) port to a private port inside the container. Run the image we built:

$ docker run -p 8080:8080 -d echoserver
f9e57d92323b15e67643aca414d427857301298429830750dd6e249aa3b39832

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
f9e57d92323b        echoserver          "node server.js"    4 seconds ago       Up 3 seconds        0.0.0.0:8080->8080/tcp   keen_golick    

$ curl -i localhost:8080
HTTP/1.1 200 OK
Date: Fri, 01 May 2020 17:34:03 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Hello World!
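When we're done testing, the detached container can be stopped and removed; this sketch reuses the container ID shown by docker ps above:

```shell
# Stop and remove the detached container (ID from `docker ps` above)
docker stop f9e57d92323b
docker rm f9e57d92323b
```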

Run a container in a pod

Kubernetes is usually configured using YAML files. Here is a YAML file (pod.yaml) for running the hello-node image in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: hello-node-pod
  labels:
    app: hello-node
spec:
  containers:
  - name: hello-node
    image: dockerbogo/echoserver:v1
    imagePullPolicy: IfNotPresent    

  1. The definition uses the Kubernetes API v1.
  2. The kind of resource being defined is a Pod.
  3. There is some metadata and a specification for the Pod.
    1. In the metadata there is a name for the Pod, and a label is also applied.
    2. The specification says what is to go inside the pod. There is just one container called hello-node, and the image is dockerbogo/echoserver:v1. Each container needs a name for identification purposes.
    3. Setting imagePullPolicy to IfNotPresent means that the kubelet pulls the image only if it is not already present locally.

Because minikube, by default, pulls images from a public registry, let's push ours to Docker Hub:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: dockerbogo
Password: 
Login Succeeded

$ docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
echoserver                                                        latest              f42b5d3cbd29        1 hours ago         660MB    

$ docker tag f42b5d3cbd29 dockerbogo/echoserver:v1

$ docker push dockerbogo/echoserver 
The push refers to repository [docker.io/dockerbogo/echoserver]
a8f92026c82f: Pushed 
aeaa1edefd60: Mounted from library/node 
...


We can tell Kubernetes to act on the contents of a YAML file with the apply subcommand:

$ kubectl apply -f pod.yaml
pod/hello-node-pod created

Now that the cluster knows about the pod, we can use get to see information about it:

$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
hello-node-pod   1/1       Running   0          3m    

We can use describe to get more information about the pod:

$ kubectl describe pod hello-node-pod
Name:               hello-node-pod
Namespace:          default
...
Labels:             app=hello-node
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"hello-node"},"name":"hello-node-pod","namespace":"default"},"spec":{"cont...
Status:             Running
IP:                 172.17.0.4
Containers:
  hello-node:
    Container ID:   docker://fff26073236a3feade75323b2e832b099b71d2859362bea846fae675c08d6d6d
    Image:          dockerbogo/echoserver:v1
    ...
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pjv8t (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-pjv8t:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pjv8t
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
...

  1. The name of the pod is as specified in the pod's metadata.
  2. We see information about the pod and about the container within it. In the Containers section of the describe output we should see the container called hello-node. It includes the Container ID, as well as the name and ID of the image this container was created from.
  3. In the description we should see an IP address. Each pod in Kubernetes gets allocated its own IP address.

Kubernetes performs several actions when it is asked to run a pod:

  1. It selects a node for the pod to run on. In this scenario there is only one node so it's a very simple choice.
  2. If the node doesn't already have a copy of the container image for each container in the pod specification, it will pull it from the container registry.
  3. Once pulled, the node can start running the container(s) for the pod.
  4. The component on each node that runs containers is called the kubelet. The kubelet has done the conceptual equivalent of docker run for us.
  5. In practice the component that runs containers might be Docker, or it might be an alternative such as Red Hat's CRI-O. This is called the container runtime.

To stop the code from running, we need to delete the pod:

$ kubectl delete pod hello-node-pod
pod "hello-node-pod" deleted





Creating a deployment

Once we have a running Kubernetes cluster, we can deploy our containerized applications on top of it.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    23h       v1.18.0    

Here we see the available nodes (1 in our case). Kubernetes will choose where to deploy our application based on the available resources of each Node.


To do so, we create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of our application.

Once we've created a Deployment, the Kubernetes master schedules the application instances included in that Deployment to run on individual Nodes in the cluster.

We can create and manage a Deployment by using the Kubernetes command line interface, kubectl, which uses the Kubernetes API to interact with the cluster.

When we create a Deployment, we'll need to specify the container image for our application and the number of replicas that we want to run.


One of the benefits of Kubernetes is the ability to run multiple instances of a pod. The easiest way to do this is with a deployment. The following deployment.yaml defines a deployment that will run two instances (replicas) of pods that are practically identical to the pod we just ran:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: dockerbogo/echoserver:v1    

  1. The first metadata section of the file applies to the Deployment object (not the pods).
  2. The pods take their definition from the template part of the YAML definition. This includes the metadata that will apply to the pods.
  3. Kubernetes will autogenerate a name for each pod based on the deployment name plus some random characters.
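As a quick illustration of that naming scheme, the deployment name can be recovered from a generated pod name by stripping the last two dash-separated segments; the sample pod name below is taken from the kubectl get pods output further down:

```shell
# Generated pod names look like [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH]-[RANDOM]
pod_name='hello-node-deployment-5c6bb445fb-g25n5'
deployment=${pod_name%-*-*}   # strip the hash and the random suffix
echo "$deployment"            # prints: hello-node-deployment
```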

Let's apply this deployment:

$ kubectl apply -f deployment.yaml
deployment.apps/hello-node-deployment created

$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          14s
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          14s

We just deployed our first application by creating a deployment. This performed a few things for us:

  1. searched for a suitable node where an instance of the application could be run (we have only 1 available node)
  2. scheduled the application to run on that Node
  3. configured the cluster to reschedule the instance on a new Node when needed

Pods that are running inside Kubernetes are running on a private, isolated network. By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.

The kubectl command can create a proxy that will forward communications into the cluster-wide, private network. The proxy won't show any output while it's running and can be terminated by pressing Ctrl-C. Let's run the proxy:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Now we have a connection between our host and the Kubernetes cluster. The proxy enables direct access to the API from the terminal.

We can see all those APIs hosted through the proxy endpoint. For example, we can query the version directly through the API using the curl command:


On a second terminal window:

$ curl http://localhost:8001/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.0",
  "gitCommit": "9e991415386e4cf155a24b1da15becaa390438d8",
  "gitTreeState": "clean",
  "buildDate": "2020-03-25T14:50:46Z",
  "goVersion": "go1.13.8",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The API server will automatically create an endpoint for each pod, based on the pod name, that is also accessible through the proxy.

When we have multiple instances of a container image, we'll typically want to load balance requests across them so that they share the incoming load. This is achieved in Kubernetes with a Service. In other words, for the new Deployment to be accessible without using the proxy, a Service is required, which is explained in the next section.







Creating a Service

A Kubernetes Service groups together a collection of pods and makes them accessible as a service. Here is the file, service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-node-svc
spec:
  type: NodePort
  ports:
  - targetPort: 8080
    port: 30000
  selector:
    app: hello-node    

  1. The type of the service we're using here is NodePort.
  2. This service accepts requests on port 30000 inside the cluster and forwards them to port 8080 on any of the service's pods (with NodePort, an externally reachable port on the node is also assigned automatically).
  3. The service uses the app: hello-node label as a selector to identify the pods that it will load balance between.

Make sure that the deployment that we ran earlier still exists:

$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          163m
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          163m    

Let's create a service:

$ kubectl apply -f service.yaml    
service/hello-node-svc created

$ kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
hello-node-svc   NodePort    10.106.113.201   <none>        30000:32427/TCP   5s
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP           47h

As we can see, a cluster IP address has been allocated to this service. We can use this address to make requests to the service from within the cluster. But from outside (from the host machine), we access the service through the assigned NodePort like this:

$ curl $(minikube ip):32427
Hello World!    

Note that we can get the cluster IP info:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.64.2:8443
KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.    

Or:

$ minikube ip
192.168.64.2

Let's clean up the service:

$ kubectl delete -f service.yaml
service "hello-node-svc" deleted




Using a Service to Expose Our App

In this section we will learn how to expose Kubernetes applications outside the cluster using the kubectl expose command instead of applying the service yaml file. We will also learn how to view and apply labels to objects with the kubectl label command.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    23h       v1.18.0    

$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          3h23m
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          3h23m

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d

$ kubectl get deployments
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   2/2       2            2           3h25m

We have a Service called kubernetes that is created by default when minikube starts the cluster.

To create a new service and expose it to external traffic we'll use the expose command with NodePort as parameter (minikube does not support the LoadBalancer option yet).

$ kubectl expose deployment/hello-node-deployment --type="NodePort" --port 8080
service/hello-node-deployment exposed    

$ kubectl get svc
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node-deployment   NodePort    10.106.158.78   <none>        8080:30518/TCP   58s
kubernetes              ClusterIP   10.96.0.1       <none>        443/TCP          2d

We now have a running Service called hello-node-deployment. Here we see that the Service received a unique cluster IP and an internal port; with type NodePort it is also reachable externally via the Node's IP.

To find out which port was opened externally (by the NodePort option), we'll run the describe service subcommand:

$ kubectl describe services/hello-node-deployment
Name:                     hello-node-deployment
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.106.158.78
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30518/TCP
Endpoints:                172.17.0.4:8080,172.17.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>    

Create an environment variable called NODE_PORT that has the value of the Node port assigned:

$ export NODE_PORT=$(kubectl get services/hello-node-deployment -o go-template='{{(index .spec.ports 0).nodePort}}')

$ echo NODE_PORT=$NODE_PORT  
NODE_PORT=30518
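As a sanity check on the PORT(S) column format, the NodePort is the part between the colon and the slash, which plain shell parameter expansion can peel out (sample value copied from the kubectl get svc output above):

```shell
# PORT(S) shows service-port:node-port/protocol, e.g. 8080:30518/TCP
ports='8080:30518/TCP'
node_port=${ports#*:}        # drop everything up to the colon -> 30518/TCP
node_port=${node_port%/*}    # drop the protocol suffix       -> 30518
echo "$node_port"            # prints: 30518
```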

Now we can test that the app is exposed outside of the cluster using curl, the IP of the Node and the externally exposed port, curl $(minikube ip):$NODE_PORT:

$ curl $(minikube ip):$NODE_PORT  
Hello World!

From the response, we can see the Service is exposed!







Connecting to the service from within the cluster

In the previous section, we were able to get a response from the node server via the exposed NodePort from outside the cluster:

$ minikube ip
192.168.64.2   

$ curl $(minikube ip):$NODE_PORT  
Hello World!

How about from within?

There are two ways of connecting to the hello-node service from within the cluster. We will access it from an alpine pod:

$ kubectl run --generator=run-pod/v1 --image=alpine -it my-alpine-shell -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
v3.11.6-10-g3d1aef7a83 [http://dl-cdn.alpinelinux.org/alpine/v3.11/main]
v3.11.6-13-g5da24b5794 [http://dl-cdn.alpinelinux.org/alpine/v3.11/community]
OK: 11270 distinct packages available

/ # apk add curl
(1/4) Installing ca-certificates (20191127-r1)
(2/4) Installing nghttp2-libs (1.40.0-r0)
(3/4) Installing libcurl (7.67.0-r0)
(4/4) Installing curl (7.67.0-r0)
Executing busybox-1.31.1-r9.trigger
Executing ca-certificates-20191127-r1.trigger
OK: 7 MiB in 18 packages
/ # curl 10.106.158.78:8080
Hello World!/ # 
/ # curl 172.17.0.4:8080
Hello World!/ # 
/ # curl 172.17.0.7:8080
Hello World!/ # 

The IPs and the port info are available from the output of the following commands:

$ kubectl get svc
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node-deployment   NodePort    10.106.158.78   <none>        8080:30518/TCP   123m
kubernetes              ClusterIP   10.96.0.1       <none>        443/TCP          2d2h    

$ kubectl get pods -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE   READINESS GATES
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          6h16m     172.17.0.4   minikube   <none>           <none>
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          6h16m     172.17.0.7   minikube   <none>           <none>
my-alpine-shell                          1/1       Running   0          15m       172.17.0.8   minikube   <none>           <none>

$ kubectl get ep
NAME                    ENDPOINTS                         AGE
hello-node-deployment   172.17.0.4:8080,172.17.0.7:8080   149m
kubernetes              192.168.64.2:8443                 2d3h

As a summary, here are the two ways of connecting to the service within the cluster:

  1. Via the cluster IP of the service:

     / # curl 10.106.158.78:8080

  2. Via the pod IPs:

     / # curl 172.17.0.4:8080
     / # curl 172.17.0.7:8080
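Besides the cluster IP and the pod IPs, a third option (not shown above) is the service's DNS name, resolved by KubeDNS. This is a sketch to be run from inside a pod (e.g. the alpine pod), assuming the hello-node-deployment service created by the earlier expose command still exists:

```shell
# From inside any pod in the same namespace, the service name resolves via KubeDNS
curl hello-node-deployment:8080

# The fully qualified form works from any namespace
curl hello-node-deployment.default.svc.cluster.local:8080
```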




Using labels

The Deployment automatically created a label for our Pods. With the describe service command we can see the label used as the Selector:

$ kubectl describe services/hello-node-deployment
Name:                     hello-node-deployment
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.106.158.78
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30518/TCP
Endpoints:                172.17.0.4:8080,172.17.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

$ kubectl get pods -l app=hello-node
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          16h
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          16h    

We will apply a new label to our Pod, and we can check it with the describe pod command:

$ kubectl label pods hello-node-deployment-5c6bb445fb-g25n5 new-label=awesome
pod/hello-node-deployment-5c6bb445fb-g25n5 labeled    

$ kubectl describe pod/hello-node-deployment-5c6bb445fb-g25n5
Name:               hello-node-deployment-5c6bb445fb-g25n5
Namespace:          default
Priority:           0
PriorityClassName:  
Node:               minikube/192.168.64.2
Start Time:         Fri, 01 May 2020 15:43:59 -0700
Labels:             app=hello-node
                    new-label=awesome
                    pod-template-hash=5c6bb445fb
...

We see here that the label is now attached to our Pod.

Now we can query the list of pods using the new label:

$ kubectl get pods -l new-label=awesome
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          17h    
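Labels can be removed as easily as they are added, by appending a dash to the label key (the pod name is the one from the output above):

```shell
# Remove the label by suffixing its key with a dash
kubectl label pods hello-node-deployment-5c6bb445fb-g25n5 new-label-
```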




Deleting a Service

To delete Services we can use the delete service command.

$ kubectl delete service hello-node-deployment
service "hello-node-deployment" deleted

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d14h    

This confirms that our Service was removed. To confirm that the route is no longer exposed, we can curl the previously exposed IP and port:

$ curl $(minikube ip):30518  
curl: (7) Failed to connect to 192.168.64.2 port 30518: Connection refused

This proves that the app is no longer reachable from outside the cluster. We can confirm that the app is still running with a curl inside the pod:

$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          17h
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          17h

$ kubectl exec -it hello-node-deployment-5c6bb445fb-g25n5 curl localhost:8080
Hello World!

We see here that the application is up. This is because the Deployment is managing the application. To shut down the application, we would need to delete the Deployment as well.





Scaling an application

In the previous sections, we created a Deployment, and then exposed it publicly via a Service. The Deployment created two Pods for running our application. When traffic increases, we will need to scale the application to keep up with user demand.

Scaling is accomplished by changing the number of replicas in a Deployment.

$ kubectl get deployments
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   2/2       2            2           18h    

AVAILABLE displays how many replicas of the application are available to users.

To see the ReplicaSet created by the Deployment, run kubectl get rs:

$ kubectl get rs
NAME                               DESIRED   CURRENT   READY     AGE
hello-node-deployment-5c6bb445fb   2         2         2         18h    

Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. The random string is randomly generated and uses the pod-template-hash as a seed.

Two important columns of this command are:

  1. DESIRED displays the desired number of replicas of the application, which we define when we create the Deployment. This is the desired state.
  2. CURRENT displays how many replicas are currently running.

Next, let's scale the Deployment to 3 replicas. We'll use the kubectl scale command, followed by the deployment type, name and desired number of instances:

$ kubectl scale deployment/hello-node-deployment --replicas=3
deployment.apps/hello-node-deployment scaled

$ kubectl get rs
NAME                               DESIRED   CURRENT   READY     AGE
hello-node-deployment-5c6bb445fb   3         3         3         18h

The change was applied, and we have 3 instances of the application available. Next, let's check if the number of Pods changed:

$ kubectl get pods -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE   READINESS GATES
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          18h       172.17.0.4   minikube   <none>           <none>
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          18h       172.17.0.7   minikube   <none>           <none>
hello-node-deployment-5c6bb445fb-vmp59   1/1       Running   0          2m48s     172.17.0.9   minikube   <none>           <none>    

There are 3 Pods now, with different IP addresses. The change was registered in the Deployment events log. To check that, use the describe command:

$ kubectl describe deployments/hello-node-deployment
Name:                   hello-node-deployment
Namespace:              default
...
Selector:               app=hello-node
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello-node
  Containers:
   hello-node:
    Image:        dockerbogo/echoserver:v1
    ...
NewReplicaSet:   hello-node-deployment-5c6bb445fb (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  5m    deployment-controller  Scaled up replica set hello-node-deployment-5c6bb445fb to 3    
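The RollingUpdateStrategy line above (25% max unavailable, 25% max surge) controls how many Pods may be replaced at once during an update. Kubernetes rounds maxSurge up and maxUnavailable down, so with 3 replicas at most 1 extra Pod is created and 0 may be unavailable. A quick sketch of that arithmetic:

```shell
replicas=3
# Kubernetes rounds maxSurge up and maxUnavailable down
max_surge=$(( (replicas * 25 + 99) / 100 ))   # ceil(3 * 0.25)  = 1
max_unavailable=$(( replicas * 25 / 100 ))    # floor(3 * 0.25) = 0
echo "surge=$max_surge unavailable=$max_unavailable"   # prints: surge=1 unavailable=0
```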






Load Balancing

Let's check if the Service is load-balancing the traffic. To find out the exposed IP and Port we can use the describe service as we learned in the previous sections.

Because we deleted our service earlier, we need to create it again:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d15h

$ kubectl expose deployment/hello-node-deployment --type="NodePort" --port 8080
service/hello-node-deployment exposed

$ kubectl get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-node-deployment   NodePort    10.108.150.219   <none>        8080:31090/TCP   3s
kubernetes              ClusterIP   10.96.0.1        <none>        443/TCP          2d15h    

Next, we'll do a curl to the exposed IP and port:

$ curl $(minikube ip):31090
Hello World!    

Execute the command multiple times; each request may hit a different Pod.
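To make several requests in one go, a simple loop works (port 31090 is the NodePort assigned above):

```shell
# Hit the service five times; responses may be served by different pods
for i in 1 2 3 4 5; do
  curl -s $(minikube ip):31090
done
```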





Scale down

To scale the Deployment down to 2 replicas, run the scale command again:

$ kubectl scale deployment/hello-node-deployment --replicas=2
deployment.apps/hello-node-deployment scaled

$ kubectl get rs
NAME                               DESIRED   CURRENT   READY     AGE
hello-node-deployment-5c6bb445fb   2         2         2         18h    






Updating an application

Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods will be scheduled on Nodes with available resources.

In Kubernetes, updates are versioned and any Deployment update can be reverted to a previous (stable) version.

Rolling updates allow the following actions:

  1. Promote an application from one environment to another (via container image updates)
  2. Rollback to previous versions
  3. Continuous Integration and Continuous Delivery of applications with zero downtime

$ kubectl get deployments
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   2/2       2            2           19h

$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
hello-node-deployment-5c6bb445fb-g25n5   1/1       Running   0          19h
hello-node-deployment-5c6bb445fb-lwxvc   1/1       Running   0          19h

To view the current image version of the app, run a describe command against the Pods (look at the Image field):

$ kubectl describe pods
Name:               hello-node-deployment-5c6bb445fb-g25n5
Namespace:          default
...
Containers:
  hello-node:
    Container ID:   docker://85098d12fb415be7a6d958d9c91b476d15fa66cf42f349b8d9a9f603e34cf88a
    Image:          dockerbogo/echoserver:v1
    Image ID:       docker-pullable://dockerbogo/echoserver@sha256:eb09a387eb751fd7a00fb59de8117c7084e88350a3e4246ae40ebf4613e6e55c    
...

To update the image of the application to version 2, use the set image command, followed by the deployment name and the container=image:version pair:

$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
dockerbogo/echoserver   v1                  f42b5d3cbd29        26 hours ago        660MB

$ docker tag f42b5d3cbd29 dockerbogo/echoserver:v2

$ docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
dockerbogo/echoserver   v1                  f42b5d3cbd29        26 hours ago        660MB
dockerbogo/echoserver   v2                  f42b5d3cbd29        26 hours ago        660MB

$ docker push dockerbogo/echoserver
The push refers to repository [docker.io/dockerbogo/echoserver]
...
v1: digest: sha256:eb09a387eb751fd7a00fb59de8117c7084e88350a3e4246ae40ebf4613e6e55c size: 2214
...
v2: digest: sha256:eb09a387eb751fd7a00fb59de8117c7084e88350a3e4246ae40ebf4613e6e55c size: 2214

$ kubectl set image deployments/hello-node-deployment \
hello-node=dockerbogo/echoserver:v2  
deployment.apps/hello-node-deployment image updated

The command notified the Deployment to use a different image for our app and initiated a rolling update. (Note that for this demonstration we simply re-tagged the existing image as v2, which is why the v1 and v2 digests above are identical.)

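What the Deployment controller does during such an update can be sketched with a toy model (illustrative only, not the real controller logic): it scales a new ReplicaSet up and the old one down in steps, bounded by the maxSurge and maxUnavailable budgets, which both default to 25% (surge rounds up, unavailability rounds down):

```python
import math

def rollout_steps(replicas, max_surge_pct=0.25, max_unavailable_pct=0.25):
    """Toy model of a Deployment rolling update (illustrative only).

    Returns a list of (old_pods, new_pods) after each reconciliation
    step, never exceeding replicas + maxSurge total Pods and never
    dropping below replicas - maxUnavailable available Pods.
    """
    max_surge = math.ceil(replicas * max_surge_pct)               # rounds up
    max_unavailable = math.floor(replicas * max_unavailable_pct)  # rounds down
    assert max_surge > 0 or max_unavailable > 0, "update could never progress"
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # scale the new ReplicaSet up, within the surge budget
        new = min(replicas, replicas + max_surge - old)
        # once the new Pods are ready, scale the old ReplicaSet down,
        # keeping at least (replicas - maxUnavailable) Pods available
        old = max(0, replicas - max_unavailable - new)
        steps.append((old, new))
    return steps

# With 2 replicas the defaults give maxSurge=1, maxUnavailable=0,
# so the update proceeds one surged Pod at a time.
print(rollout_steps(2))   # [(1, 1), (0, 2)]
```

This is why our two-replica deployment stays fully available throughout the update: a third, surged Pod carries the new image before any old Pod is removed.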
Check the status of the new Pods:

$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
hello-node-deployment-699574469-8nxpn   1/1       Running   0          6m15s
hello-node-deployment-699574469-c2245   1/1       Running   0          6m20s  

$ kubectl describe pods 
Name:               hello-node-deployment-699574469-8nxpn
Namespace:          default
...
Status:             Running
IP:                 172.17.0.9
Controlled By:      ReplicaSet/hello-node-deployment-699574469
Containers:
  hello-node:
    Container ID:   docker://874846f8d57d9c29b1d3c38003393773bc283356a09be9ae7f6076fbdc7b062c
    Image:          dockerbogo/echoserver:v2
    Image ID:       docker-pullable://dockerbogo/echoserver@sha256:eb09a387eb751fd7a00fb59de8117c7084e88350a3e4246ae40ebf4613e6e55c
...

The update can also be confirmed by running the rollout status command:

$ kubectl rollout status deployments/hello-node-deployment
deployment "hello-node-deployment" successfully rolled out    






Rolling back an update

When we introduce a change that breaks production, we should have a plan to roll back that change.

Kubernetes has a built-in rollback mechanism: kubectl can roll back changes to resources such as Deployments, StatefulSets, and DaemonSets.

Since replicas is a field in the Deployment, we might be tempted to conclude that it is the Deployment's job to count the Pods and create or delete them. However, this is not the case: the Deployment delegates managing Pods to another component, the ReplicaSet.

Every time we create a Deployment, the Deployment creates a ReplicaSet and delegates creating (and deleting) the Pods to it.

replicaset-deployment.png

Source: Deployments, ReplicaSets, and pods


The sole responsibility of the ReplicaSet is to maintain the desired number of Pods, while the Deployment manages ReplicaSets and orchestrates the rolling update.

rollback_rollout.png

Source: Deployments, ReplicaSets, and pods

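These delegation and rollback mechanics can be sketched with a toy model (illustrative only; the class and method names are made up, and the real controllers are far more involved): each Pod-template change creates a new ReplicaSet, and undoing a rollout simply scales a previous ReplicaSet back up rather than rebuilding anything.

```python
class ReplicaSet:
    """Toy ReplicaSet: keeps `replicas` Pods of a single image running."""
    def __init__(self, image, replicas):
        self.image = image
        self.replicas = replicas

class Deployment:
    """Toy Deployment: owns one ReplicaSet per Pod-template change."""
    def __init__(self, image, replicas):
        self.replicas = replicas
        self.history = [ReplicaSet(image, replicas)]

    @property
    def active(self):
        return self.history[-1]

    def set_image(self, image):
        # A template change creates a *new* ReplicaSet; the old one is
        # scaled to zero but kept around in the revision history.
        self.active.replicas = 0
        self.history.append(ReplicaSet(image, self.replicas))

    def rollout_undo(self):
        # Rollback scales the previous ReplicaSet back up instead of
        # recreating it. (The real controller keeps the failed revision
        # in history too; this toy just discards it.)
        broken = self.history.pop()
        broken.replicas = 0
        self.active.replicas = self.replicas

d = Deployment("dockerbogo/echoserver:v2", replicas=4)
d.set_image("dockerbogo/echoserver:v3")   # bad tag: Pods never become ready
d.rollout_undo()
print(d.active.image)   # dockerbogo/echoserver:v2
```

This is also why the Pods that appear after an undo belong to the same ReplicaSet hash (699574469 in the output below) as before the failed update.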

Let's perform another update and deploy an image tagged v3, which does not exist:

$ kubectl scale deployment/hello-node-deployment --replicas=4
deployment.apps/hello-node-deployment scaled

$ kubectl get deployments
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   4/4       4            4           104s

$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
hello-node-deployment-699574469-lb8dj   1/1       Running   0          23s
hello-node-deployment-699574469-t8cjb   1/1       Running   0          62s
hello-node-deployment-699574469-tg8b9   1/1       Running   0          23s
hello-node-deployment-699574469-wbn8s   1/1       Running   0          62s

$ kubectl set image deployment/hello-node-deployment \
hello-node=dockerbogo/echoserver:v3
deployment.apps/hello-node-deployment image updated

$ kubectl get deployments
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   3/4       2            3           2m44s

$ kubectl get pods
NAME                                     READY     STATUS         RESTARTS   AGE
hello-node-deployment-699574469-t8cjb    1/1       Running        0          3m10s
hello-node-deployment-699574469-tg8b9    1/1       Running        0          2m31s
hello-node-deployment-699574469-wbn8s    1/1       Running        0          3m10s
hello-node-deployment-6ff6f5d986-6xfnl   0/1       ErrImagePull   0          34s
hello-node-deployment-6ff6f5d986-gg9qc   0/1       ErrImagePull   0          34s

Something is wrong: the new Pods are stuck in ErrImagePull because there is no image tagged v3 in the repository. Let's roll back to our previously working version.

We'll use the rollout undo command:

$ kubectl rollout undo deployment/hello-node-deployment   
deployment.apps/hello-node-deployment rolled back

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
hello-node-deployment-699574469-6nn4l   1/1     Running   0          2m6s
hello-node-deployment-699574469-k7tk2   1/1     Running   0          13s
hello-node-deployment-699574469-rzsqc   1/1     Running   0          2m6s
hello-node-deployment-699574469-w7dmn   1/1     Running   0          2m6s

$ kubectl describe pods hello-node-deployment-699574469-6nn4l 
Name:         hello-node-deployment-699574469-6nn4l
Namespace:    default
...
Status:       Running
IP:           172.17.0.9
IPs:
  IP:           172.17.0.9
Controlled By:  ReplicaSet/hello-node-deployment-699574469
Containers:
  hello-node:
    Container ID:   docker://a833a6b565e5df4d98cea8fddd0990d76367db6f53bed142911a87279b89be4a
    Image:          dockerbogo/echoserver:v2
...

$ kubectl get deployment
NAME                    READY     UP-TO-DATE   AVAILABLE   AGE
hello-node-deployment   4/4       4            4           5m48s



Docker & K8s

  1. Docker install on Amazon Linux AMI
  2. Docker install on EC2 Ubuntu 14.04
  3. Docker container vs Virtual Machine
  4. Docker install on Ubuntu 14.04
  5. Docker Hello World Application
  6. Nginx image - share/copy files, Dockerfile
  7. Working with Docker images : brief introduction
  8. Docker image and container via docker commands (search, pull, run, ps, restart, attach, and rm)
  9. More on docker run command (docker run -it, docker run --rm, etc.)
  10. Docker Networks - Bridge Driver Network
  11. Docker Persistent Storage
  12. File sharing between host and container (docker run -d -p -v)
  13. Linking containers and volume for datastore
  14. Dockerfile - Build Docker images automatically I - FROM, MAINTAINER, and build context
  15. Dockerfile - Build Docker images automatically II - revisiting FROM, MAINTAINER, build context, and caching
  16. Dockerfile - Build Docker images automatically III - RUN
  17. Dockerfile - Build Docker images automatically IV - CMD
  18. Dockerfile - Build Docker images automatically V - WORKDIR, ENV, ADD, and ENTRYPOINT
  19. Docker - Apache Tomcat
  20. Docker - NodeJS
  21. Docker - NodeJS with hostname
  22. Docker Compose - NodeJS with MongoDB
  23. Docker - Prometheus and Grafana with Docker-compose
  24. Docker - StatsD/Graphite/Grafana
  25. Docker - Deploying a Java EE JBoss/WildFly Application on AWS Elastic Beanstalk Using Docker Containers
  26. Docker : NodeJS with GCP Kubernetes Engine
  27. Docker : Jenkins Multibranch Pipeline with Jenkinsfile and Github
  28. Docker : Jenkins Master and Slave
  29. Docker - ELK : ElasticSearch, Logstash, and Kibana
  30. Docker - ELK 7.6 : Elasticsearch on Centos 7
  31. Docker - ELK 7.6 : Filebeat on Centos 7
  32. Docker - ELK 7.6 : Logstash on Centos 7
  33. Docker - ELK 7.6 : Kibana on Centos 7
  34. Docker - ELK 7.6 : Elastic Stack with Docker Compose
  35. Docker - Deploy Elastic Cloud on Kubernetes (ECK) via Elasticsearch operator on minikube
  36. Docker - Deploy Elastic Stack via Helm on minikube
  37. Docker Compose - A gentle introduction with WordPress
  38. Docker Compose - MySQL
  39. MEAN Stack app on Docker containers : micro services
  40. MEAN Stack app on Docker containers : micro services via docker-compose
  41. Docker Compose - Hashicorp's Vault and Consul Part A (install vault, unsealing, static secrets, and policies)
  42. Docker Compose - Hashicorp's Vault and Consul Part B (EaaS, dynamic secrets, leases, and revocation)
  43. Docker Compose - Hashicorp's Vault and Consul Part C (Consul)
  44. Docker Compose with two containers - Flask REST API service container and an Apache server container
  45. Docker compose : Nginx reverse proxy with multiple containers
  46. Docker & Kubernetes : Envoy - Getting started
  47. Docker & Kubernetes : Envoy - Front Proxy
  48. Docker & Kubernetes : Ambassador - Envoy API Gateway on Kubernetes
  49. Docker Packer
  50. Docker Cheat Sheet
  51. Docker Q & A #1
  52. Kubernetes Q & A - Part I
  53. Kubernetes Q & A - Part II
  54. Docker - Run a React app in a docker
  55. Docker - Run a React app in a docker II (snapshot app with nginx)
  56. Docker - NodeJS and MySQL app with React in a docker
  57. Docker - Step by Step NodeJS and MySQL app with React - I
  58. Installing LAMP via puppet on Docker
  59. Docker install via Puppet
  60. Nginx Docker install via Ansible
  61. Apache Hadoop CDH 5.8 Install with QuickStarts Docker
  62. Docker - Deploying Flask app to ECS
  63. Docker Compose - Deploying WordPress to AWS
  64. Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI EC2 type)
  65. Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI Fargate type)
  66. Docker - ECS Fargate
  67. Docker - AWS ECS service discovery with Flask and Redis
  68. Docker & Kubernetes : minikube
  69. Docker & Kubernetes 2 : minikube Django with Postgres - persistent volume
  70. Docker & Kubernetes 3 : minikube Django with Redis and Celery
  71. Docker & Kubernetes 4 : Django with RDS via AWS Kops
  72. Docker & Kubernetes : Kops on AWS
  73. Docker & Kubernetes : Ingress controller on AWS with Kops
  74. Docker & Kubernetes : HashiCorp's Vault and Consul on minikube
  75. Docker & Kubernetes : HashiCorp's Vault and Consul - Auto-unseal using Transit Secrets Engine
  76. Docker & Kubernetes : Persistent Volumes & Persistent Volumes Claims - hostPath and annotations
  77. Docker & Kubernetes : Persistent Volumes - Dynamic volume provisioning
  78. Docker & Kubernetes : DaemonSet
  79. Docker & Kubernetes : Secrets
  80. Docker & Kubernetes : kubectl command
  81. Docker & Kubernetes : Assign a Kubernetes Pod to a particular node in a Kubernetes cluster
  82. Docker & Kubernetes : Configure a Pod to Use a ConfigMap
  83. AWS : EKS (Elastic Container Service for Kubernetes)
  84. Docker & Kubernetes : Run a React app in a minikube
  85. Docker & Kubernetes : Minikube install on AWS EC2
  86. Docker & Kubernetes : Cassandra with a StatefulSet
  87. Docker & Kubernetes : Terraform and AWS EKS
  88. Docker & Kubernetes : Pods and Service definitions
  89. Docker & Kubernetes : Service IP and the Service Type
  90. Docker & Kubernetes : Kubernetes DNS with Pods and Services
  91. Docker & Kubernetes : Headless service and discovering pods
  92. Docker & Kubernetes : Scaling and Updating application
  93. Docker & Kubernetes : Horizontal pod autoscaler on minikubes
  94. Docker & Kubernetes : From a monolithic app to micro services on GCP Kubernetes
  95. Docker & Kubernetes : Rolling updates
  96. Docker & Kubernetes : Deployments to GKE (Rolling update, Canary and Blue-green deployments)
  97. Docker & Kubernetes : Slack Chat Bot with NodeJS on GCP Kubernetes
  98. Docker & Kubernetes : Continuous Delivery with Jenkins Multibranch Pipeline for Dev, Canary, and Production Environments on GCP Kubernetes
  99. Docker & Kubernetes : NodePort vs LoadBalancer vs Ingress
  100. Docker & Kubernetes : MongoDB / MongoExpress on Minikube
  101. Docker & Kubernetes : Load Testing with Locust on GCP Kubernetes
  102. Docker & Kubernetes : MongoDB with StatefulSets on GCP Kubernetes Engine
  103. Docker & Kubernetes : Nginx Ingress Controller on Minikube
  104. Docker & Kubernetes : Setting up Ingress with NGINX Controller on Minikube (Mac)
  105. Docker & Kubernetes : Nginx Ingress Controller for Dashboard service on Minikube
  106. Docker & Kubernetes : Nginx Ingress Controller on GCP Kubernetes
  107. Docker & Kubernetes : Kubernetes Ingress with AWS ALB Ingress Controller in EKS
  108. Docker & Kubernetes : Setting up a private cluster on GCP Kubernetes
  109. Docker & Kubernetes : Kubernetes Namespaces (default, kube-public, kube-system) and switching namespaces (kubens)
  110. Docker & Kubernetes : StatefulSets on minikube
  111. Docker & Kubernetes : RBAC
  112. Docker & Kubernetes Service Account, RBAC, and IAM
  113. Docker & Kubernetes - Kubernetes Service Account, RBAC, IAM with EKS ALB, Part 1
  114. Docker & Kubernetes : Helm Chart
  115. Docker & Kubernetes : My first Helm deploy
  116. Docker & Kubernetes : Readiness and Liveness Probes
  117. Docker & Kubernetes : Helm chart repository with Github pages
  118. Docker & Kubernetes : Deploying WordPress and MariaDB with Ingress to Minikube using Helm Chart
  119. Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 2 Chart
  120. Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 3 Chart
  121. Docker & Kubernetes : Helm Chart for Node/Express and MySQL with Ingress
  122. Docker & Kubernetes : Deploy Prometheus and Grafana using Helm and Prometheus Operator - Monitoring Kubernetes node resources out of the box
  123. Docker & Kubernetes : Deploy Prometheus and Grafana using kube-prometheus-stack Helm Chart
  124. Docker & Kubernetes : Istio (service mesh) sidecar proxy on GCP Kubernetes
  125. Docker & Kubernetes : Istio on EKS
  126. Docker & Kubernetes : Istio on Minikube with AWS EC2 for Bookinfo Application
  127. Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part I)
  128. Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part II - Prometheus, Grafana, pin a service, split traffic, and inject faults)
  129. Docker & Kubernetes : Helm Package Manager with MySQL on GCP Kubernetes Engine
  130. Docker & Kubernetes : Deploying Memcached on Kubernetes Engine
  131. Docker & Kubernetes : EKS Control Plane (API server) Metrics with Prometheus
  132. Docker & Kubernetes : Spinnaker on EKS with Halyard
  133. Docker & Kubernetes : Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine
  134. Docker & Kubernetes : Multi-node Local Kubernetes cluster : Kubeadm-dind (docker-in-docker)
  135. Docker & Kubernetes : Multi-node Local Kubernetes cluster : Kubeadm-kind (k8s-in-docker)
  136. Docker & Kubernetes : nodeSelector, nodeAffinity, taints/tolerations, pod affinity and anti-affinity - Assigning Pods to Nodes
  137. Docker & Kubernetes : Jenkins-X on EKS
  138. Docker & Kubernetes : ArgoCD App of Apps with Heml on Kubernetes
  139. Docker & Kubernetes : ArgoCD on Kubernetes cluster
  140. Docker & Kubernetes : GitOps with ArgoCD for Continuous Delivery to Kubernetes clusters (minikube) - guestbook


Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization

My YouTube channel

Sponsor Open Source development activities and free contents for everyone.

Thank you.

- K Hong








Apache Spark 2.0.2 tutorial with PySpark : RDD

Apache Spark 2.0.0 tutorial with PySpark : Analyzing Neuroimaging Data with Thunder

Apache Spark Streaming with Kafka and Cassandra

Apache Spark 1.2 with PySpark (Spark Python API) Wordcount using CDH5

Apache Spark 1.2 Streaming

Apache Drill with ZooKeeper install on Ubuntu 16.04 - Embedded & Distributed

Apache Drill - Query File System, JSON, and Parquet

Apache Drill - HBase query

Apache Drill - Hive query

Apache Drill - MongoDB query





Redis In-Memory Database



Redis vs Memcached

Redis 3.0.1 Install

Setting up multiple server instances on a Linux host

Redis with Python

ELK : Elasticsearch with Redis broker and Logstash Shipper and Indexer



GCP (Google Cloud Platform)



GCP: Creating an Instance

GCP: gcloud compute command-line tool

GCP: Deploying Containers

GCP: Kubernetes Quickstart

GCP: Deploying a containerized web application via Kubernetes

GCP: Django Deploy via Kubernetes I (local)

GCP: Django Deploy via Kubernetes II (GKE)





AWS (Amazon Web Services)



AWS : EKS (Elastic Container Service for Kubernetes)

AWS : Creating a snapshot (cloning an image)

AWS : Attaching Amazon EBS volume to an instance

AWS : Adding swap space to an attached volume via mkswap and swapon

AWS : Creating an EC2 instance and attaching Amazon EBS volume to the instance using Python boto module with User data

AWS : Creating an instance to a new region by copying an AMI

AWS : S3 (Simple Storage Service) 1

AWS : S3 (Simple Storage Service) 2 - Creating and Deleting a Bucket

AWS : S3 (Simple Storage Service) 3 - Bucket Versioning

AWS : S3 (Simple Storage Service) 4 - Uploading a large file

AWS : S3 (Simple Storage Service) 5 - Uploading folders/files recursively

AWS : S3 (Simple Storage Service) 6 - Bucket Policy for File/Folder View/Download

AWS : S3 (Simple Storage Service) 7 - How to Copy or Move Objects from one region to another

AWS : S3 (Simple Storage Service) 8 - Archiving S3 Data to Glacier

AWS : Creating a CloudFront distribution with an Amazon S3 origin

AWS : Creating VPC with CloudFormation

WAF (Web Application Firewall) with preconfigured CloudFormation template and Web ACL for CloudFront distribution

AWS : CloudWatch & Logs with Lambda Function / S3

AWS : Lambda Serverless Computing with EC2, CloudWatch Alarm, SNS

AWS : Lambda and SNS - cross account

AWS : CLI (Command Line Interface)

AWS : CLI (ECS with ALB & autoscaling)

AWS : ECS with cloudformation and json task definition

AWS : AWS Application Load Balancer (ALB) and ECS with Flask app

AWS : Load Balancing with HAProxy (High Availability Proxy)

AWS : VirtualBox on EC2

AWS : NTP setup on EC2

AWS: jq with AWS

AWS : AWS & OpenSSL : Creating / Installing a Server SSL Certificate

AWS : OpenVPN Access Server 2 Install

AWS : VPC (Virtual Private Cloud) 1 - netmask, subnets, default gateway, and CIDR

AWS : VPC (Virtual Private Cloud) 2 - VPC Wizard

AWS : VPC (Virtual Private Cloud) 3 - VPC Wizard with NAT

AWS : DevOps / Sys Admin Q & A (VI) - AWS VPC setup (public/private subnets with NAT)

AWS : OpenVPN Protocols : PPTP, L2TP/IPsec, and OpenVPN

AWS : Autoscaling group (ASG)

AWS : Setting up Autoscaling Alarms and Notifications via CLI and Cloudformation

AWS : Adding a SSH User Account on Linux Instance

AWS : Windows Servers - Remote Desktop Connections using RDP

AWS : Scheduled stopping and starting an instance - python & cron

AWS : Detecting stopped instance and sending an alert email using Mandrill smtp

AWS : Elastic Beanstalk with NodeJS

AWS : Elastic Beanstalk Inplace/Rolling Blue/Green Deploy

AWS : Identity and Access Management (IAM) Roles for Amazon EC2

AWS : Identity and Access Management (IAM) Policies, sts AssumeRole, and delegate access across AWS accounts

AWS : Identity and Access Management (IAM) sts assume role via aws cli2

AWS : Creating IAM Roles and associating them with EC2 Instances in CloudFormation

AWS Identity and Access Management (IAM) Roles, SSO(Single Sign On), SAML(Security Assertion Markup Language), IdP(identity provider), STS(Security Token Service), and ADFS(Active Directory Federation Services)

AWS : Amazon Route 53

AWS : Amazon Route 53 - DNS (Domain Name Server) setup

AWS : Amazon Route 53 - subdomain setup and virtual host on Nginx

AWS Amazon Route 53 : Private Hosted Zone

AWS : SNS (Simple Notification Service) example with ELB and CloudWatch

AWS : Lambda with AWS CloudTrail

AWS : SQS (Simple Queue Service) with NodeJS and AWS SDK

AWS : Redshift data warehouse

AWS : CloudFormation - templates, change sets, and CLI

AWS : CloudFormation Bootstrap UserData/Metadata

AWS : CloudFormation - Creating an ASG with rolling update

AWS : Cloudformation Cross-stack reference

AWS : OpsWorks

AWS : Network Load Balancer (NLB) with Autoscaling group (ASG)

AWS CodeDeploy : Deploy an Application from GitHub

AWS EC2 Container Service (ECS)

AWS EC2 Container Service (ECS) II

AWS Hello World Lambda Function

AWS Lambda Function Q & A

AWS Node.js Lambda Function & API Gateway

AWS API Gateway endpoint invoking Lambda function

AWS API Gateway invoking Lambda function with Terraform

AWS API Gateway invoking Lambda function with Terraform - Lambda Container

Amazon Kinesis Streams

Kinesis Data Firehose with Lambda and ElasticSearch

Amazon DynamoDB

Amazon DynamoDB with Lambda and CloudWatch

Loading DynamoDB stream to AWS Elasticsearch service with Lambda

Amazon ML (Machine Learning)

Simple Systems Manager (SSM)

AWS : RDS Connecting to a DB Instance Running the SQL Server Database Engine

AWS : RDS Importing and Exporting SQL Server Data

AWS : RDS PostgreSQL & pgAdmin III

AWS : RDS PostgreSQL 2 - Creating/Deleting a Table

AWS : MySQL Replication : Master-slave

AWS : MySQL backup & restore

AWS RDS : Cross-Region Read Replicas for MySQL and Snapshots for PostgreSQL

AWS : Restoring Postgres on EC2 instance from S3 backup

AWS : Q & A

AWS : Security

AWS : Security groups vs. network ACLs

AWS : Scaling-Up

AWS : Networking

AWS : Single Sign-on (SSO) with Okta

AWS : JIT (Just-in-Time) with Okta





Powershell 4 Tutorial



Powersehll : Introduction

Powersehll : Help System

Powersehll : Running commands

Powersehll : Providers

Powersehll : Pipeline

Powersehll : Objects

Powershell : Remote Control

Windows Management Instrumentation (WMI)

How to Enable Multiple RDP Sessions in Windows 2012 Server

How to install and configure FTP server on IIS 8 in Windows 2012 Server

How to Run Exe as a Service on Windows 2012 Server

SQL Inner, Left, Right, and Outer Joins





Git/GitHub Tutorial



One page express tutorial for GIT and GitHub

Installation

add/status/log

commit and diff

git commit --amend

Deleting and Renaming files

Undoing Things : File Checkout & Unstaging

Reverting commit

Soft Reset - (git reset --soft <SHA key>)

Mixed Reset - Default

Hard Reset - (git reset --hard <SHA key>)

Creating & switching Branches

Fast-forward merge

Rebase & Three-way merge

Merge conflicts with a simple example

GitHub Account and SSH

Uploading to GitHub

GUI

Branching & Merging

Merging conflicts

GIT on Ubuntu and OS X - Focused on Branching

Setting up a remote repository / pushing local project and cloning the remote repo

Fork vs Clone, Origin vs Upstream

Git/GitHub Terminologies

Git/GitHub via SourceTree II : Branching & Merging

Git/GitHub via SourceTree III : Git Work Flow

Git/GitHub via SourceTree IV : Git Reset

Git wiki - quick command reference






Subversion

Subversion Install On Ubuntu 14.04

Subversion creating and accessing I

Subversion creating and accessing II








Contact

BogoToBogo
contactus@bogotobogo.com

Follow Bogotobogo

YouTube: My YouTube channel
Pacific Ave, San Francisco, CA 94115

Copyright © 2024, bogotobogo
Design: Web Master