Kubernetes Q and A - Part I


  1. Describe the steps from packaging container images to running containers.
    To run an application in Kubernetes, we first need to package it up into one or more container images, push those images to an image registry, and then post a description of our app to the Kubernetes API server.
    The description includes information such as the container image or images that contain our application components, how those components are related to each other, and which ones need to be run co-located (together on the same node) and which don’t.
    For each component, we can also specify how many replicas we want to run. Additionally, the description also includes which of those components provide a service to either internal or external clients and should be exposed through a single IP address and made discoverable to the other components.
    When the API server processes our app's description, the Scheduler schedules the specified groups of containers onto the available worker nodes based on computational resources required by each group and the unallocated resources on each node at that moment.
    The Kubelet on those nodes then instructs the Container Runtime (Docker, rkt) to pull the required container images and run the containers.
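
    As a hedged sketch of those steps (reusing the dockerbogo/bogo image used throughout this page; bogo-deployment.yaml is a hypothetical descriptor file):

    $ docker build -t dockerbogo/bogo .       # package the app into a container image
    $ docker push dockerbogo/bogo             # push the image to an image registry
    $ kubectl apply -f bogo-deployment.yaml   # post the app's description to the API server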

  2. Why do we even need pods? Why can't we use containers directly?
    Containers are designed to run only a single process per container (unless the process itself spawns child processes). If we run multiple unrelated processes in a single container, it is our responsibility to keep all those processes running, manage their logs, and so on.
    For example, we'd have to include a mechanism for automatically restarting individual processes if they crash. Also, all those processes would log to the same standard output, so we'd have a hard time figuring out what process logged what.
    Therefore, we need to run each process in its own container. That's how Docker and Kubernetes are meant to be used.
    All containers of a pod run under the same Network namespace (so they share network interfaces, and hence the same IP address and port space) and the same UTS (UNIX Time Sharing) namespace (so they share the same hostname).
    Because containers in a pod share the same Network namespace, processes running in different containers of the same pod must take care not to bind to the same port numbers, or they'll run into port conflicts.
    All the containers in a pod also have the same loopback network interface, so a container can communicate with other containers in the same pod through localhost.
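
    For illustration, here is a minimal sketch (the sidecar image and command are assumptions, not part of the original example) of a two-container pod; because both containers share the pod's network namespace, the sidecar can reach the main container through localhost:

    apiVersion: v1
    kind: Pod
    metadata:
      name: bogo-two-containers
    spec:
      containers:
      - name: main
        image: dockerbogo/bogo            # serves on port 8080
        ports:
        - containerPort: 8080
      - name: sidecar
        image: curlimages/curl            # can reach "main" via http://localhost:8080
        command: ["sleep", "3600"]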

  3. Create a simple YAML descriptor for a pod and then create a pod.

    Here is a pod descriptor file bogo-manual.yaml:

    apiVersion: v1               
    kind: Pod                    
    metadata:
      name: bogo-manual         
    spec:
      containers:
      - image: dockerbogo/bogo       
        name: bogo              
        ports:
        - containerPort: 8080    
          protocol: TCP    
    

    It conforms to the v1 version of the Kubernetes API. The type of resource we're describing is a pod, with the name bogo-manual. The pod consists of a single container based on the dockerbogo/bogo image. The container is given a name and it's listening on port 8080.

    We can use kubectl explain pods to get descriptions about pods:

    $ kubectl explain pods  
    KIND:     Pod
    VERSION:  v1
    
    DESCRIPTION:
         Pod is a collection of containers that can run on a host. This resource is
         created by clients and scheduled onto hosts.
    
    FIELDS:
       apiVersion <string>
         APIVersion defines the versioned schema of this representation of an
         object. Servers should convert recognized schemas to the latest internal
         value, and may reject unrecognized values. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
    
       kind	<string>
         Kind is a string value representing the REST resource this object
         represents. Servers may infer this from the endpoint the client submits
         requests to. Cannot be updated. In CamelCase. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
    
       metadata <Object>
         Standard object's metadata. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    
       spec <Object>
         Specification of the desired behavior of the pod. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    
       status <Object>
         Most recently observed status of the pod. This data may not be up to date.
         Populated by the system. Read-only. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    

    We can then drill deeper to find out more about each attribute, for example, pod.spec attribute, with kubectl explain pod.spec command:

    $ kubectl explain pod.spec
    KIND:     Pod
    VERSION:  v1
    
    RESOURCE: spec <Object>
    
    DESCRIPTION:
         Specification of the desired behavior of the pod. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    
         PodSpec is a description of a pod.
         ...
    

    To create the pod from our YAML file, we need to use the kubectl create command:

    $ kubectl create -f bogo-manual.yaml
    pod/bogo-manual created
    
    $ kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    bogo-manual   1/1     Running   0          1m
    

    After creating the pod, we can ask Kubernetes for the full YAML of the pod. We'll see it's similar to the YAML we saw earlier, but additional fields appear in the returned definition. Go ahead and use the following command to see the full descriptor of the pod:

    $ kubectl get pod bogo-manual -o yaml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: "2021-03-20T21:10:38Z"
      managedFields:
      ...    
    

    To get JSON instead of YAML, we can use kubectl get po bogo-manual -o json.
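
    As a related aside, the jsonpath output format lets us extract a single field, for example the pod's IP address:

    $ kubectl get po bogo-manual -o jsonpath='{.status.podIP}'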

  4. How can we talk to a specific pod without going through a service?
    We can use the kubectl port-forward command, which runs a proxy on localhost (here, localhost:8888).

    The pod is now running:
    $ kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    bogo-manual   1/1     Running   0          16m    
    

    But how can we see it in action?
    We could use the kubectl expose command to create a service and gain access to the pod externally, but there are other ways of connecting to a pod for testing and debugging purposes. One of them is the kubectl port-forward command. The following command forwards our machine's local port 8888 to port 8080 of our bogo-manual pod:

    $ kubectl port-forward bogo-manual 8888:8080
    Forwarding from 127.0.0.1:8888 -> 8080
    Forwarding from [::1]:8888 -> 8080
    

    The port forwarder is running and we can now connect to our pod through the local port.

    In a different terminal, we can now use curl to send an HTTP request to our pod through the kubectl port-forward proxy running on localhost:8888:

    $ curl localhost:8888
    You've hit bogo-manual
    

    Using port forwarding like this is an effective way to test an individual pod.

  5. Create/delete a pod with labels.
    We want to create a new pod with two labels using bogo-manual-with-labels.yaml:
    apiVersion: v1
    kind: Pod
    metadata:
      name: bogo-manual-v2
      labels:
        creation_method: manual
        env: prod
    spec:
      containers:
      - image: dockerbogo/bogo
        name: bogo
        ports:
        - containerPort: 8080
          protocol: TCP    
    

    We've included the labels creation_method=manual and env=prod in the metadata.labels section. Let's create it:

    $ kubectl create -f bogo-manual-with-labels.yaml
    pod/bogo-manual-v2 created
    
    $ kubectl get pods
    NAME             READY   STATUS    RESTARTS   AGE
    bogo-manual      1/1     Running   0          148m
    bogo-manual-v2   1/1     Running   0          11m
    

    Instead of listing all labels, if we're only interested in certain labels, we can specify them with the -L switch and have each displayed in its own column. List pods again and show the columns for the two labels we attached to our bogo-manual-v2 pod:

    $ kubectl get pods -L creation_method,env
    NAME             READY   STATUS    RESTARTS   AGE    CREATION_METHOD   ENV
    bogo-manual      1/1     Running   0          153m                     
    bogo-manual-v2   1/1     Running   0          16m    manual            prod    
    

    To list pods using a label selector:
    $ kubectl get pods
    NAME             READY   STATUS    RESTARTS   AGE
    bogo-manual      1/1     Running   0          3h21m
    bogo-manual-v2   1/1     Running   0          64m
    
    $ kubectl get pod -l creation_method=manual
    NAME             READY   STATUS    RESTARTS   AGE
    bogo-manual-v2   1/1     Running   0          64m    
    

    To list all pods that include the env label, whatever its value is:
    $ kubectl get pod -l env
    NAME             READY   STATUS    RESTARTS   AGE
    bogo-manual-v2   1/1     Running   0          67m    
    

    To list pods that don't have the env label:
    $ kubectl get pod -l '!env'
    NAME          READY   STATUS    RESTARTS   AGE
    bogo-manual   1/1     Running   0          3h28m
    
    $ kubectl get pod --show-labels
    NAME             READY   STATUS    RESTARTS   AGE     LABELS
    bogo-manual      1/1     Running   0          6h8m    <none>
    bogo-manual-v2   1/1     Running   0          3h51m   creation_method=manual,env=prod
    
    $ kubectl delete pod -l env=prod
    pod "bogo-manual-v2" deleted
    
    $ kubectl get pod --show-labels
    NAME             READY   STATUS    RESTARTS   AGE     LABELS
    bogo-manual      1/1     Running   0          6h8m    <none>
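
    We can also add or change labels on existing pods with the kubectl label command (a quick sketch; adding --overwrite is required only when changing a label that already exists):

    $ kubectl label pod bogo-manual env=debug
    pod/bogo-manual labeled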
    



  6. What are namespaces?
    Using multiple namespaces allows us to split complex systems with numerous components into smaller distinct groups.
    They can also be used for separating resources in a multi-tenant environment, splitting up resources into prod, dev, and QA environments.
    To list all namespaces in the cluster:
    $ kubectl get ns
    NAME              STATUS   AGE
    default           Active   20h
    kube-node-lease   Active   20h
    kube-public       Active   20h
    kube-system       Active   20h     
    

    Up until now, we've operated only in the default namespace. When listing resources with the kubectl get command, we never specified the namespace explicitly, so kubectl always defaulted to the default namespace, showing us only the objects in that namespace.
    But as we can see from the list, the kube-public and the kube-system namespaces also exist.

    To look at the pods that belong to the kube-system namespace, we need to tell kubectl to list pods in that namespace only:
    $ kubectl get pods --namespace kube-system
    NAME                               READY   STATUS    RESTARTS   AGE
    coredns-f9fd979d6-9xcsw            1/1     Running   2          20h
    etcd-minikube                      1/1     Running   1          4h13m
    kube-apiserver-minikube            1/1     Running   1          4h13m
    kube-controller-manager-minikube   1/1     Running   2          20h
    kube-proxy-nmsrh                   1/1     Running   2          20h
    kube-scheduler-minikube            1/1     Running   2          20h
    storage-provisioner                1/1     Running   5          20h    
    

    Note that we can also use -n instead of --namespace.
    Namespaces enable us to separate resources that don't belong together into non-overlapping groups. If several users or groups of users are using the same Kubernetes cluster, and they each manage their own distinct set of resources, they should each use their own namespace.
    A namespace is a Kubernetes resource like any other, so we can create it by posting a YAML file to the Kubernetes API server.

    Let's create a bogo-namespace.yaml file with the following:
    apiVersion: v1
    kind: Namespace        
    metadata:
      name: bogo-namespace     
    

    Now, let's use kubectl to post the file to the Kubernetes API server:
    $ kubectl create -f bogo-namespace.yaml
    namespace/bogo-namespace created
    
    $ kubectl get ns
    NAME              STATUS   AGE
    bogo-namespace    Active   8s
    default           Active   20h
    kube-node-lease   Active   20h
    kube-public       Active   20h
    kube-system       Active   20h    
    

    We could have created the namespace with the kubectl create namespace command, without the YAML file:
    $ kubectl create namespace bogo-namespace   
    

    To create resources in the namespace we've created, either add a namespace: bogo-namespace entry to the metadata section, or specify the namespace when creating the resource with the kubectl create command:
    $ kubectl create -f bogo-manual.yaml -n bogo-namespace
    pod/bogo-manual created
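
    Alternatively, as a sketch of the first option mentioned above, the namespace can be set in the manifest itself so that no -n flag is needed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: bogo-manual
      namespace: bogo-namespace
    spec:
      containers:
      - image: dockerbogo/bogo
        name: bogo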
    

    We now have two pods with the same name (bogo-manual). One is in the default namespace, and the other is in our bogo-namespace:
    $ kubectl get pods --all-namespaces
    NAMESPACE        NAME                               READY   STATUS    RESTARTS   AGE
    bogo-namespace   bogo-manual                        1/1     Running   0          95m
    default          bogo-manual                        1/1     Running   0          5h51m
    default          bogo-manual-v2                     1/1     Running   0          3h34m
    kube-system      coredns-f9fd979d6-9xcsw            1/1     Running   2          22h
    kube-system      etcd-minikube                      1/1     Running   1          6h16m
    kube-system      kube-apiserver-minikube            1/1     Running   1          6h16m
    kube-system      kube-controller-manager-minikube   1/1     Running   2          22h
    kube-system      kube-proxy-nmsrh                   1/1     Running   2          22h
    kube-system      kube-scheduler-minikube            1/1     Running   2          22h
    kube-system      storage-provisioner                1/1     Running   5          22h    
    

    We no longer need either the pods in that namespace or the namespace itself. We can delete the whole namespace (the pods will be deleted along with the namespace automatically):
    $ kubectl delete ns bogo-namespace
    namespace "bogo-namespace" deleted
    

    (Note)
    We should know what namespaces don't provide, at least, not out of the box.
    Although namespaces allow us to isolate objects into distinct groups, they don't provide any kind of isolation of running objects.
    For example, we may think that when different users deploy pods across different namespaces, those pods are isolated from each other and can't communicate, but that's not necessarily the case.
    Whether namespaces provide network isolation depends on which networking solution is deployed with Kubernetes. When the solution doesn't provide inter-namespace network isolation, if a pod in namespace "foo" knows the IP address of a pod in namespace "bar", there is nothing preventing it from sending traffic, such as HTTP requests, to the other pod.
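
    If the networking solution does support it, a NetworkPolicy can provide that isolation. Here is a minimal sketch (assuming a plugin that enforces NetworkPolicy, such as Calico) that allows ingress to pods in namespace "foo" only from pods in the same namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-from-other-namespaces
      namespace: foo
    spec:
      podSelector: {}          # applies to every pod in namespace "foo"
      ingress:
      - from:
        - podSelector: {}      # allow traffic only from pods in the same namespace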

  7. What is ReplicationController?
    One of the main benefits of using Kubernetes is that it keeps our containers running in the cluster.
    But what if one of those containers dies? What if all containers of a pod die?
    As soon as a pod is scheduled to a node, the Kubelet on that node will run its containers and keep them running as long as the pod exists. If the container's main process crashes, the Kubelet will restart the container.
    A ReplicationController is a Kubernetes resource that ensures its pods are always kept running. If the pod disappears for any reason, the ReplicationController notices the missing pod and creates a replacement pod.
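
    A minimal ReplicationController descriptor, as a sketch mirroring the ReplicaSet example in the next question (note that a ReplicationController takes a plain map selector rather than matchLabels):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: bogo-rc
    spec:
      replicas: 3
      selector:
        app: bogo
      template:
        metadata:
          labels:
            app: bogo
        spec:
          containers:
          - name: bogo
            image: dockerbogo/bogo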

  8. ReplicaSet
    Initially, ReplicationControllers were the only Kubernetes component for replicating pods and rescheduling them when nodes failed. Later, a similar resource called a ReplicaSet was introduced as a new generation of ReplicationController.
    We're going to create a ReplicaSet with the following yaml, bogo-replicaset.yaml:
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: bogo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: bogo
      template:
        metadata:
          labels:
            app: bogo
        spec:
          containers:
          - name: bogo
            image: dockerbogo/bogo    
    

    The first thing to note is that ReplicaSets are not part of the core v1 API, so we need to make sure to specify the proper apiVersion (apps/v1) when creating the resource.
    We're creating a resource of type ReplicaSet:
    $ kubectl create -f bogo-replicaset.yaml
    replicaset.apps/bogo created    
    
    $ kubectl get rs
    NAME   DESIRED   CURRENT   READY   AGE
    bogo   3         3         3       85s
    
    $ kubectl describe rs
    Name:         bogo
    Namespace:    default
    Selector:     app=bogo
    Labels:       <none>
    Annotations:  <none>
    Replicas:     3 current / 3 desired
    Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:  app=bogo
      Containers:
       bogo:
        Image:        dockerbogo/bogo
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Events:
      Type    Reason            Age   From                   Message
      ----    ------            ----  ----                   -------
      Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-g6x78
      Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-hbdhw
      Normal  SuccessfulCreate  51m   replicaset-controller  Created pod: bogo-89h4f
    

    It shows the ReplicaSet has three replicas matching the selector. Listing the pods confirms the three pods the ReplicaSet created:
    $ kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    bogo-89h4f    1/1     Running   0          54m
    bogo-g6x78    1/1     Running   0          54m
    bogo-hbdhw    1/1     Running   0          54m    
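
    Before cleaning up, we could also scale the ReplicaSet horizontally with kubectl scale (a quick sketch):

    $ kubectl scale rs bogo --replicas=5
    replicaset.apps/bogo scaled

    $ kubectl scale rs bogo --replicas=3
    replicaset.apps/bogo scaled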
    

    We can delete the ReplicaSet to clean up our cluster:
    $ kubectl delete rs bogo
    replicaset.apps "bogo" deleted    
    

    Deleting the ReplicaSet should delete all the pods. List the pods to confirm that's the case:
    $ kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE    
    

  9. DaemonSets
    ReplicaSets are used for running a specific number of pods deployed anywhere in the Kubernetes cluster.
    But certain cases exist when we want a pod to run on each and every node in the cluster and each node needs to run exactly one instance of the pod.
    DaemonSets run only a single pod replica on each node while ReplicaSets scatter them around the whole cluster randomly.
    The use cases of the DaemonSets include infrastructure-related pods that perform system-level operations (such as a log collector and a resource monitor on every node).
    Another good example is Kubernetes' own kube-proxy process, which needs to run on all nodes to make services work.
    To run a pod on all cluster nodes, we create a DaemonSet object, which is much like a ReplicaSet, except that pods created by a DaemonSet already have a target node specified and skip the Kubernetes Scheduler. They aren't scattered around the cluster randomly.
    Whereas a ReplicaSet (or ReplicationController) makes sure that a desired number of pod replicas exist in the cluster, a DaemonSet doesn't have any notion of a desired replica count. It doesn't need it because its job is to ensure that a pod matching its pod selector is running on each node.
    If a node goes down, the DaemonSet doesn't cause the pod to be created elsewhere. But when a new node is added to the cluster, the DaemonSet immediately deploys a new pod instance to it.
    It also does the same if someone inadvertently deletes one of the pods, leaving the node without the DaemonSet's pod. Like a ReplicaSet, a DaemonSet creates the pod from the pod template configured in it.
    A DaemonSet deploys pods to all nodes in the cluster, unless we specify that the pods should only run on a subset of the nodes. This is done by specifying the nodeSelector property in the pod template, which is part of the DaemonSet definition, similar to the pod template in a ReplicaSet.

    Let's create the DaemonSet using ssd-monitor-daemonset.yaml:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ssd-monitor
    spec:
      selector:
        matchLabels:
          app: ssd-monitor
      template:
        metadata:
          labels:
            app: ssd-monitor
        spec:
          nodeSelector:
            disk: ssd
          containers:
          - name: main
            image: dockerbogo/ssd-monitor    
    
    $ kubectl create -f ssd-monitor-daemonset.yaml
    daemonset.apps/ssd-monitor created
    
    $ kubectl get ds
    NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    ssd-monitor   0         0         0       0            0           disk=ssd        3m28s
    

    Those zeroes indicate there's something wrong. Let's list pods:
    $ kubectl get pods
    No resources found in default namespace.    
    

    Where are the pods?
    We forgot to label our node with the disk=ssd label. Let's do that now.
    The DaemonSet should detect that the nodes' labels have changed and deploy the pod to all nodes with a matching label.
    We need to know the node's name when labeling it:
    $ kubectl get node
    NAME       STATUS   ROLES    AGE   VERSION
    minikube   Ready    master   27h   v1.19.0    
    

    Now, we need to add the disk=ssd label to our nodes like this:
    $ kubectl label node minikube disk=ssd
    node/minikube labeled    
    

    The DaemonSet should have created one pod now:
    $ kubectl get pods
    NAME                READY   STATUS    RESTARTS   AGE
    ssd-monitor-jgfdn   1/1     Running   0          13s    
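
    As a follow-up sketch, if we change the node's label, the DaemonSet controller removes its pod from nodes that no longer match the nodeSelector:

    $ kubectl label node minikube disk=hdd --overwrite
    node/minikube labeled

    Listing the pods afterwards should show the ssd-monitor pod terminating.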
    

  10. Service resources - expose a group of pods to external clients
    A Kubernetes Service is a resource for a single entry point to a group of pods. Each service has an IP address and port that never change while the service exists.
    Clients can open connections to that IP and port, and those connections are then routed to one of the pods behind that service. This way, clients of a service don't need to know the location of pods providing the service, allowing those pods to be moved around the cluster at any time.
    The service address doesn't change even if the pod's IP address changes. Additionally, by creating the service, we also enable the pods to easily find the service by its name through either environment variables or DNS.

    A service can be backed by more than one pod and the connections to the service are load-balanced across all the backing pods.
    But how exactly do we define which pods are part of the service and which aren't?
    Though the easiest way to create a service is through kubectl expose, we'll create a service manually by posting a YAML to the Kubernetes API server.
    Here is our bogo-svc.yaml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: bogo
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: bogo    
    

    where port is the port this service will be available on, and targetPort is the container port the service will forward to.
    All pods with the app=bogo label will be part of this service.
    Here we're defining a bogo service which will accept connections on port 80 and route each connection to port 8080 of one of the pods matching the app=bogo label selector.

    Let's create the service by posting the file using kubectl create:
    $ kubectl create -f bogo-svc.yaml
    service/bogo created
    
    $ kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    bogo         ClusterIP   10.108.115.229   <none>        80/TCP    7m20s
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   38h
    

    The list shows that the IP address assigned to the service is 10.108.115.229. Because this is the cluster IP, it's only accessible from inside the cluster.
    The primary purpose of services is exposing groups of pods to other pods in the cluster, though we'll usually also want to expose services externally.

    Let's use the service from inside the cluster and see what it does.
    We can execute the curl command inside one of our existing pods through the kubectl exec command which allows us to remotely run arbitrary commands inside an existing container of a pod.
    $ kubectl create deployment bogo --image=dockerbogo/bogo
    deployment.apps/bogo created
    
    $ kubectl get pod --show-labels
    NAME                    READY   STATUS    RESTARTS   AGE    LABELS
    bogo-764645c96c-v5rzf   1/1     Running   0          168m   app=bogo,pod-template-hash=764645c96c
    
    
    $ kubectl exec bogo-764645c96c-v5rzf -- curl -s http://10.108.115.229
    
    $ kubectl logs bogo-764645c96c-v5rzf
    bogo server starting and listening on 8080...
    

    Note
    The curl from within the pod did not get a response from the service; this needs further investigation.

    The double dash (--) in the command signals the end of command options for kubectl.
    Everything after the double dash is the command that should be executed inside the pod.
    Using the double dash isn't necessary if the command has no arguments that start with a dash. But in our case, if we don't use the double dash there, the -s option would be interpreted as an option for kubectl exec.

  11. How do pods discover a service's IP and port? - Discovering services

    bogo-svc.yaml:
    apiVersion: v1
    kind: Service
    metadata:
      name: bogo
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: bogo    
    

    $ kubectl create -f bogo-svc.yaml
    service/bogo created    
    

    bogo-replicaset.yaml:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: bogo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: bogo
      template:
        metadata:
          labels:
            app: bogo
        spec:
          containers:
          - name: bogo
            image: dockerbogo/bogo    
    

    $ kubectl create -f bogo-replicaset.yaml 
    replicaset.apps/bogo created    
    

    $ kubectl get pods
    NAME         READY   STATUS    RESTARTS   AGE
    bogo-8cktl   1/1     Running   0          47s
    bogo-bgx9z   1/1     Running   0          47s
    bogo-t9df8   1/1     Running   0          47s
    
    $ kubectl get svc
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    bogo         ClusterIP   10.108.125.207   <none>        80/TCP    2m56s
    kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d12h
    
    $ kubectl get rs
    NAME   DESIRED   CURRENT   READY   AGE
    bogo   3         3         3       65s    
    

    By creating a service, we now have a single and stable IP address and port that we can hit to access our pods. This address will remain unchanged throughout the whole lifetime of the service. Pods behind this service may come and go, their IPs may change, their number can go up or down, but they'll always be accessible through the service's single and constant IP address.
    But how do the client pods know the IP and port of a service?
    Each service gets a DNS entry in the internal DNS server running in a kube-dns pod, and client pods that know the name of the service can access it through its fully qualified domain name (FQDN).
    We'll try to access the bogo service through its FQDN instead of its IP and we'll do that inside an existing pod.
    $ kubectl get pods
    NAME         READY   STATUS    RESTARTS   AGE
    bogo-8cktl   1/1     Running   0          27m
    bogo-bgx9z   1/1     Running   0          27m
    bogo-t9df8   1/1     Running   0          27m
    
    $ kubectl exec -it bogo-8cktl -- bash
    root@bogo-8cktl:/#     
    

    We're now inside the container. We can use the curl command to access the bogo service in any of the following ways:
    root@bogo-8cktl:/# curl http://bogo.default.svc.cluster.local 
    You've hit bogo-bgx9z
    
    root@bogo-8cktl:/# curl http://bogo.default.svc
    You've hit bogo-bgx9z
    
    root@bogo-8cktl:/# curl http://bogo.default
    You've hit bogo-t9df8
    
    root@bogo-8cktl:/# curl http://bogo
    You've hit bogo-t9df8
    
    root@bogo-8cktl:/# for i in {1..5}; do curl http://bogo; done
    You've hit bogo-bgx9z
    You've hit bogo-t9df8
    You've hit bogo-bgx9z
    You've hit bogo-t9df8
    You've hit bogo-bgx9z
    

    We can hit our service by using the service's name as the hostname in the requested URL. We can omit the namespace and the svc.cluster.local suffix because of how the DNS resolver inside each pod's container is configured.
    Look at the /etc/resolv.conf file in the container:
    root@bogo-8cktl:/# cat /etc/resolv.conf
    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5
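
    Besides DNS, each container also gets environment variables pointing at every service that existed when its pod was created (here the bogo service was created before the pods, so the variables should be present; names follow the <SERVICE_NAME>_SERVICE_HOST/_PORT convention):

    root@bogo-8cktl:/# env | grep BOGO_SERVICE
    BOGO_SERVICE_HOST=10.108.125.207
    BOGO_SERVICE_PORT=80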
    

  12. Can't ping to a service. Why?
    What if, for whatever reason, we can't access our service?
    We'll most likely try to figure out what's wrong by entering an existing pod and trying to access the service.
    However, if we still can't access the service with a curl command, maybe then we'll try to ping the service IP to see if it's up.
    $ kubectl get pods
    NAME         READY   STATUS    RESTARTS   AGE
    bogo-8cktl   1/1     Running   0          161m
    bogo-bgx9z   1/1     Running   0          161m
    bogo-t9df8   1/1     Running   0          161m
    
    $ kubectl get svc
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    bogo         ClusterIP   10.108.125.207   <none>        80/TCP    174m
    kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d15h
    
    $ kubectl exec -it bogo-8cktl -- bash
    root@bogo-8cktl:/# 
    root@bogo-8cktl:/# curl bogo
    You've hit bogo-bgx9z
    
    root@bogo-8cktl:/# ping bogo
    PING bogo.default.svc.cluster.local (10.108.125.207): 56 data bytes
    ^C--- bogo.default.svc.cluster.local ping statistics ---
    31 packets transmitted, 0 packets received, 100% packet loss
    

    curl to a service works, but ping to it doesn't.
    That's because the service's cluster IP is a virtual IP: it only has meaning when combined with the service port.

  13. What is a Service endpoints object?
    Services don't link to pods directly. Instead, a resource sits in between: the Endpoints resource.
    $ kubectl describe svc bogo
    Name:              bogo
    Namespace:         default
    Labels:            <none>
    Annotations:       <none>
    Selector:          app=bogo
    Type:              ClusterIP
    IP:                10.108.125.207
    Port:              <unset>  80/TCP
    TargetPort:        8080/TCP
    Endpoints:         172.18.0.2:8080,172.18.0.3:8080,172.18.0.4:8080
    Session Affinity:  None
    Events:            <none>    
    

    where the service's pod selector is used to create the list of endpoints; an Endpoints resource is a list of IP addresses and ports exposing a service:
    $ kubectl get endpoints bogo
    NAME   ENDPOINTS                                         AGE
    bogo   172.18.0.2:8080,172.18.0.3:8080,172.18.0.4:8080   96m    
    

    apiVersion: v1
    kind: Service
    metadata:
      name: bogo
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        app: bogo    
    

    Although the pod selector is defined in the service spec, it's not used directly when redirecting incoming connections.
    Instead, the selector is used to build a list of IPs and ports, which is then stored in the Endpoints resource.
    When a client connects to a service, the service proxy selects one of those IP and port pairs and redirects the incoming connection to the server listening at that location.
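
    As a hedged aside, for a service created without a pod selector, Kubernetes doesn't create the Endpoints object automatically; we can supply it manually (the IP addresses below are placeholders), and its name must match the service's name:

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-service    # must match the name of the Service
    subsets:
    - addresses:
      - ip: 11.11.11.11
      - ip: 22.22.22.22
      ports:
      - port: 80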

  14. Why are Ingresses needed?
    One important reason is that each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services.
    The downside of the LoadBalancer service is that each service we expose with a LoadBalancer gets its own IP address, and we have to pay for a load balancer per exposed service, which can get expensive.
    When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
    Ingress is the most useful if we want to expose multiple services under the same IP address, and we only pay for one load balancer.
    Ingresses operate at the application layer of the network stack (HTTP) and can provide features such as cookie-based session affinity and the like, which services can't.
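
    A minimal Ingress sketch (the host is hypothetical, and an Ingress controller must already be running in the cluster) that routes requests for bogo.example.com to our bogo service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: bogo
    spec:
      rules:
      - host: bogo.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bogo
                port:
                  number: 80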














Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization

YouTubeMy YouTube channel

Sponsor Open Source development activities and free contents for everyone.

Thank you.

- K Hong







Docker & K8s



Docker install on Amazon Linux AMI

Docker install on EC2 Ubuntu 14.04

Docker container vs Virtual Machine

Docker install on Ubuntu 14.04

Docker Hello World Application

Nginx image - share/copy files, Dockerfile

Working with Docker images : brief introduction

Docker image and container via docker commands (search, pull, run, ps, restart, attach, and rm)

More on docker run command (docker run -it, docker run --rm, etc.)

Docker Networks - Bridge Driver Network

Docker Persistent Storage

File sharing between host and container (docker run -d -p -v)

Linking containers and volume for datastore

Dockerfile - Build Docker images automatically I - FROM, MAINTAINER, and build context

Dockerfile - Build Docker images automatically II - revisiting FROM, MAINTAINER, build context, and caching

Dockerfile - Build Docker images automatically III - RUN

Dockerfile - Build Docker images automatically IV - CMD

Dockerfile - Build Docker images automatically V - WORKDIR, ENV, ADD, and ENTRYPOINT

Docker - Apache Tomcat

Docker - NodeJS

Docker - NodeJS with hostname

Docker Compose - NodeJS with MongoDB

Docker - Prometheus and Grafana with Docker-compose

Docker - StatsD/Graphite/Grafana

Docker - Deploying a Java EE JBoss/WildFly Application on AWS Elastic Beanstalk Using Docker Containers

Docker : NodeJS with GCP Kubernetes Engine

Docker : Jenkins Multibranch Pipeline with Jenkinsfile and Github

Docker : Jenkins Master and Slave

Docker - ELK : ElasticSearch, Logstash, and Kibana

Docker - ELK 7.6 : Elasticsearch on Centos 7 Docker - ELK 7.6 : Filebeat on Centos 7

Docker - ELK 7.6 : Logstash on Centos 7

Docker - ELK 7.6 : Kibana on Centos 7 Part 1

Docker - ELK 7.6 : Kibana on Centos 7 Part 2

Docker - ELK 7.6 : Elastic Stack with Docker Compose

Docker - Deploy Elastic Cloud on Kubernetes (ECK) via Elasticsearch operator on minikube

Docker - Deploy Elastic Stack via Helm on minikube

Docker Compose - A gentle introduction with WordPress

Docker Compose - MySQL

MEAN Stack app on Docker containers : micro services

Docker Compose - Hashicorp's Vault and Consul Part A (install vault, unsealing, static secrets, and policies)

Docker Compose - Hashicorp's Vault and Consul Part B (EaaS, dynamic secrets, leases, and revocation)

Docker Compose - Hashicorp's Vault and Consul Part C (Consul)

Docker Compose with two containers - Flask REST API service container and an Apache server container

Docker compose : Nginx reverse proxy with multiple containers

Docker compose : Nginx reverse proxy with multiple containers

Docker & Kubernetes : Envoy - Getting started

Docker & Kubernetes : Envoy - Front Proxy

Docker & Kubernetes : Ambassador - Envoy API Gateway on Kubernetes

Docker Packer

Docker Cheat Sheet

Docker Q & A

Kubernetes Q & A - Part I

Kubernetes Q & A - Part II

Docker - Run a React app in a docker

Docker - Run a React app in a docker II (snapshot app with nginx)

Docker - NodeJS and MySQL app with React in a docker

Docker - Step by Step NodeJS and MySQL app with React - I

Installing LAMP via puppet on Docker

Docker install via Puppet

Nginx Docker install via Ansible

Apache Hadoop CDH 5.8 Install with QuickStarts Docker

Docker - Deploying Flask app to ECS

Docker Compose - Deploying WordPress to AWS

Docker - WordPress Deploy to ECS with Docker-Compose (ECS-CLI EC2 type)

Docker - ECS Fargate

Docker - AWS ECS service discovery with Flask and Redis

Docker & Kubernetes: minikube version: v1.31.2, 2023

Docker & Kubernetes 1 : minikube

Docker & Kubernetes 2 : minikube Django with Postgres - persistent volume

Docker & Kubernetes 3 : minikube Django with Redis and Celery

Docker & Kubernetes 4 : Django with RDS via AWS Kops

Docker & Kubernetes : Kops on AWS

Docker & Kubernetes : Ingress controller on AWS with Kops

Docker & Kubernetes : HashiCorp's Vault and Consul on minikube

Docker & Kubernetes : HashiCorp's Vault and Consul - Auto-unseal using Transit Secrets Engine

Docker & Kubernetes : Persistent Volumes & Persistent Volumes Claims - hostPath and annotations

Docker & Kubernetes : Persistent Volumes - Dynamic volume provisioning

Docker & Kubernetes : DaemonSet

Docker & Kubernetes : Secrets

Docker & Kubernetes : kubectl command

Docker & Kubernetes : Assign a Kubernetes Pod to a particular node in a Kubernetes cluster

Docker & Kubernetes : Configure a Pod to Use a ConfigMap

AWS : EKS (Elastic Container Service for Kubernetes)

Docker & Kubernetes : Run a React app in a minikube

Docker & Kubernetes : Minikube install on AWS EC2

Docker & Kubernetes : Cassandra with a StatefulSet

Docker & Kubernetes : Terraform and AWS EKS

Docker & Kubernetes : Pods and Service definitions

Docker & Kubernetes : Headless service and discovering pods

Docker & Kubernetes : Service IP and the Service Type

Docker & Kubernetes : Kubernetes DNS with Pods and Services

Docker & Kubernetes - Scaling and Updating application

Docker & Kubernetes : Horizontal pod autoscaler on minikubes

Docker & Kubernetes : NodePort vs LoadBalancer vs Ingress

Docker & Kubernetes : Load Testing with Locust on GCP Kubernetes

Docker & Kubernetes : From a monolithic app to micro services on GCP Kubernetes

Docker & Kubernetes : Rolling updates

Docker & Kubernetes : Deployments to GKE (Rolling update, Canary and Blue-green deployments)

Docker & Kubernetes : Slack Chat Bot with NodeJS on GCP Kubernetes

Docker & Kubernetes : Continuous Delivery with Jenkins Multibranch Pipeline for Dev, Canary, and Production Environments on GCP Kubernetes

Docker & Kubernetes - MongoDB with StatefulSets on GCP Kubernetes Engine

Docker & Kubernetes : Nginx Ingress Controller on minikube

Docker & Kubernetes : Setting up Ingress with NGINX Controller on Minikube (Mac)

Docker & Kubernetes : Nginx Ingress Controller for Dashboard service on Minikube

Docker & Kubernetes : Nginx Ingress Controller on GCP Kubernetes

Docker & Kubernetes : Kubernetes Ingress with AWS ALB Ingress Controller in EKS

Docker & Kubernetes : MongoDB / MongoExpress on Minikube

Docker & Kubernetes : Setting up a private cluster on GCP Kubernetes

Docker & Kubernetes : Kubernetes Namespaces (default, kube-public, kube-system) and switching namespaces (kubens)

Docker & Kubernetes : StatefulSets on minikube

Docker & Kubernetes : StatefulSets on minikube

Docker & Kubernetes : RBAC

Docker & Kubernetes Service Account, RBAC, and IAM

Docker & Kubernetes - Kubernetes Service Account, RBAC, IAM with EKS ALB, Part 1

Docker & Kubernetes : Helm Chart

Docker & Kubernetes : My first Helm deploy

Docker & Kubernetes : Readiness and Liveness Probes

Docker & Kubernetes : Helm chart repository with Github pages

Docker & Kubernetes : Deploying WordPress and MariaDB with Ingress to Minikube using Helm Chart

Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 2 Chart

Docker & Kubernetes : Deploying WordPress and MariaDB to AWS using Helm 3 Chart

Docker & Kubernetes : Helm Chart for Node/Express and MySQL with Ingress

Docker & Kubernetes : Docker_Helm_Chart_Node_Expess_MySQL_Ingress.php

Docker & Kubernetes: Deploy Prometheus and Grafana using Helm and Prometheus Operator - Monitoring Kubernetes node resources out of the box

Docker & Kubernetes : Deploy Prometheus and Grafana using kube-prometheus-stack Helm Chart

Docker & Kubernetes : Istio (service mesh) sidecar proxy on GCP Kubernetes

Docker & Kubernetes : Istio on EKS

Docker & Kubernetes : Istio on Minikube with AWS EC2 for Bookinfo Application

Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part I)

Docker & Kubernetes : Deploying .NET Core app to Kubernetes Engine and configuring its traffic managed by Istio (Part II - Prometheus, Grafana, pin a service, split traffic, and inject faults)

Docker & Kubernetes : Helm Package Manager with MySQL on GCP Kubernetes Engine

Docker & Kubernetes : Deploying Memcached on Kubernetes Engine

Docker & Kubernetes : EKS Control Plane (API server) Metrics with Prometheus

Docker & Kubernetes : Spinnaker on EKS with Halyard

Docker & Kubernetes : Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine

Docker & Kubernetes: Multi-node Local Kubernetes cluster - Kubeadm-dind(docker-in-docker)

Docker & Kubernetes: Multi-node Local Kubernetes cluster - Kubeadm-kind(k8s-in-docker)

Docker & Kubernetes : nodeSelector, nodeAffinity, taints/tolerations, pod affinity and anti-affinity - Assigning Pods to Nodes

Docker & Kubernetes : Jenkins-X on EKS

Docker & Kubernetes : ArgoCD App of Apps with Heml on Kubernetes

Docker & Kubernetes : ArgoCD on Kubernetes cluster

Docker & Kubernetes : GitOps with ArgoCD for Continuous Delivery to Kubernetes clusters (minikube) - guestbook




Sponsor Open Source development activities and free contents for everyone.

Thank you.

- K Hong







Ansible 2.0



What is Ansible?

Quick Preview - Setting up web servers with Nginx, configure environments, and deploy an App

SSH connection & running commands

Ansible: Playbook for Tomcat 9 on Ubuntu 18.04 systemd with AWS

Modules

Playbooks

Handlers

Roles

Playbook for LAMP HAProxy

Installing Nginx on a Docker container

AWS : Creating an ec2 instance & adding keys to authorized_keys

AWS : Auto Scaling via AMI

AWS : creating an ELB & registers an EC2 instance from the ELB

Deploying Wordpress micro-services with Docker containers on Vagrant box via Ansible

Setting up Apache web server

Deploying a Go app to Minikube

Ansible with Terraform





Terraform



Introduction to Terraform with AWS elb & nginx

Terraform Tutorial - terraform format(tf) and interpolation(variables)

Terraform Tutorial - user_data

Terraform Tutorial - variables

Terraform 12 Tutorial - Loops with count, for_each, and for

Terraform Tutorial - creating multiple instances (count, list type and element() function)

Terraform Tutorial - State (terraform.tfstate) & terraform import

Terraform Tutorial - Output variables

Terraform Tutorial - Destroy

Terraform Tutorial - Modules

Terraform Tutorial - Creating AWS S3 bucket / SQS queue resources and notifying bucket event to queue

Terraform Tutorial - AWS ASG and Modules

Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server I

Terraform Tutorial - VPC, Subnets, RouteTable, ELB, Security Group, and Apache server II

Terraform Tutorial - Docker nginx container with ALB and dynamic autoscaling

Terraform Tutorial - AWS ECS using Fargate : Part I

Hashicorp Vault

HashiCorp Vault Agent

HashiCorp Vault and Consul on AWS with Terraform

Ansible with Terraform

AWS IAM user, group, role, and policies - part 1

AWS IAM user, group, role, and policies - part 2

Delegate Access Across AWS Accounts Using IAM Roles

AWS KMS

terraform import & terraformer import

Terraform commands cheat sheet

Terraform Cloud

Terraform 14

Creating Private TLS Certs





DevOps



Phases of Continuous Integration

Software development methodology

Introduction to DevOps

Samples of Continuous Integration (CI) / Continuous Delivery (CD) - Use cases

Artifact repository and repository management

Linux - General, shell programming, processes & signals ...

RabbitMQ...

MariaDB

New Relic APM with NodeJS : simple agent setup on AWS instance

Nagios on CentOS 7 with Nagios Remote Plugin Executor (NRPE)

Nagios - The industry standard in IT infrastructure monitoring on Ubuntu

Zabbix 3 install on Ubuntu 14.04 & adding hosts / items / graphs

Datadog - Monitoring with PagerDuty/HipChat and APM

Install and Configure Mesos Cluster

Cassandra on a Single-Node Cluster

Container Orchestration : Docker Swarm vs Kubernetes vs Apache Mesos

OpenStack install on Ubuntu 16.04 server - DevStack

AWS EC2 Container Service (ECS) & EC2 Container Registry (ECR) | Docker Registry

CI/CD with CircleCI - Heroku deploy

Introduction to Terraform with AWS elb & nginx

Docker & Kubernetes

Kubernetes I - Running Kubernetes Locally via Minikube

Kubernetes II - kops on AWS

Kubernetes III - kubeadm on AWS

AWS : EKS (Elastic Container Service for Kubernetes)

CI/CD Github actions

CI/CD Gitlab



DevOps / Sys Admin Q & A



(1A) - Linux Commands

(1B) - Linux Commands

(2) - Networks

(2B) - Networks

(3) - Linux Systems

(4) - Scripting (Ruby/Shell)

(5) - Configuration Management

(6) - AWS VPC setup (public/private subnets with NAT)

(6B) - AWS VPC Peering

(7) - Web server

(8) - Database

(9) - Linux System / Application Monitoring, Performance Tuning, Profiling Methods & Tools

(10) - Trouble Shooting: Load, Throughput, Response time and Leaks

(11) - SSH key pairs, SSL Certificate, and SSL Handshake

(12) - Why is the database slow?

(13) - Is my web site down?

(14) - Is my server down?

(15) - Why is the server sluggish?

(16A) - Serving multiple domains using Virtual Hosts - Apache

(16B) - Serving multiple domains using server block - Nginx

(16C) - Reverse proxy servers and load balancers - Nginx

(17) - Linux startup process

(18) - phpMyAdmin with Nginx virtual host as a subdomain

(19) - How to SSH login without password?

(20) - Log Rotation

(21) - Monitoring Metrics

(22) - lsof

(23) - Wireshark introduction

(24) - User account management

(25) - Domain Name System (DNS)

(26) - NGINX SSL/TLS, Caching, and Session

(27) - Troubleshooting 5xx server errors

(28) - Linux Systemd: journalctl

(29) - Linux Systemd: FirewallD

(30) - Linux: SELinux

(31) - Linux: Samba

(0) - Linux Sys Admin's Day to Day tasks





Jenkins



Install

Configuration - Manage Jenkins - security setup

Adding job and build

Scheduling jobs

Managing plugins

Git/GitHub plugins, SSH keys configuration, and Fork/Clone

JDK & Maven setup

Build configuration for GitHub Java application with Maven

Build Action for GitHub Java application with Maven - Console Output, Updating Maven

Committing changes to GitHub & new test results - Build Failure

Committing changes to GitHub & new test results - Successful Build

Adding code coverage and metrics

Jenkins on EC2 - creating an EC2 account, ssh to EC2, and install Apache server

Jenkins on EC2 - setting up Jenkins account, plugins, and Configure System (JAVA_HOME, MAVEN_HOME, notification email)

Jenkins on EC2 - Creating a Maven project

Jenkins on EC2 - Configuring GitHub Hook and Notification service to Jenkins server for any changes to the repository

Jenkins on EC2 - Line Coverage with JaCoCo plugin

Setting up Master and Slave nodes

Jenkins Build Pipeline & Dependency Graph Plugins

Jenkins Build Flow Plugin

Pipeline Jenkinsfile with Classic / Blue Ocean

Jenkins Setting up Slave nodes on AWS

Jenkins Q & A





Puppet



Puppet with Amazon AWS I - Puppet accounts

Puppet with Amazon AWS II (ssh & puppetmaster/puppet install)

Puppet with Amazon AWS III - Puppet running Hello World

Puppet Code Basics - Terminology

Puppet with Amazon AWS on CentOS 7 (I) - Master setup on EC2

Puppet with Amazon AWS on CentOS 7 (II) - Configuring a Puppet Master Server with Passenger and Apache

Puppet master/agent Ubuntu 14.04 install on EC2 nodes

Puppet master post-install tasks - master's names and certificates setup

Puppet agent post-install tasks - configure agent, hostnames, and sign request

EC2 Puppet master/agent basic tasks - main manifest with a file resource/module and immediate execution on an agent node

Setting up puppet master and agent with simple scripts on EC2 / remote install from desktop

EC2 Puppet - Install lamp with a manifest ('puppet apply')

EC2 Puppet - Install lamp with a module

Puppet variable scope

Puppet packages, services, and files

Puppet packages, services, and files II with nginx Puppet templates

Puppet creating and managing user accounts with SSH access

Puppet Locking user accounts & deploying sudoers file

Puppet exec resource

Puppet classes and modules

Puppet Forge modules

Puppet Express

Puppet Express 2

Puppet 4 : Changes

Puppet --configprint

Puppet with Docker

Puppet 6.0.2 install on Ubuntu 18.04





Chef



What is Chef?

Chef install on Ubuntu 14.04 - Local Workstation via omnibus installer

Setting up Hosted Chef server

VirtualBox via Vagrant with Chef client provision

Creating and using cookbooks on a VirtualBox node

Chef server install on Ubuntu 14.04

Chef workstation setup on EC2 Ubuntu 14.04

Chef Client Node - Knife Bootstrapping a node on EC2 ubuntu 14.04





Elasticsearch (search engine), Logstash, and Kibana



Elasticsearch, search engine

Logstash with Elasticsearch

Logstash, Elasticsearch, and Kibana 4

Elasticsearch with Redis broker and Logstash Shipper and Indexer

Samples of ELK architecture

Elasticsearch indexing performance



Vagrant



VirtualBox & Vagrant install on Ubuntu 14.04

Creating a VirtualBox using Vagrant

Provisioning

Networking - Port Forwarding

Vagrant Share

Vagrant Rebuild & Teardown

Vagrant & Ansible





Big Data & Hadoop Tutorials



Hadoop 2.6 - Installing on Ubuntu 14.04 (Single-Node Cluster)

Hadoop 2.6.5 - Installing on Ubuntu 16.04 (Single-Node Cluster)

Hadoop - Running MapReduce Job

Hadoop - Ecosystem

CDH5.3 Install on four EC2 instances (1 NameNode and 3 DataNodes) using Cloudera Manager 5

CDH5 APIs

QuickStart VMs for CDH 5.3

QuickStart VMs for CDH 5.3 II - Testing with wordcount

QuickStart VMs for CDH 5.3 III - Hive DB query

Scheduled start and stop CDH services

CDH 5.8 Install with QuickStarts Docker

Zookeeper & Kafka Install

Zookeeper & Kafka - single node single broker

Zookeeper & Kafka - Single node and multiple brokers

OLTP vs OLAP

Apache Hadoop Tutorial I with CDH - Overview

Apache Hadoop Tutorial II with CDH - MapReduce Word Count

Apache Hadoop Tutorial III with CDH - MapReduce Word Count 2

Apache Hadoop (CDH 5) Hive Introduction

CDH5 - Hive Upgrade from 1.2 to 1.3

Apache Hive 2.1.0 install on Ubuntu 16.04

Apache HBase in Pseudo-Distributed mode

Creating HBase table with HBase shell and HUE

Apache Hadoop : Hue 3.11 install on Ubuntu 16.04

Creating HBase table with Java API

HBase - Map, Persistent, Sparse, Sorted, Distributed and Multidimensional

Flume with CDH5: a single-node Flume deployment (telnet example)

Apache Hadoop (CDH 5) Flume with VirtualBox : syslog example via NettyAvroRpcClient

List of Apache Hadoop hdfs commands

Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 1

Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 2

Apache Hadoop : Creating Card Java Project with Eclipse using Cloudera VM UnoExample for CDH5 - local run

Apache Hadoop : Creating Wordcount Maven Project with Eclipse

Wordcount MapReduce with Oozie workflow with Hue browser - CDH 5.3 Hadoop cluster using VirtualBox and QuickStart VM

Spark 1.2 using VirtualBox and QuickStart VM - wordcount

Spark Programming Model : Resilient Distributed Dataset (RDD) with CDH

Apache Spark 2.0.2 with PySpark (Spark Python API) Shell

Apache Spark 2.0.2 tutorial with PySpark : RDD

Apache Spark 2.0.0 tutorial with PySpark : Analyzing Neuroimaging Data with Thunder

Apache Spark Streaming with Kafka and Cassandra

Apache Spark 1.2 with PySpark (Spark Python API) Wordcount using CDH5

Apache Spark 1.2 Streaming

Apache Drill with ZooKeeper install on Ubuntu 16.04 - Embedded & Distributed

Apache Drill - Query File System, JSON, and Parquet

Apache Drill - HBase query

Apache Drill - Hive query

Apache Drill - MongoDB query





Redis In-Memory Database



Redis vs Memcached

Redis 3.0.1 Install

Setting up multiple server instances on a Linux host

Redis with Python

ELK : Elasticsearch with Redis broker and Logstash Shipper and Indexer



GCP (Google Cloud Platform)



GCP: Creating an Instance

GCP: gcloud compute command-line tool

GCP: Deploying Containers

GCP: Kubernetes Quickstart

GCP: Deploying a containerized web application via Kubernetes

GCP: Django Deploy via Kubernetes I (local)

GCP: Django Deploy via Kubernetes II (GKE)





AWS (Amazon Web Services)



AWS : EKS (Elastic Container Service for Kubernetes)

AWS : Creating a snapshot (cloning an image)

AWS : Attaching Amazon EBS volume to an instance

AWS : Adding swap space to an attached volume via mkswap and swapon

AWS : Creating an EC2 instance and attaching Amazon EBS volume to the instance using Python boto module with User data

AWS : Creating an instance to a new region by copying an AMI

AWS : S3 (Simple Storage Service) 1

AWS : S3 (Simple Storage Service) 2 - Creating and Deleting a Bucket

AWS : S3 (Simple Storage Service) 3 - Bucket Versioning

AWS : S3 (Simple Storage Service) 4 - Uploading a large file

AWS : S3 (Simple Storage Service) 5 - Uploading folders/files recursively

AWS : S3 (Simple Storage Service) 6 - Bucket Policy for File/Folder View/Download

AWS : S3 (Simple Storage Service) 7 - How to Copy or Move Objects from one region to another

AWS : S3 (Simple Storage Service) 8 - Archiving S3 Data to Glacier

AWS : Creating a CloudFront distribution with an Amazon S3 origin

AWS : Creating VPC with CloudFormation

WAF (Web Application Firewall) with preconfigured CloudFormation template and Web ACL for CloudFront distribution

AWS : CloudWatch & Logs with Lambda Function / S3

AWS : Lambda Serverless Computing with EC2, CloudWatch Alarm, SNS

AWS : Lambda and SNS - cross account

AWS : CLI (Command Line Interface)

AWS : CLI (ECS with ALB & autoscaling)

AWS : ECS with cloudformation and json task definition

AWS : AWS Application Load Balancer (ALB) and ECS with Flask app

AWS : Load Balancing with HAProxy (High Availability Proxy)

AWS : VirtualBox on EC2

AWS : NTP setup on EC2

AWS : jq with AWS

AWS : AWS & OpenSSL : Creating / Installing a Server SSL Certificate

AWS : OpenVPN Access Server 2 Install

AWS : VPC (Virtual Private Cloud) 1 - netmask, subnets, default gateway, and CIDR

AWS : VPC (Virtual Private Cloud) 2 - VPC Wizard

AWS : VPC (Virtual Private Cloud) 3 - VPC Wizard with NAT

AWS : DevOps / Sys Admin Q & A (VI) - AWS VPC setup (public/private subnets with NAT)

AWS : OpenVPN Protocols : PPTP, L2TP/IPsec, and OpenVPN

AWS : Autoscaling group (ASG)

AWS : Setting up Autoscaling Alarms and Notifications via CLI and Cloudformation

AWS : Adding an SSH User Account on Linux Instance

AWS : Windows Servers - Remote Desktop Connections using RDP

AWS : Scheduled stopping and starting an instance - python & cron

AWS : Detecting a stopped instance and sending an alert email using Mandrill SMTP

AWS : Elastic Beanstalk with NodeJS

AWS : Elastic Beanstalk Inplace/Rolling Blue/Green Deploy

AWS : Identity and Access Management (IAM) Roles for Amazon EC2

AWS : Identity and Access Management (IAM) Policies, sts AssumeRole, and delegate access across AWS accounts

AWS : Identity and Access Management (IAM) sts assume role via AWS CLI v2

AWS : Creating IAM Roles and associating them with EC2 Instances in CloudFormation

AWS Identity and Access Management (IAM) Roles, SSO(Single Sign On), SAML(Security Assertion Markup Language), IdP(identity provider), STS(Security Token Service), and ADFS(Active Directory Federation Services)

AWS : Amazon Route 53

AWS : Amazon Route 53 - DNS (Domain Name Server) setup

AWS : Amazon Route 53 - subdomain setup and virtual host on Nginx

AWS Amazon Route 53 : Private Hosted Zone

AWS : SNS (Simple Notification Service) example with ELB and CloudWatch

AWS : Lambda with AWS CloudTrail

AWS : SQS (Simple Queue Service) with NodeJS and AWS SDK

AWS : Redshift data warehouse

AWS : CloudFormation - templates, change sets, and CLI

AWS : CloudFormation Bootstrap UserData/Metadata

AWS : CloudFormation - Creating an ASG with rolling update

AWS : Cloudformation Cross-stack reference

AWS : OpsWorks

AWS : Network Load Balancer (NLB) with Autoscaling group (ASG)

AWS CodeDeploy : Deploy an Application from GitHub

AWS EC2 Container Service (ECS)

AWS EC2 Container Service (ECS) II

AWS Hello World Lambda Function

AWS Lambda Function Q & A

AWS Node.js Lambda Function & API Gateway

AWS API Gateway endpoint invoking Lambda function

AWS API Gateway invoking Lambda function with Terraform

AWS API Gateway invoking Lambda function with Terraform - Lambda Container

Amazon Kinesis Streams

Kinesis Data Firehose with Lambda and ElasticSearch

Amazon DynamoDB

Amazon DynamoDB with Lambda and CloudWatch

Loading DynamoDB stream to AWS Elasticsearch service with Lambda

Amazon ML (Machine Learning)

Simple Systems Manager (SSM)

AWS : RDS Connecting to a DB Instance Running the SQL Server Database Engine

AWS : RDS Importing and Exporting SQL Server Data

AWS : RDS PostgreSQL & pgAdmin III

AWS : RDS PostgreSQL 2 - Creating/Deleting a Table

AWS : MySQL Replication : Master-slave

AWS : MySQL backup & restore

AWS RDS : Cross-Region Read Replicas for MySQL and Snapshots for PostgreSQL

AWS : Restoring Postgres on EC2 instance from S3 backup

AWS : Q & A

AWS : Security

AWS : Security groups vs. network ACLs

AWS : Scaling-Up

AWS : Networking

AWS : Single Sign-on (SSO) with Okta

AWS : JIT (Just-in-Time) with Okta





Powershell 4 Tutorial



Powershell : Introduction

Powershell : Help System

Powershell : Running commands

Powershell : Providers

Powershell : Pipeline

Powershell : Objects

Powershell : Remote Control

Windows Management Instrumentation (WMI)

How to Enable Multiple RDP Sessions in Windows 2012 Server

How to install and configure FTP server on IIS 8 in Windows 2012 Server

How to Run Exe as a Service on Windows 2012 Server

SQL Inner, Left, Right, and Outer Joins





Git/GitHub Tutorial



One-page express tutorial for Git and GitHub

Installation

add/status/log

commit and diff

git commit --amend

Deleting and Renaming files

Undoing Things : File Checkout & Unstaging

Reverting commit

Soft Reset - (git reset --soft <SHA key>)

Mixed Reset - Default

Hard Reset - (git reset --hard <SHA key>)

Creating & switching Branches

Fast-forward merge

Rebase & Three-way merge

Merge conflicts with a simple example

GitHub Account and SSH

Uploading to GitHub

GUI

Branching & Merging

Merging conflicts

GIT on Ubuntu and OS X - Focused on Branching

Setting up a remote repository / pushing local project and cloning the remote repo

Fork vs Clone, Origin vs Upstream

Git/GitHub Terminologies

Git/GitHub via SourceTree II : Branching & Merging

Git/GitHub via SourceTree III : Git Work Flow

Git/GitHub via SourceTree IV : Git Reset

Git wiki - quick command reference






Subversion

Subversion Install On Ubuntu 14.04

Subversion creating and accessing I

Subversion creating and accessing II







