Kubernetes II - kops (kubernetes-ops) on AWS

Introduction

In this post, using kops installed on my Mac, we'll build a Kubernetes cluster consisting of 1 master (a t3.medium instance in this post, though smaller instances such as t3.micro or t3.small will work as well) and 2 worker nodes (t2.nano). Note that the t3 family is a better fit for the master because it is better tuned for the bursty workload of the master node.

Then, we are going to deploy two types of services: NodePort and LoadBalancer.



What is kops

kops (kubernetes-ops) helps us create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and OpenStack in beta support, and VMware vSphere in alpha, and other platforms planned.


Kubernetes provides us with highly resilient infrastructure, automatic scaling, rollbacks, and self-healing of containers. It exposes REST APIs, hiding the complicated management of containers from us.

In short, we can think of it as kubectl for clusters.

One of the ways kops integrates with the existing Kubernetes community and software is that the main interaction with kops is via a relatively easy to use, declarative syntax command line utility, that's a lot like kubectl.

At a basic level, the difference between kops and kubectl is that kubectl interacts with applications or services deployed in your cluster, while kops interacts with entire clusters at a time. Instead of scaling an application deployment to 30 replicas like before, you might scale the number of nodes to 30 with a few quick declarative commands. As the administrator, that means you're not responsible for individually provisioning all the extra compute instances, and you don't have to rely on traditional DevOps tools like Chef or Ansible to configure them.

If you give kops a job within its scope, it gets it done and helps direct your choices to industry best practices. In addition, creating and managing Kubernetes clusters with kops is much faster than the first generation, more generalized DevOps tools. We're talking about under 20 minutes for a highly available multiple master cluster of essentially any size in private network topology. All it takes is a few choices, and kops works out the details of how to get it done.

- from kops 101 - The Kubernetes Deployment Game-Changer

In this tutorial, we'll be installing Kubernetes on AWS with kops.





Kubernetes Terminology
  1. Nodes:
    Hosts that run Kubernetes applications
  2. Master:
    The Master is responsible for managing the cluster. The master coordinates all activities in our cluster (scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates).
  3. Containers:
    Units of packaging
  4. kubelets:
    Each node has a Kubelet. It is an agent for managing the node and communicating with the Kubernetes master
  5. Pods:
    Units of deployment; a pod is a collection of containers
  6. Replication Controller:
    Ensures availability and scalability
  7. Labels:
    Key-value pairs for identification
  8. Services:
    Collection of pods exposed as an endpoint




Install kubectl

We must have kubectl installed in order to control Kubernetes clusters. As described earlier, kubectl is a command-line tool that interacts with the kube-apiserver and sends commands to the master node; each kubectl command is converted into an API call.

On Linux, we can install it as a binary using the native Ubuntu/Debian package management:

$ sudo apt-get update && sudo apt-get install -y apt-transport-https
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo touch /etc/apt/sources.list.d/kubernetes.list 
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubectl

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
...
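
Since this post runs kops from a Mac, a minimal alternative (assuming Homebrew is installed) is to install kubectl with brew and verify the client version:

$ brew install kubectl
$ kubectl version --client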




Install AWS CLI

We must have the AWS CLI installed so that we can configure AWS credentials and manage AWS resources (such as the S3 state store bucket below) from the command line; kops picks up the same credentials to create resources in our account.

On Linux, we can install it as follows:

$ sudo pip install awscli
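
kops needs AWS credentials with sufficient permissions (the kops getting-started docs list EC2, Route53, S3, IAM, and VPC access for the account it runs as). A minimal sketch for wiring up those credentials, assuming an IAM user and access key already exist:

$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json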




Install kops

Download kops from the releases page :

$ curl -LO  https://github.com/kubernetes/kops/releases/download/v1.18.0/kops-darwin-amd64
$ chmod +x ./kops-darwin-amd64
$ sudo mv ./kops-darwin-amd64 /usr/local/bin/kops
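
The binary above is the macOS (darwin) build. On Linux, the equivalent release asset is kops-linux-amd64; a minimal sketch:

$ curl -LO https://github.com/kubernetes/kops/releases/download/v1.18.0/kops-linux-amd64
$ chmod +x ./kops-linux-amd64
$ sudo mv ./kops-linux-amd64 /usr/local/bin/kops

Either way, verify the install:

$ kops version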





Create a Route53 domain for our cluster

kops is an opinionated provisioning system:

  1. Fully automated installation
  2. Uses DNS to identify clusters
  3. Self-healing: everything runs in Auto-Scaling Groups
  4. Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS)
  5. High-Availability support
  6. Can directly provision, or generate terraform manifests

kops uses DNS for discovery, both inside the cluster and so that we can reach the kubernetes API server from clients. We will use one of my already existing domains in Route53, pykey.com.

kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so we will no longer get our clusters confused, we can share clusters with our colleagues unambiguously, and we can reach them without relying on remembering an IP address.

A Route53 hosted zone can serve subdomains. We can use subdomains to divide our clusters. As for our example, we will use useast1.dev.pykey.com. The API server endpoint will then be api.useast1.dev.pykey.com.
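
If the dev.pykey.com hosted zone does not exist yet, it can be created with the AWS CLI; a minimal sketch (the NS records returned for the new zone must then be added to the parent pykey.com zone so that the subdomain is delegated correctly):

$ aws route53 create-hosted-zone --name dev.pykey.com --caller-reference dev-pykey-$(date +%s)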

We can double-check that the hosted zone is configured correctly, if we have the dig tool, by running:

$ dig ns dev.pykey.com
...
;; ANSWER SECTION:
dev.pykey.com.		300	IN	CNAME	pykey.com.
pykey.com.		172800	IN	NS	ns-569.awsdns-07.net.
pykey.com.		172800	IN	NS	ns-105.awsdns-13.com.
pykey.com.		172800	IN	NS	ns-1158.awsdns-16.org.
pykey.com.		172800	IN	NS	ns-1785.awsdns-31.co.uk.
...

We should see the 4 NS records that Route53 assigned our hosted zone. This is a critical component when setting up clusters. If we are experiencing problems with the Kubernetes API not coming up, chances are something is wrong with the cluster's DNS.





Create an S3 bucket to store our clusters' state

kops lets us manage our clusters even after installation. To do this, it must keep track of the clusters that we have created, along with their configuration, the keys they are using etc. This information is stored in an S3 bucket. S3 permissions are used to control access to the bucket.

In our example, we chose dev.pykey.com as our hosted zone, so let's pick clusters.dev.pykey.com as the S3 bucket name.

Let's create the S3 bucket using:

$ aws s3 mb s3://clusters.dev.pykey.com
make_bucket: clusters.dev.pykey.com

We can export KOPS_STATE_STORE=s3://clusters.dev.pykey.com and then kops will use this location by default:

$ export KOPS_STATE_STORE=s3://clusters.dev.pykey.com    

We may want to put this in our bash profile.
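
The kops documentation also recommends enabling versioning on the state store bucket so that an earlier cluster state can be recovered if needed; a minimal sketch:

$ aws s3api put-bucket-versioning --bucket clusters.dev.pykey.com --versioning-configuration Status=Enabled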





Build cluster configuration

The following commands are used in this section:

  1. kops create cluster: creates a cluster configuration. It does NOT actually create the cloud resources. This gives us an opportunity to review the configuration or change it.
  2. kops update cluster: actually creates (or updates) the cloud resources in AWS.

Run kops create cluster to create our cluster configuration:

$ kops create cluster --zones=us-east-1a,us-east-1b,us-east-1c useast1.dev.pykey.com
I0903 14:24:04.227661   49380 create_cluster.go:557] Inferred --cloud=aws from zone "us-east-1a"
I0903 14:24:04.649711   49380 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet us-east-1a
I0903 14:24:04.649756   49380 subnets.go:184] Assigned CIDR 172.20.64.0/19 to subnet us-east-1b
I0903 14:24:04.649778   49380 subnets.go:184] Assigned CIDR 172.20.96.0/19 to subnet us-east-1c
I0903 14:24:07.083525   49380 create_cluster.go:1547] Using SSH public key: /Users/kihyuckhong/.ssh/id_rsa.pub
Previewing changes that will be made:

I0903 14:24:09.588591   49380 executor.go:103] Tasks: 0 done / 91 total; 43 can run
I0903 14:24:10.950585   49380 executor.go:103] Tasks: 43 done / 91 total; 28 can run
I0903 14:24:11.651943   49380 executor.go:103] Tasks: 71 done / 91 total; 18 can run
I0903 14:24:12.270655   49380 executor.go:103] Tasks: 89 done / 91 total; 2 can run
I0903 14:24:12.420486   49380 executor.go:103] Tasks: 91 done / 91 total; 0 can run
Will create resources:
  AutoscalingGroup/master-us-east-1a.masters.useast1.dev.pykey.com
  	Granularity         	1Minute
  	LaunchConfiguration 	name:master-us-east-1a.masters.useast1.dev.pykey.com
  	MaxSize             	1
  	Metrics             	[GroupDesiredCapacity, GroupInServiceInstances, GroupMaxSize, GroupMinSize, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, GroupTotalInstances]
  	MinSize             	1
  	Subnets             	[name:us-east-1a.useast1.dev.pykey.com]
  	SuspendProcesses    	[]
  	Tags                	{Name: master-us-east-1a.masters.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: master-us-east-1a, k8s.io/role/master: 1, kops.k8s.io/instancegroup: master-us-east-1a}

  AutoscalingGroup/nodes.useast1.dev.pykey.com
  	Granularity         	1Minute
  	LaunchConfiguration 	name:nodes.useast1.dev.pykey.com
  	MaxSize             	2
  	Metrics             	[GroupDesiredCapacity, GroupInServiceInstances, GroupMaxSize, GroupMinSize, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, GroupTotalInstances]
  	MinSize             	2
  	Subnets             	[name:us-east-1a.useast1.dev.pykey.com, name:us-east-1b.useast1.dev.pykey.com, name:us-east-1c.useast1.dev.pykey.com]
  	SuspendProcesses    	[]
  	Tags                	{Name: nodes.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: nodes, k8s.io/role/node: 1, kops.k8s.io/instancegroup: nodes}

  DHCPOptions/useast1.dev.pykey.com
  	DomainName          	ec2.internal
  	DomainNameServers   	AmazonProvidedDNS
  	Shared              	false
  	Tags                	{Name: useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  EBSVolume/a.etcd-events.useast1.dev.pykey.com
  	AvailabilityZone    	us-east-1a
  	Encrypted           	false
  	SizeGB              	20
  	Tags                	{kubernetes.io/cluster/useast1.dev.pykey.com: owned, Name: a.etcd-events.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, k8s.io/etcd/events: a/a, k8s.io/role/master: 1}
  	VolumeType          	gp2

  EBSVolume/a.etcd-main.useast1.dev.pykey.com
  	AvailabilityZone    	us-east-1a
  	Encrypted           	false
  	SizeGB              	20
  	Tags                	{k8s.io/etcd/main: a/a, k8s.io/role/master: 1, kubernetes.io/cluster/useast1.dev.pykey.com: owned, Name: a.etcd-main.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com}
  	VolumeType          	gp2

  IAMInstanceProfile/masters.useast1.dev.pykey.com
  	Shared              	false

  IAMInstanceProfile/nodes.useast1.dev.pykey.com
  	Shared              	false

  IAMInstanceProfileRole/masters.useast1.dev.pykey.com
  	InstanceProfile     	name:masters.useast1.dev.pykey.com id:masters.useast1.dev.pykey.com
  	Role                	name:masters.useast1.dev.pykey.com

  IAMInstanceProfileRole/nodes.useast1.dev.pykey.com
  	InstanceProfile     	name:nodes.useast1.dev.pykey.com id:nodes.useast1.dev.pykey.com
  	Role                	name:nodes.useast1.dev.pykey.com

  IAMRole/masters.useast1.dev.pykey.com
  	ExportWithID        	masters

  IAMRole/nodes.useast1.dev.pykey.com
  	ExportWithID        	nodes

  IAMRolePolicy/master-policyoverride
  	Role                	name:masters.useast1.dev.pykey.com
  	Managed             	true

  IAMRolePolicy/masters.useast1.dev.pykey.com
  	Role                	name:masters.useast1.dev.pykey.com
  	Managed             	false

  IAMRolePolicy/node-policyoverride
  	Role                	name:nodes.useast1.dev.pykey.com
  	Managed             	true

  IAMRolePolicy/nodes.useast1.dev.pykey.com
  	Role                	name:nodes.useast1.dev.pykey.com
  	Managed             	false

  InternetGateway/useast1.dev.pykey.com
  	VPC                 	name:useast1.dev.pykey.com
  	Shared              	false
  	Tags                	{Name: useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  Keypair/apiserver-aggregator
  	Signer              	name:apiserver-aggregator-ca id:cn=apiserver-aggregator-ca
  	Subject             	cn=aggregator
  	Type                	client
  	LegacyFormat        	false

  Keypair/apiserver-aggregator-ca
  	Subject             	cn=apiserver-aggregator-ca
  	Type                	ca
  	LegacyFormat        	false

  Keypair/apiserver-proxy-client
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=apiserver-proxy-client
  	Type                	client
  	LegacyFormat        	false

  Keypair/ca
  	Subject             	cn=kubernetes
  	Type                	ca
  	LegacyFormat        	false

  Keypair/etcd-clients-ca
  	Subject             	cn=etcd-clients-ca
  	Type                	ca
  	LegacyFormat        	false

  Keypair/etcd-manager-ca-events
  	Subject             	cn=etcd-manager-ca-events
  	Type                	ca
  	LegacyFormat        	false

  Keypair/etcd-manager-ca-main
  	Subject             	cn=etcd-manager-ca-main
  	Type                	ca
  	LegacyFormat        	false

  Keypair/etcd-peers-ca-events
  	Subject             	cn=etcd-peers-ca-events
  	Type                	ca
  	LegacyFormat        	false

  Keypair/etcd-peers-ca-main
  	Subject             	cn=etcd-peers-ca-main
  	Type                	ca
  	LegacyFormat        	false

  Keypair/kops
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	o=system:masters,cn=kops
  	Type                	client
  	LegacyFormat        	false

  Keypair/kube-controller-manager
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=system:kube-controller-manager
  	Type                	client
  	LegacyFormat        	false

  Keypair/kube-proxy
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=system:kube-proxy
  	Type                	client
  	LegacyFormat        	false

  Keypair/kube-scheduler
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=system:kube-scheduler
  	Type                	client
  	LegacyFormat        	false

  Keypair/kubecfg
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	o=system:masters,cn=kubecfg
  	Type                	client
  	LegacyFormat        	false

  Keypair/kubelet
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	o=system:nodes,cn=kubelet
  	Type                	client
  	LegacyFormat        	false

  Keypair/kubelet-api
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=kubelet-api
  	Type                	client
  	LegacyFormat        	false

  Keypair/master
  	AlternateNames      	[100.64.0.1, 127.0.0.1, api.internal.useast1.dev.pykey.com, api.useast1.dev.pykey.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]
  	Signer              	name:ca id:cn=kubernetes
  	Subject             	cn=kubernetes-master
  	Type                	server
  	LegacyFormat        	false

  LaunchConfiguration/master-us-east-1a.masters.useast1.dev.pykey.com
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:masters.useast1.dev.pykey.com id:masters.useast1.dev.pykey.com
  	ImageID             	099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
  	InstanceType        	t3.medium
  	RootVolumeDeleteOnTermination	true
  	RootVolumeSize      	64
  	RootVolumeType      	gp2
  	SSHKey              	name:kubernetes.useast1.dev.pykey.com-f4:e6:ca:e1:b3:ee:fc:6c:91:13:3b:a8:f5:36:e4:92 id:kubernetes.useast1.dev.pykey.com-f4:e6:ca:e1:b3:ee:fc:6c:91:13:3b:a8:f5:36:e4:92
  	SecurityGroups      	[name:masters.useast1.dev.pykey.com]
  	SpotPrice           	

  LaunchConfiguration/nodes.useast1.dev.pykey.com
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:nodes.useast1.dev.pykey.com id:nodes.useast1.dev.pykey.com
  	ImageID             	099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
  	InstanceType        	t3.medium
  	RootVolumeDeleteOnTermination	true
  	RootVolumeSize      	128
  	RootVolumeType      	gp2
  	SSHKey              	name:kubernetes.useast1.dev.pykey.com-f4:e6:ca:e1:b3:ee:fc:6c:91:13:3b:a8:f5:36:e4:92 id:kubernetes.useast1.dev.pykey.com-f4:e6:ca:e1:b3:ee:fc:6c:91:13:3b:a8:f5:36:e4:92
  	SecurityGroups      	[name:nodes.useast1.dev.pykey.com]
  	SpotPrice           	

  ManagedFile/etcd-cluster-spec-events
  	Base                	s3://clusters.dev.pykey.com/useast1.dev.pykey.com/backups/etcd/events
  	Location            	/control/etcd-cluster-spec

  ManagedFile/etcd-cluster-spec-main
  	Base                	s3://clusters.dev.pykey.com/useast1.dev.pykey.com/backups/etcd/main
  	Location            	/control/etcd-cluster-spec

  ManagedFile/manifests-etcdmanager-events
  	Location            	manifests/etcd/events.yaml

  ManagedFile/manifests-etcdmanager-main
  	Location            	manifests/etcd/main.yaml

  ManagedFile/manifests-static-kube-apiserver-healthcheck
  	Location            	manifests/static/kube-apiserver-healthcheck.yaml

  ManagedFile/useast1.dev.pykey.com-addons-bootstrap
  	Location            	addons/bootstrap-channel.yaml

  ManagedFile/useast1.dev.pykey.com-addons-core.addons.k8s.io
  	Location            	addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/useast1.dev.pykey.com-addons-dns-controller.addons.k8s.io-k8s-1.12
  	Location            	addons/dns-controller.addons.k8s.io/k8s-1.12.yaml

  ManagedFile/useast1.dev.pykey.com-addons-dns-controller.addons.k8s.io-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/useast1.dev.pykey.com-addons-kops-controller.addons.k8s.io-k8s-1.16
  	Location            	addons/kops-controller.addons.k8s.io/k8s-1.16.yaml

  ManagedFile/useast1.dev.pykey.com-addons-kube-dns.addons.k8s.io-k8s-1.12
  	Location            	addons/kube-dns.addons.k8s.io/k8s-1.12.yaml

  ManagedFile/useast1.dev.pykey.com-addons-kube-dns.addons.k8s.io-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/useast1.dev.pykey.com-addons-kubelet-api.rbac.addons.k8s.io-k8s-1.9
  	Location            	addons/kubelet-api.rbac.addons.k8s.io/k8s-1.9.yaml

  ManagedFile/useast1.dev.pykey.com-addons-limit-range.addons.k8s.io
  	Location            	addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/useast1.dev.pykey.com-addons-rbac.addons.k8s.io-k8s-1.8
  	Location            	addons/rbac.addons.k8s.io/k8s-1.8.yaml

  ManagedFile/useast1.dev.pykey.com-addons-storage-aws.addons.k8s.io-v1.15.0
  	Location            	addons/storage-aws.addons.k8s.io/v1.15.0.yaml

  ManagedFile/useast1.dev.pykey.com-addons-storage-aws.addons.k8s.io-v1.7.0
  	Location            	addons/storage-aws.addons.k8s.io/v1.7.0.yaml

  Route/0.0.0.0/0
  	RouteTable          	name:useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	InternetGateway     	name:useast1.dev.pykey.com

  RouteTable/useast1.dev.pykey.com
  	VPC                 	name:useast1.dev.pykey.com
  	Shared              	false
  	Tags                	{kubernetes.io/cluster/useast1.dev.pykey.com: owned, kubernetes.io/kops/role: public, Name: useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com}

  RouteTableAssociation/us-east-1a.useast1.dev.pykey.com
  	RouteTable          	name:useast1.dev.pykey.com
  	Subnet              	name:us-east-1a.useast1.dev.pykey.com

  RouteTableAssociation/us-east-1b.useast1.dev.pykey.com
  	RouteTable          	name:useast1.dev.pykey.com
  	Subnet              	name:us-east-1b.useast1.dev.pykey.com

  RouteTableAssociation/us-east-1c.useast1.dev.pykey.com
  	RouteTable          	name:useast1.dev.pykey.com
  	Subnet              	name:us-east-1c.useast1.dev.pykey.com

  SSHKey/kubernetes.useast1.dev.pykey.com-f4:e6:ca:e1:b3:ee:fc:6c:91:13:3b:a8:f5:36:e4:92
  	KeyFingerprint      	99:18:2b:e4:dd:e8:4c:87:fb:b7:4d:ba:94:74:3a:bb

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system:controller_manager

  Secret/system:dns

  Secret/system:logging

  Secret/system:monitoring

  Secret/system:scheduler

  SecurityGroup/masters.useast1.dev.pykey.com
  	Description         	Security group for masters
  	VPC                 	name:useast1.dev.pykey.com
  	RemoveExtraRules    	[port=22, port=443, port=2380, port=2381, port=4001, port=4002, port=4789, port=179]
  	Tags                	{Name: masters.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  SecurityGroup/nodes.useast1.dev.pykey.com
  	Description         	Security group for nodes
  	VPC                 	name:useast1.dev.pykey.com
  	RemoveExtraRules    	[port=22]
  	Tags                	{Name: nodes.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  SecurityGroupRule/all-master-to-master
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	SourceGroup         	name:masters.useast1.dev.pykey.com

  SecurityGroupRule/all-master-to-node
  	SecurityGroup       	name:nodes.useast1.dev.pykey.com
  	SourceGroup         	name:masters.useast1.dev.pykey.com

  SecurityGroupRule/all-node-to-node
  	SecurityGroup       	name:nodes.useast1.dev.pykey.com
  	SourceGroup         	name:nodes.useast1.dev.pykey.com

  SecurityGroupRule/https-external-to-master-0.0.0.0/0
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443

  SecurityGroupRule/master-egress
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-egress
  	SecurityGroup       	name:nodes.useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-to-master-tcp-1-2379
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	Protocol            	tcp
  	FromPort            	1
  	ToPort              	2379
  	SourceGroup         	name:nodes.useast1.dev.pykey.com

  SecurityGroupRule/node-to-master-tcp-2382-4000
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	Protocol            	tcp
  	FromPort            	2382
  	ToPort              	4000
  	SourceGroup         	name:nodes.useast1.dev.pykey.com

  SecurityGroupRule/node-to-master-tcp-4003-65535
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	Protocol            	tcp
  	FromPort            	4003
  	ToPort              	65535
  	SourceGroup         	name:nodes.useast1.dev.pykey.com

  SecurityGroupRule/node-to-master-udp-1-65535
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	Protocol            	udp
  	FromPort            	1
  	ToPort              	65535
  	SourceGroup         	name:nodes.useast1.dev.pykey.com

  SecurityGroupRule/ssh-external-to-master-0.0.0.0/0
  	SecurityGroup       	name:masters.useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  SecurityGroupRule/ssh-external-to-node-0.0.0.0/0
  	SecurityGroup       	name:nodes.useast1.dev.pykey.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  Subnet/us-east-1a.useast1.dev.pykey.com
  	ShortName           	us-east-1a
  	VPC                 	name:useast1.dev.pykey.com
  	AvailabilityZone    	us-east-1a
  	CIDR                	172.20.32.0/19
  	Shared              	false
  	Tags                	{SubnetType: Public, kubernetes.io/role/elb: 1, Name: us-east-1a.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  Subnet/us-east-1b.useast1.dev.pykey.com
  	ShortName           	us-east-1b
  	VPC                 	name:useast1.dev.pykey.com
  	AvailabilityZone    	us-east-1b
  	CIDR                	172.20.64.0/19
  	Shared              	false
  	Tags                	{kubernetes.io/role/elb: 1, Name: us-east-1b.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned, SubnetType: Public}

  Subnet/us-east-1c.useast1.dev.pykey.com
  	ShortName           	us-east-1c
  	VPC                 	name:useast1.dev.pykey.com
  	AvailabilityZone    	us-east-1c
  	CIDR                	172.20.96.0/19
  	Shared              	false
  	Tags                	{Name: us-east-1c.useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned, SubnetType: Public, kubernetes.io/role/elb: 1}

  VPC/useast1.dev.pykey.com
  	CIDR                	172.20.0.0/16
  	EnableDNSHostnames  	true
  	EnableDNSSupport    	true
  	Shared              	false
  	Tags                	{Name: useast1.dev.pykey.com, KubernetesCluster: useast1.dev.pykey.com, kubernetes.io/cluster/useast1.dev.pykey.com: owned}

  VPCDHCPOptionsAssociation/useast1.dev.pykey.com
  	VPC                 	name:useast1.dev.pykey.com
  	DHCPOptions         	name:useast1.dev.pykey.com

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster useast1.dev.pykey.com
 * edit your node instance group: kops edit ig --name=useast1.dev.pykey.com nodes
 * edit your master instance group: kops edit ig --name=useast1.dev.pykey.com master-us-east-1a

Finally configure your cluster with: kops update cluster --name useast1.dev.pykey.com --yes


The kops create cluster command created the configuration for our cluster. Note that it only created the configuration; it did not actually create the cloud resources - we'll do that in the next step with kops update cluster. This gives us an opportunity to review the configuration or change it.

It prints commands we can use to explore further:

  1. list clusters with: kops get cluster:
    $ kops get cluster --state s3://clusters.dev.pykey.com
    NAME			CLOUD	ZONES
    useast1.dev.pykey.com	aws	us-east-1a,us-east-1b,us-east-1c
    

    Or after setting export KOPS_STATE_STORE=s3://clusters.dev.pykey.com:

    $ kops get cluster
    NAME			CLOUD	ZONES
    useast1.dev.pykey.com	aws	us-east-1a,us-east-1b,us-east-1c
    

  2. edit this cluster with: kops edit cluster useast1.dev.pykey.com:
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: kops.k8s.io/v1alpha2
    kind: Cluster
    metadata:
      creationTimestamp: "2020-09-03T21:24:06Z"
      name: useast1.dev.pykey.com
    spec:
      api:
        dns: {}
      authorization:
        rbac: {}
      channel: stable
      cloudProvider: aws
      configBase: s3://clusters.dev.pykey.com/useast1.dev.pykey.com
      containerRuntime: docker
      etcdClusters:
      - cpuRequest: 200m
        etcdMembers:
        - instanceGroup: master-us-east-1a
          name: a
        memoryRequest: 100Mi
        name: main
      - cpuRequest: 100m
        etcdMembers:
        - instanceGroup: master-us-east-1a
          name: a
        memoryRequest: 100Mi
        name: events
      iam:
        allowContainerRegistry: true
        legacy: false
      kubelet:
        anonymousAuth: false
      kubernetesApiAccess:
      - 0.0.0.0/0
      kubernetesVersion: 1.18.8
      masterInternalName: api.internal.useast1.dev.pykey.com
      masterPublicName: api.useast1.dev.pykey.com
      networkCIDR: 172.20.0.0/16
      networking:
        kubenet: {}
      nonMasqueradeCIDR: 100.64.0.0/10
      sshAccess:
      - 0.0.0.0/0
      subnets:
      - cidr: 172.20.32.0/19
        name: us-east-1a
        type: Public
        zone: us-east-1a
      - cidr: 172.20.64.0/19
        name: us-east-1b
        type: Public
        zone: us-east-1b
      - cidr: 172.20.96.0/19
        name: us-east-1c
        type: Public
        zone: us-east-1c
      topology:
        dns:
          type: Public
        masters: public
        nodes: public
    

  3. edit node instance group: kops edit ig --name=useast1.dev.pykey.com nodes:
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      creationTimestamp: "2020-09-01T23:17:45Z"
      labels:
        kops.k8s.io/cluster: useast1.dev.pykey.com
      name: nodes
    spec:
      image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
      machineType: t3.medium
      maxSize: 2
      minSize: 2
      nodeLabels:
        kops.k8s.io/instancegroup: nodes
      role: Node
      subnets:
      - us-east-1a
      - us-east-1b
      - us-east-1c      
    

    Modify it to use smaller worker instances (we keep minSize/maxSize at 2, since we want two worker nodes):

      machineType: t3.medium
    =>
      machineType: t2.nano
    

  4. edit master instance group: kops edit ig --name=useast1.dev.pykey.com master-us-east-1a:
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      creationTimestamp: "2020-09-03T21:24:06Z"
      labels:
        kops.k8s.io/cluster: useast1.dev.pykey.com
      name: master-us-east-1a
    spec:
      image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
      machineType: t3.medium
      maxSize: 1
      minSize: 1
      nodeLabels:
        kops.k8s.io/instancegroup: master-us-east-1a
      role: Master
      subnets:
      - us-east-1a 
    


  5. Finally configure cluster with: kops update cluster --name useast1.dev.pykey.com --yes
    See next section.

If this is the first time using kops, do spend a few minutes trying those out! An instance group is a set of instances which will be registered as Kubernetes nodes. On AWS this is implemented via Auto Scaling groups. We can also list what kops has created so far, as sketched below.
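
A minimal sketch of that check, using the cluster name and state store from this post:

# list the instance groups (backed by Auto Scaling groups) for our cluster
$ kops get ig --name useast1.dev.pykey.com

# show the full cluster spec kops has stored in the state store
$ kops get cluster useast1.dev.pykey.com -o yaml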







Create the cluster in AWS

Run kops update cluster to preview what it is going to do to our cluster in AWS:

$ kops update cluster useast1.dev.pykey.com
...

Now we'll use the same command, kops update cluster but with --yes to actually create the cluster:

$ kops update cluster useast1.dev.pykey.com --yes
I0903 14:44:07.068796   49581 executor.go:103] Tasks: 0 done / 91 total; 43 can run
I0903 14:44:08.166320   49581 vfs_castore.go:590] Issuing new certificate: "ca"
I0903 14:44:08.232347   49581 vfs_castore.go:590] Issuing new certificate: "apiserver-aggregator-ca"
I0903 14:44:08.345192   49581 vfs_castore.go:590] Issuing new certificate: "etcd-manager-ca-events"
I0903 14:44:08.417344   49581 vfs_castore.go:590] Issuing new certificate: "etcd-manager-ca-main"
I0903 14:44:08.483010   49581 vfs_castore.go:590] Issuing new certificate: "etcd-clients-ca"
I0903 14:44:08.638627   49581 vfs_castore.go:590] Issuing new certificate: "etcd-peers-ca-main"
I0903 14:44:08.725188   49581 vfs_castore.go:590] Issuing new certificate: "etcd-peers-ca-events"
I0903 14:44:10.040967   49581 executor.go:103] Tasks: 43 done / 91 total; 28 can run
I0903 14:44:11.128125   49581 vfs_castore.go:590] Issuing new certificate: "kube-controller-manager"
I0903 14:44:11.263449   49581 vfs_castore.go:590] Issuing new certificate: "kubelet"
I0903 14:44:11.308451   49581 vfs_castore.go:590] Issuing new certificate: "master"
I0903 14:44:11.571119   49581 vfs_castore.go:590] Issuing new certificate: "kube-proxy"
I0903 14:44:11.590496   49581 vfs_castore.go:590] Issuing new certificate: "kube-scheduler"
I0903 14:44:11.687091   49581 vfs_castore.go:590] Issuing new certificate: "kops"
I0903 14:44:11.910723   49581 vfs_castore.go:590] Issuing new certificate: "apiserver-proxy-client"
I0903 14:44:11.937927   49581 vfs_castore.go:590] Issuing new certificate: "kubelet-api"
I0903 14:44:11.988192   49581 vfs_castore.go:590] Issuing new certificate: "apiserver-aggregator"
I0903 14:44:11.988308   49581 vfs_castore.go:590] Issuing new certificate: "kubecfg"
I0903 14:44:13.140444   49581 executor.go:103] Tasks: 71 done / 91 total; 18 can run
I0903 14:44:14.250941   49581 launchconfiguration.go:375] waiting for IAM instance profile "nodes.useast1.dev.pykey.com" to be ready
I0903 14:44:14.296649   49581 launchconfiguration.go:375] waiting for IAM instance profile "masters.useast1.dev.pykey.com" to be ready
I0903 14:44:25.409587   49581 executor.go:103] Tasks: 89 done / 91 total; 2 can run
I0903 14:44:26.361856   49581 executor.go:103] Tasks: 91 done / 91 total; 0 can run
I0903 14:44:26.364272   49581 dns.go:156] Pre-creating DNS records
I0903 14:44:27.953961   49581 update_cluster.go:308] Exporting kubecfg for cluster
kops has set your kubectl context to useast1.dev.pykey.com

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa ubuntu@api.useast1.dev.pykey.com
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons. 

This may take a few seconds to run, but our cluster will likely take a few minutes to actually be ready. kops update cluster is the command we'll use whenever we change the configuration of our cluster: it applies the changes we have made to the configuration, reconfiguring AWS or Kubernetes as needed.

For example, after we kops edit ig nodes, we run kops update cluster --yes to apply our configuration, and sometimes we also have to run kops rolling-update cluster to roll the configuration out to the instances immediately, as sketched below.
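
A minimal sketch of that sequence, using the instance group and cluster names from this post:

$ kops edit ig nodes --name useast1.dev.pykey.com         # change machineType, minSize, etc.
$ kops update cluster useast1.dev.pykey.com --yes         # apply the new configuration
$ kops rolling-update cluster useast1.dev.pykey.com --yes # replace instances so the change takes effect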






Validate the cluster

Now that our cluster is ready, let's check:

$ kops validate cluster
Using cluster from kubectl context: useast1.dev.pykey.com

Validating cluster useast1.dev.pykey.com

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	t3.medium	1	1	us-east-1a
nodes			Node	t2.nano		2	2	us-east-1a,us-east-1b,us-east-1c

NODE STATUS
NAME				ROLE	READY
ip-172-20-42-41.ec2.internal	master	True
ip-172-20-63-75.ec2.internal	node	True
ip-172-20-97-84.ec2.internal	node	True

Your cluster useast1.dev.pykey.com is ready


list nodes:

$ kubectl get nodes 
NAME                           STATUS   ROLES    AGE     VERSION
ip-172-20-42-41.ec2.internal   Ready    master   4m14s   v1.18.8
ip-172-20-63-75.ec2.internal   Ready    node     2m48s   v1.18.8
ip-172-20-97-84.ec2.internal   Ready    node     2m52s   v1.18.8

$ kubectl get nodes --show-labels
NAME                           STATUS   ROLES    AGE     VERSION   LABELS
ip-172-20-42-41.ec2.internal   Ready    master   4m35s   v1.18.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kops.k8s.io/instancegroup=master-us-east-1a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-20-42-41.ec2.internal,kubernetes.io/os=linux,kubernetes.io/role=master,node-role.kubernetes.io/master=,node.kubernetes.io/instance-type=t3.medium,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1a
ip-172-20-63-75.ec2.internal   Ready    node     3m9s    v1.18.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.nano,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kops.k8s.io/instancegroup=nodes,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-20-63-75.ec2.internal,kubernetes.io/os=linux,kubernetes.io/role=node,node-role.kubernetes.io/node=,node.kubernetes.io/instance-type=t2.nano,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1a
ip-172-20-97-84.ec2.internal   Ready    node     3m13s   v1.18.8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.nano,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1c,kops.k8s.io/instancegroup=nodes,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-20-97-84.ec2.internal,kubernetes.io/os=linux,kubernetes.io/role=node,node-role.kubernetes.io/node=,node.kubernetes.io/instance-type=t2.nano,topology.kubernetes.io/region=us-east-1,topology.kubernetes.io/zone=us-east-1c


ssh to the master:

$ ssh -i ~/.ssh/id_rsa ubuntu@api.useast1.dev.pykey.com
The authenticity of host 'api.useast1.dev.pykey.com (3.218.244.1)' can't be established.
ECDSA key fingerprint is SHA256:96RKZe6hI555FhPZloK0O5B21+7lO2nDY7YshB7UAcE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'api.useast1.dev.pykey.com,3.218.244.1' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1018-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Sep  3 21:56:13 UTC 2020

  System load:  0.08              Processes:                160
  Usage of /:   6.4% of 61.98GB   Users logged in:          0
  Memory usage: 24%               IPv4 address for docker0: 172.17.0.1
  Swap usage:   0%                IPv4 address for ens5:    172.20.42.41

55 updates can be installed immediately.
29 of these updates are security updates.
To see these additional updates run: apt list --upgradable



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@ip-172-20-42-41:~$ 

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                   READY   STATUS    RESTARTS   AGE
kube-system   dns-controller-78f77977f-bxtsg                         1/1     Running   0          8m57s
kube-system   etcd-manager-events-ip-172-20-42-41.ec2.internal       1/1     Running   0          8m10s
kube-system   etcd-manager-main-ip-172-20-42-41.ec2.internal         1/1     Running   0          8m29s
kube-system   kops-controller-9ncgv                                  1/1     Running   0          8m1s
kube-system   kube-apiserver-ip-172-20-42-41.ec2.internal            2/2     Running   0          8m12s
kube-system   kube-controller-manager-ip-172-20-42-41.ec2.internal   1/1     Running   0          7m34s
kube-system   kube-dns-64f86fb8dd-9p8p4                              3/3     Running   0          7m22s
kube-system   kube-dns-64f86fb8dd-9q22z                              3/3     Running   0          8m57s
kube-system   kube-dns-autoscaler-cd7778b7b-xsxsx                    1/1     Running   0          8m57s
kube-system   kube-proxy-ip-172-20-42-41.ec2.internal                1/1     Running   0          7m8s
kube-system   kube-proxy-ip-172-20-63-75.ec2.internal                1/1     Running   0          6m56s
kube-system   kube-proxy-ip-172-20-97-84.ec2.internal                1/1     Running   0          6m50s
kube-system   kube-scheduler-ip-172-20-42-41.ec2.internal            1/1     Running   0          8m19s  


We now have three instances - 1 master and 2 nodes:

master-nodes-instances.png

Cluster info at S3:

s3-clusters-2.png

Autoscaling Group:

AutoScalingGroup.png




Deploy the dashboard

Dashboard is a web interface for Kubernetes.

Let's check if the dashboard is deployed:

$ kubectl get pods --all-namespaces | grep dashboard

Looks like the dashboard is missing, so we need to install the latest stable release by running the following command:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created    

$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system            dns-controller-78f77977f-6t6fz                          1/1     Running   0          72m
kube-system            etcd-manager-events-ip-172-20-54-121.ec2.internal       1/1     Running   0          72m
kube-system            etcd-manager-main-ip-172-20-54-121.ec2.internal         1/1     Running   0          72m
kube-system            kops-controller-pgpjc                                   1/1     Running   0          72m
kube-system            kube-apiserver-ip-172-20-54-121.ec2.internal            2/2     Running   1          72m
kube-system            kube-controller-manager-ip-172-20-54-121.ec2.internal   1/1     Running   0          72m
kube-system            kube-dns-64f86fb8dd-7zsk8                               3/3     Running   0          71m
kube-system            kube-dns-64f86fb8dd-zw785                               3/3     Running   0          72m
kube-system            kube-dns-autoscaler-cd7778b7b-btv4j                     1/1     Running   0          72m
kube-system            kube-proxy-ip-172-20-49-53.ec2.internal                 1/1     Running   0          71m
kube-system            kube-proxy-ip-172-20-54-121.ec2.internal                1/1     Running   0          72m
kube-system            kube-scheduler-ip-172-20-54-121.ec2.internal            1/1     Running   0          72m
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-l68cf              1/1     Running   0          12m
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-c58gh                   1/1     Running   0          12m




Create a kops-admin service account and cluster role binding

By default, the Kubernetes Dashboard user has limited permissions. In this section, we create a kops-admin service account and cluster role binding that we can use to securely connect to the dashboard with admin-level permissions.

  1. Create a file called kops-admin-service-account.yaml:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kops-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kops-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kops-admin
      namespace: kube-system    
    

    This manifest defines a service account and cluster role binding called kops-admin (we can name it whatever we want).

  2. Apply the service account and cluster role binding to our cluster:
    $ kubectl apply -f kops-admin-service-account.yaml
    serviceaccount/kops-admin created
    clusterrolebinding.rbac.authorization.k8s.io/kops-admin created    
    
    
    $ kubectl get serviceaccounts --all-namespaces
    NAMESPACE              NAME                                 SECRETS   AGE
    default                default                              1         28m
    kube-node-lease        default                              1         28m
    kube-public            default                              1         28m
    kube-system            attachdetach-controller              1         28m
    kube-system            aws-cloud-provider                   1         28m
    kube-system            certificate-controller               1         28m
    kube-system            clusterrole-aggregation-controller   1         28m
    kube-system            cronjob-controller                   1         28m
    kube-system            daemon-set-controller                1         28m
    kube-system            default                              1         28m
    kube-system            deployment-controller                1         28m
    kube-system            disruption-controller                1         28m
    kube-system            dns-controller                       1         28m
    kube-system            endpoint-controller                  1         28m
    kube-system            endpointslice-controller             1         28m
    kube-system            expand-controller                    1         28m
    kube-system            generic-garbage-collector            1         28m
    kube-system            horizontal-pod-autoscaler            1         28m
    kube-system            job-controller                       1         28m
    kube-system            kops-admin                           1         2m23s
    kube-system            kops-controller                      1         28m
    kube-system            kube-dns                             1         28m
    kube-system            kube-dns-autoscaler                  1         28m
    kube-system            kube-proxy                           1         28m
    kube-system            namespace-controller                 1         28m
    kube-system            node-controller                      1         28m
    kube-system            persistent-volume-binder             1         28m
    kube-system            pod-garbage-collector                1         28m
    kube-system            pv-protection-controller             1         28m
    kube-system            pvc-protection-controller            1         28m
    kube-system            replicaset-controller                1         28m
    kube-system            replication-controller               1         28m
    kube-system            resourcequota-controller             1         28m
    kube-system            route-controller                     1         28m
    kube-system            service-account-controller           1         28m
    kube-system            service-controller                   1         28m
    kube-system            statefulset-controller               1         28m
    kube-system            ttl-controller                       1         28m
    kubernetes-dashboard   default                              1         10m
    kubernetes-dashboard   kubernetes-dashboard                 1         10m
    




Connect to the dashboard

Now that the Kubernetes Dashboard is deployed to our cluster, and we have an administrator service account that we can use to view and control our cluster, we can connect to the dashboard with that service account.

Retrieve an authentication token for the kops-admin service account and copy the token value from the output; we use this token to connect to the dashboard:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kops-admin | awk '{print $1}')
Name:         kops-admin-token-ffbm6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kops-admin
              kubernetes.io/service-account.uid: 8ea467e6-fe18-40de-a288-ccb9a07bdd24

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1042 bytes
namespace:  11 bytes
token:      eyJhbGc...MImg    

Start the kubectl proxy:

$ kubectl proxy


To access the dashboard endpoint, open the following link with a web browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login


Choose Token, paste the token value from the previous command's output into the Token field, and choose SIGN IN:

dashboard-token-input.png

After we have connected to our Kubernetes Dashboard, we can view and control our cluster using the kops-admin service account.

nodes-on-dashboard-2.png
kops-namespaces.png



Deploy a simple service via NodePort

At this point, we haven't deployed any apps:

$ kubectl get pods
No resources found in default namespace.

Now, we may want to deploy a simple service using this deployment (hello.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      run: load-balancer-example
  replicas: 2
  template:
    metadata:
      labels:
        run: load-balancer-example
    spec:
      containers:
        - name: hello-world
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
              protocol: TCP    

Run a Hello World application in our cluster. Create the application deployment:

$ kubectl apply -f hello.yaml
deployment.apps/hello-world created   

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-86d6c6f84d-nkm2d   1/1     Running   0          2m42s
hello-world-86d6c6f84d-qw95v   1/1     Running   0          2m42s

$ kubectl get deployment
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   2/2     2            2           2m50s

$ kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
hello-world-86d6c6f84d   2         2         2       4m46s

The command created a Deployment and an associated ReplicaSet. The ReplicaSet has two Pods each of which runs the Hello World application.


Create a Service object that exposes the deployment:

$ kubectl expose deployment hello-world --type=NodePort --name=example-service
service/example-service exposed

$ kubectl get services -o wide
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
example-service   NodePort    100.71.100.203   <none>        8080:32438/TCP   49s     run=load-balancer-example
kubernetes        ClusterIP   100.64.0.1       <none>        443/TCP          5h27m   <none>

List the pods that are running the Hello World application:

$ kubectl get pods --selector="run=load-balancer-example" --output=wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE                           NOMINATED NODE   READINESS GATES
hello-world-86d6c6f84d-9m6gc   1/1     Running   0          10m     100.96.1.3   ip-172-20-97-84.ec2.internal   <none>           <none>
hello-world-86d6c6f84d-zw287   1/1     Running   0          10m     100.96.2.4   ip-172-20-63-75.ec2.internal   <none>           <none>    

Get the public IP address of one of our nodes that is running a Hello World pod. Use the node address and node port to access the Hello World application:

$ curl http://<public-node-ip>:<node-port>
public_IP_of_Node_2.png
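
Instead of looking the address up in the AWS console, a minimal sketch for pulling the node public IPs and the assigned node port with kubectl (the jsonpath expression is standard kubectl; example-service is the service created above):

# EXTERNAL-IP column shows the public IP of each node
$ kubectl get nodes -o wide

# the NodePort assigned to example-service (32438 in the output above)
$ kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}'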


After modifying the security group inbound rules to allow TCP traffic to the node port, we get the following page:

Hello-Kubernetes-App-NodePort-2.png

We can see the pods/services/deployments/replicasets on Web-UI:

Hello-world-on-UI.png Hello-world-on-UI-2.png

To delete the Service, enter this command:

$ kubectl delete services example-service
service "example-service" deleted

To delete the Deployment, the ReplicaSet, and the Pods that are running the Hello World application, enter this command:

$ kubectl delete -f hello.yaml 
deployment.apps "hello-world" deleted




Deploy a simple service via LoadBalancer

nginx-deploy.yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx

  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: "nginxdemos/hello"
---
kind: Service
apiVersion: v1

metadata:
  name: nginx-elb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30008    

Deploy:

$ kubectl apply -f nginx-deploy.yaml 
deployment.apps/nginx created
service/nginx-elb created

$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
nginx-66d58cb8c4-8mdr9   1/1     Running   0          43m   100.96.2.5   ip-172-20-63-75.ec2.internal   <none>           <none>
nginx-66d58cb8c4-nxjfw   1/1     Running   0          44m   100.96.1.4   ip-172-20-97-84.ec2.internal   <none>           <none>

$ kubectl get svc 
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
kubernetes   ClusterIP      100.64.0.1      <none>                                                                    443/TCP        105m
nginx-elb    LoadBalancer   100.67.53.146   aa09b8bc3ffb4483ebb29442642c1a2b-1326065343.us-east-1.elb.amazonaws.com   80:30008/TCP   2m5s

$ kubectl describe svc nginx-elb 
Name:                     nginx-elb
Namespace:                default
Labels:                   <none>
Annotations:              Selector:  app=nginx
Type:                     LoadBalancer
IP:                       100.67.53.146
LoadBalancer Ingress:     aa09b8bc3ffb4483ebb29442642c1a2b-1326065343.us-east-1.elb.amazonaws.com
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30008/TCP
Endpoints:                100.96.1.4:80,100.96.2.5:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  9m15s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   9m13s  service-controller  Ensured load balancer

In the description of the LoadBalancer service, we see the LoadBalancer Ingress property, which is the DNS name we'll use to connect to our web service. The load balancer exposes port 80 and redirects this traffic to Kubernetes node port 30008; that node then redirects the traffic to the nginx container.



LoadBalancer-AWS.png
LoadBalancer-AWS-listeners.png

To test that the ELB DNS name works, we just use the LoadBalancer Ingress address:

LB-Nginx-1.png
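
Equivalently, we can hit the load balancer from the command line; a minimal sketch using the ELB hostname from the kubectl get svc output above (it may take a minute or two for the ELB to pass its health checks and start serving):

$ curl -I http://aa09b8bc3ffb4483ebb29442642c1a2b-1326065343.us-east-1.elb.amazonaws.com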

Usually we don't use the LoadBalancer Ingress address directly; instead, we create a CNAME record on a domain we control (like dev.pykey.com) that points to the load balancer's DNS name:

LB-DNS-name.png
Route53-elb-1.png
Route53-elb-2.png
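
The console steps shown above can also be scripted with the AWS CLI; a minimal sketch, assuming Z0XXXXXXXXXX is the (hypothetical) hosted zone ID of dev.pykey.com and using the ELB hostname from the service above:

$ aws route53 change-resource-record-sets --hosted-zone-id Z0XXXXXXXXXX --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "kops.dev.pykey.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "aa09b8bc3ffb4483ebb29442642c1a2b-1326065343.us-east-1.elb.amazonaws.com"}]
    }
  }]
}'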


Now that our CNAME for kops.dev.pykey.com is pointing to the LB address, let's try it:

kops-dev-pykey-com-lb-10-96-1-4.png
kops-dev-pykey-com-lb-10-96-2-5.png

We can also see that the load balancing is working by comparing the pod IPs with the server addresses shown in the browser:

$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
nginx-66d58cb8c4-8mdr9   1/1     Running   0          43m   100.96.2.5   ip-172-20-63-75.ec2.internal   <none>           <none>
nginx-66d58cb8c4-nxjfw   1/1     Running   0          44m   100.96.1.4   ip-172-20-97-84.ec2.internal   <none>           <none>    


Cleanup

To delete our cluster:

$ kops delete cluster useast1.dev.pykey.com --yes

Delete S3 bucket:

$ aws s3 rb s3://clusters.dev.pykey.com    


For an Nginx controller / Ingress with a LoadBalancer-type service on AWS, check Docker & Kubernetes - Ingress controller on AWS with Kops.





Refs
  1. Installing Kubernetes with kops
  2. kubernetes/kops
  3. Getting Started with kops on AWS
  4. Install and Set Up kubectl
  5. Tutorial: Deploy the Kubernetes Dashboard (web UI)
  6. Use a Service to Access an Application in a Cluster
  7. https://github.com/nginxinc/NGINX-Demos/tree/master/nginx-hello







