DevOps / Sys Admin Q & A #3 : Linux Systems

















What are the tasks performed by the kernel?

  1. Process scheduling - Linux is a preemptive multitasking operating system. Preemptive means that the kernel's process scheduler determines which processes receive use of the CPU and for how long.

  2. Creation and termination of processes.

  3. Memory management - Linux employs virtual memory management, which has two main advantages: processes are isolated from one another and from the kernel, so that one process can't read or modify the memory of another process or the kernel; and only part of a process needs to be kept in memory, so more processes can be held in RAM at once.

  4. The kernel provides a file system.

  5. Access to devices such as mice, monitors, keyboards, disk and tape drives, and so on.

  6. Provision of a system call application programming interface (API).

  7. Networking
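Much of this kernel-managed state is visible from user space through the /proc pseudo-filesystem; a minimal look, assuming a Linux system:

```shell
uname -sr                  # kernel name and release
head -n 1 /proc/meminfo    # kernel memory accounting (MemTotal)
cat /proc/uptime           # uptime and idle time, in seconds
```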




What is runlevel?

A runlevel is a mode of operation of the OS; each runlevel represents a different system state of a Linux system.

When the Linux system boots, the kernel is initialized, and then enters one (and only one) runlevel. On entering that runlevel, init starts all the services that are associated with it.

In general, when a computer enters runlevel 0, the system shuts down all running processes, unmounts all file systems, and powers off.

When it enters runlevel 6, it reboots.

The intermediate runlevels (1-5) differ in terms of which drives are mounted, and which network services are started. Default runlevels are typically 3, 4, or 5.

Runlevel 1 is reserved for single-user mode, a state where only a single user can log in to the system.

Single-user mode (runlevel 1, or 'S') is sometimes called 'rescue' or 'trouble-shooting' mode.

Generally, few processes are started in single-user mode, so it is a very useful runlevel for diagnostics when a system won't fully boot. Even in the default GRUB menu we will notice a recovery mode option that boots us into runlevel 1.

In other words, runlevels define what tasks can be accomplished in the current state (or runlevel) of a Linux system. Every Linux system supports three basic runlevels, plus one or more runlevels for a normal operation.

Lower run levels are useful for maintenance or emergency repairs, since they usually don't offer any network services at all.



We can check the current runlevel simply by issuing the runlevel command:

$ runlevel
N 2

It shows both the previous runlevel and the current one. If the first output character is 'N', the runlevel has not been changed since the system was booted. We can change the runlevel without rebooting using telinit:

$ sudo telinit 5
 
$ runlevel
2 5

If we use telinit 3 from runlevel 5, we'll immediately lose the GUI and get a shell login prompt!


For systemd, the concept of runlevels is replaced by the term "targets":

runlevel 0 -> poweroff.target
runlevel 1 -> rescue.target
runlevels 2-4 -> multi-user.target
runlevel 5 -> graphical.target
runlevel 6 -> reboot.target


To check the default target on CentOS 7 / RHEL 7 and switch it to another target:

$ systemctl get-default
graphical.target

$ sudo systemctl set-default multi-user.target
Removed /etc/systemd/system/default.target.
Created symlink /etc/systemd/system/default.target, pointing to /usr/lib/systemd/system/multi-user.target.

$ systemctl get-default
multi-user.target


Under the directory /etc/init.d/, we find all the init scripts for the different boot-up services, like apache2, networking, etc.

Depending on which runlevel the computer starts in, different services are started. For example, if we look into /etc/init.d/nginx, we can see when the nginx server runs (runlevels 2, 3, 4, and 5):

### BEGIN INIT INFO
# Provides:       nginx
# Required-Start:    $local_fs $remote_fs $network $syslog $named
# Required-Stop:     $local_fs $remote_fs $network $syslog $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the nginx web server
# Description:       starts nginx using start-stop-daemon
### END INIT INFO

The run levels are defined by files in the file system. All the run level files are found in the /etc directory according to the following table:

/etc/rc0.d              Run level 0
/etc/rc1.d              Run level 1
/etc/rc2.d              Run level 2
/etc/rc3.d              Run level 3
/etc/rc4.d              Run level 4
/etc/rc5.d              Run level 5
/etc/rc6.d              Run level 6

Let's look into two of the directories, rc2.d and rc6.d, on Ubuntu 14:

rc2.d:

$ ls /etc/rc2.d
K08vmware-USBArbitrator  S20nagios-nrpe-server  S50saned
K09apache2               S20puppetmaster        S50vmware-USBArbitrator
K80nginx                 S20puppetqd            S55vmware-workstation-server
README                   S20rabbitmq-server     S70dns-clean
S19postgresql            S20redis_6379          S70pppd-dns
S19vmware                S20redis-server        S92tomcat7
S20apache-htcacheclean   S20rsync               S95elasticsearch
S20fcgiwrap              S20speech-dispatcher   S95kibana
S20jenkins               S20sysstat             S99chef-client
S20jetty8                S20virtualbox          S99grub-common
S20kerneloops            S23ntp                 S99monit
S20memcached             S25vmamqpd             S99ondemand
S20nagios                S50cassandra           S99rc.local

rc6.d:

$ ls /etc/rc6.d
K01monit                      K20jenkins             K20rsync
K02chef-client                K20jetty8              K20speech-dispatcher
K06vmamqpd                    K20kerneloops          K20virtualbox
K06vmware-workstation-server  K20memcached           K21postgresql
K08tomcat7                    K20nagios              K50cassandra
K08vmware                     K20nagios-nrpe-server  README
K09apache2                    K20nginx               S20sendsigs
K10elasticsearch              K20puppetmaster        S30urandom
K10kibana                     K20puppetqd            S31umountnfs.sh
K10unattended-upgrades        K20rabbitmq-server     S40umountfs
K20apache-htcacheclean        K20redis_6379          S60umountroot
K20fcgiwrap                   K20redis-server        S90reboot

Here the first character 'S' means (S)tart and 'K' means (K)ill; in other words, they indicate enabling (S) or disabling (K) a service at that runlevel.

The scripts are executed in alphabetical order, so the two-digit number after the S or K determines the order in which they run.

Note that at runlevel 6 most of the scripts start with 'K', while runlevel 2 executes a couple of kill scripts and lots of start scripts.

Let's look at some of the programs that are actually executed during a run level change:

S20rsync -> ../init.d/rsync

Note that all of the scripts that run during system start-up actually reside in the /etc/init.d/ directory.


The current runlevel can also be listed by typing the command 'who -r':

$ who -r
         run-level 5  2019-11-01 05:53




What is init?

init is the first process that starts in a Linux system after the machine boots and the kernel loads into memory.

It decides how a user process or a system service should load, in what order, and whether it should start automatically.

Every process in Linux has a process ID (PID) and init has a PID of 1. It's the parent of all other processes that subsequently spawn as the system comes online.
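We can confirm this with ps; on sysvinit systems the command name is init, while on systemd systems it shows up as systemd:

```shell
# Show the process with PID 1 (the ancestor of every other process)
ps -p 1 -o pid,comm
```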






/etc/inittab file

The inittab file describes how the init process should set up the system in a given run-level.

As an example, the default run state is 3 as shown below:

id:3:initdefault:

The /etc/rc.d/rc script will then use the specified run level to determine which set of run scripts to execute. Under normal conditions, then, rc will run with "3" as its argument and will run all the scripts in the /etc/rc3.d directory. It will run the kill scripts (those that start with an uppercase "K") first and then the start scripts (those that start with an uppercase "S") using loops like this:

for i in /etc/rc$runlevel.d/K* ; do
    $i stop
done




How to auto start after a crash or reboot

Our running Linux system has a number of background processes executing at any time. These processes (services or daemons) may be native to the operating system (such as sshd), or run as part of an application (such as httpd/apache2).

We want our Linux services to run continuously without failing and start automatically if the system reboots or crashes.

A reboot can happen for many reasons: it can be a planned restart, the last step of a patch update, or the result of unexpected system behavior. A crash is what happens when a process stops unexpectedly or becomes unresponsive to user or application requests.

Most standard applications that we install, such as Nginx or MySQL, will start after a reboot by default, but will NOT restart after a crash by default. They come with their own init scripts in /etc/init.d already.

To have our service start automatically, we need to make sure it has a functional init script located at /etc/init.d/service.

To enable the service, we may want to use the update-rc.d command (or, on a CentOS system, chkconfig):

$ sudo update-rc.d service enable
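A minimal sketch of such an init script, using a hypothetical service name "myservice"; a real script would also invoke start-stop-daemon and track a PID file:

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          myservice
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: example skeleton for /etc/init.d/myservice
### END INIT INFO

# Dispatcher: echoes instead of managing a real daemon
myservice_ctl() {
    case "$1" in
        start)   echo "Starting myservice" ;;
        stop)    echo "Stopping myservice" ;;
        restart) myservice_ctl stop; myservice_ctl start ;;
        *)       echo "Usage: $0 {start|stop|restart}"; return 1 ;;
    esac
}

myservice_ctl start
```

Saved as /etc/init.d/myservice and made executable, it could then be registered with update-rc.d.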




chkconfig

On Ubuntu, chkconfig is no longer available as of 12.10, while on Red Hat it is still available.

So, /usr/sbin/sysv-rc-conf is an alternative option for Ubuntu.

We use the chkconfig (or sysv-rc-conf) command to find out at which runlevels applications run:

$ sudo sysv-rc-conf --list
acpid       
anacron     
apache2      0:off	1:off	2:off	3:off	4:off	5:off	6:off
apparmor     S:on
apport      
avahi-daemon
binfmt-suppo
bluetooth   
brltty       S:on
chef-client  0:off	1:off	2:on	3:on	4:on	5:on	6:off

We can see, for example, that chef-client runs at runlevels 2-5, while apache2 is currently off at every runlevel. To check only the runlevels of apache2, we can use:

$ sysv-rc-conf --list apache2
apache2      0:off	1:off	2:off	3:off	4:off	5:off	6:off

To configure apache2 to start on boot:

$ sudo sysv-rc-conf apache2 on

$ sysv-rc-conf --list apache2
apache2      0:off	1:off	2:on	3:on	4:on	5:on	6:off

The equivalent chkconfig command:

$ sudo chkconfig apache2 on

$ chkconfig --list apache2

If the command is not available, we can install it:

$ sudo apt-get install sysv-rc-conf

To find the services that are on at runlevel 1:

$ sysv-rc-conf --list |grep "1:on"
dns-clean    1:on	2:on	3:on	4:on	5:on
killprocs    1:on
pppd-dns     1:on	2:on	3:on	4:on	5:on
single       1:on

We can also configure runlevels through a simple text UI by running the sysv-rc-conf command without arguments.




How to change time zone?

We want to switch the current timezone (UTC) to Pacific Time. We can do it like this:

$ date
Fri Sep 18 03:03:28 UTC 2015

$ sudo rm /etc/localtime
$ sudo ln -s /usr/share/zoneinfo/US/Pacific /etc/localtime
$ date
Thu Sep 17 20:04:35 PDT 2015

Alternatively, ln -sf overwrites the existing link in one step:

$ sudo ln -sf /usr/share/zoneinfo/US/Pacific /etc/localtime

Another way of setting timezones:

$ sudo timedatectl set-timezone UTC

Or a third way (interactive mode):

$ sudo dpkg-reconfigure tzdata

To list all the timezones:

$ timedatectl list-timezones



IPTables (Linux Firewall)

The firewall decides the fate of packets coming into and going out of the system. iptables is a rule-based firewall and it is pre-installed on most Linux distributions. By default, it runs without any rules, so it allows all traffic. iptables is a user-space front end that tells the kernel's netfilter framework which packets to filter.

To list the current rules, we can use iptables -L:

$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 2532K packets, 360M bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  lxcbr0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53
    0     0 ACCEPT     udp  --  lxcbr0 *       0.0.0.0/0            0.0.0.0/0            udp dpt:53
    0     0 ACCEPT     tcp  --  lxcbr0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:67
    0     0 ACCEPT     udp  --  lxcbr0 *       0.0.0.0/0            0.0.0.0/0            udp dpt:67

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
...      

Chain OUTPUT (policy ACCEPT 2429K packets, 227M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination 

Note that we used the following options: "-L" (list ruleset), "-v" (verbose), and "-n" (display addresses and ports in numeric format).

We have three built-in chains in the filter table:

  1. INPUT : default chain for packets destined for the local system.
  2. OUTPUT : default chain for packets generated by the local system.
  3. FORWARD : default chain for packets routed through the system to another interface.

Another example: for a Node.js application, we may want to redirect port 80 to 3000:

$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000

Note we used -t to select the nat table.

We may want to see the firewall rules:

$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 391 packets, 30731 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 291 packets, 51291 bytes)
 pkts bytes target     prot opt in     out     source               destination     

We don't see any firewall settings. That's because the command iptables -nvL is displaying the contents of the filter table. The rule we added was in the nat table. So, we need to add -t nat to look at the nat table:

$ sudo iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 649 packets, 38562 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   30  1760 REDIRECT   tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 redir ports 3000
    6   360 REDIRECT   tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3000 redir ports 3000

Chain INPUT (policy ACCEPT 676 packets, 40162 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 169 packets, 12849 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 169 packets, 12849 bytes)
 pkts bytes target     prot opt in     out     source               destination     

In the table above, the second rule is unnecessary: it redirects port 3000 to port 3000. How can we delete it?

Let's add the line number in the output:

$ sudo iptables -t nat -L PREROUTING --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination         
1    REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:http redir ports 3000
2    REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:3000 redir ports 3000

We use -D to delete the 2nd rule:

$ sudo iptables -t nat -D PREROUTING 2
$ sudo iptables -t nat -L PREROUTING --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination         
1    REDIRECT   tcp  --  anywhere             anywhere             tcp dpt:http redir ports 3000




Check current memory usage

If we need to see how much memory our system is using at the current moment, issue the following command:

$ free -m

This command will generate output that looks like the following:

             total       used       free     shared    buffers     cached
Mem:          3545       3346        199         57         20        358
-/+ buffers/cache:       2967        578
Swap:         3681       1338       2343

where, for this older output format:

total = used + free

Out of a total 3545 megabytes of memory (RAM), the system reports 3346 megabytes used and only 199 megabytes free. However, about 378 of those used megabytes are buffers and cache; the operating system will "drop" the caches when and if it needs the space, but retains them if there is no other need for the space. Subtracting buffers and cache (the "-/+ buffers/cache" line) shows 2967 megabytes genuinely in use by applications and 578 megabytes effectively available. (Newer versions of free report this as a single "available" column.) It is totally normal for a Linux system to leave old data in RAM until the space is needed, and we should not be alarmed if only a small amount of memory is actually "free."
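On newer kernels (3.14+), /proc/meminfo exposes this estimate directly as MemAvailable; a quick way to read it, assuming a Linux system:

```shell
# MemAvailable estimates how much memory new applications can get
# without swapping (free memory plus reclaimable cache), in kB
awk '/^MemAvailable/ {print $2, "kB available"}' /proc/meminfo
```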





Monitor IO Usage with vmstat (Virtual Memory Statistics)

The vmstat tool provides information about memory, swap utilization, IO wait, and system activity. It is a built-in Linux system monitoring tool and its primary job is measuring a system's usage of virtual memory. It is particularly useful for diagnosing I/O-related issues.

A Linux system can run out of RAM for several reasons, such as demands from its running applications. When this happens, the Linux kernel swaps or pages out programs to a reserved area of hard drive or solid-state drive storage called swap space, which is used as virtual memory when RAM is full. As RAM is freed up, the swapped-out data or code is swapped back into main memory.

When swapping happens, system performance drops drastically because the server's swap I/O speed is much slower than RAM, even if an SSD is used for virtual memory. In addition, when Linux uses virtual memory it spends more of its CPU cycles on managing swapping.

Since virtual memory has a big impact on system performance, vmstat is essential for monitoring it. In addition to monitoring virtual memory paging, vmstat also measures processes, I/O, CPU, and disk scheduling.

vmstat [options] [delay] [count]    

  1. options: vmstat command settings.
  2. delay: the time interval between updates. If no delay is specified, the report runs as an average since the last reboot.
  3. count: the number of updates printed after the given delay interval. If no count is set, the default is an infinite number of updates every x seconds (where x = delay).

So, the following command runs vmstat every second (1), twenty times (20). This gives a pretty good sample of the current state of the system.

The output generated should look like the following:

$ vmstat 1 20
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 1664608 181656 395496    0    0     1     1   67   61  0  0 99  0  0
 0  0      0 1664608 181656 395496    0    0     0     0  197  420  1  1 99  0  0
 0  0      0 1664608 181656 395496    0    0     0     0  203  406  0  1 99  0  0
 0  0      0 1664608 181656 395496    0    0     0     0  173  291  0  0 99  0  0
 1  0      0 1664608 181656 395496    0    0     0     0  263  504  0  1 99  0  0
...

The memory and swap columns provide the same kind of information provided by the free -m command:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           2492         303        1625           0         563        2061
Swap:          1023           0        1023 

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           2.4G        302M        1.6G        844K        563M        2.0G
Swap:          1.0G          0B        1.0G

The most salient information produced by this command is the wa column, which is the final column in most implementations. This field displays the amount of time the CPU spends waiting for IO operations to complete.

If this number is consistently and considerably higher than 0, we might consider taking measures to address the IO usage.

Here are details of the vmstat output:

  1. procs:
    The procs data reports the number of processing jobs waiting to run and allows us to determine if there are processes "blocking" our system from running smoothly. The r column displays the total number of processes waiting for access to the processor.
    The b column displays the total number of processes in a "sleep" state.
    These values are often 0.

  2. memory:
    The information displayed in the memory section provides the same data about memory usage as the command free -m.
    The swpd or "swapped" column reports how much memory has been swapped out to a swap file or disk.
    The free column reports the amount of unallocated memory.
    The buff or "buffers" column reports the amount of memory used for buffers. The cache column reports the amount of memory used as cache that could be reclaimed or swapped to disk if the resources are needed for another task.

  3. swap:
    The swap section reports the rate that memory is sent to or retrieved from the swap system.
    By reporting “swapping” separately from total disk activity, vmstat allows us to determine how much disk activity is related to the swap system.
    The si column reports the amount of memory that is moved from swap to "real" memory per second.
    The so column reports the amount of memory that is moved to swap from "real" memory per second.

  4. i/o:
    The io section reports the amount of input and output activity per second in terms of blocks read and blocks written.
    The bi column reports the number of blocks received, or "blocks in", from a disk per second.
    The bo column reports the number of blocks sent, or "blocks out", to a disk per second.

  5. system:
    The system section reports data that reflects the number of system operations per second.
    The in column reports the number of system interrupts per second, including interrupts from the system clock.
    The cs column reports the number of context switches that the system makes in order to process all tasks.

  6. cpu:
    The cpu section reports on the use of the system's CPU resources.
    The columns in this section always add to 100 and reflect "percentage of available time".
    The us column reports the amount of time that the processor spends on user space tasks, or all non-kernel processes.
    The sy column reports the amount of time that the processor spends on kernel related tasks.
    The id column reports the amount of time that the processor spends idle.
    The wa column reports the amount of time that the processor spends waiting for IO operations to complete before being able to continue processing tasks.
    The st column reports the amount of time that a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor. Basically, the steal time (stolen CPU) counts the amount of time that our virtual machine is ready to run but could not run due to other virtual machines competing for the CPU.
    st should approach zero. Anything above zero means there is some performance degradation. For example, assume we have a machine with 16 physical CPU cores running 10 VMs, and each one has been allocated two virtual CPUs. This means 20 virtual CPUs are competing for 16 physical CPUs -- creating a prime environment for stolen CPU.




Process State Codes

Here are the states of a process:

State Description
D uninterruptible sleep (usually IO)
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped, either by a job control signal or because it is being traced
Z defunct ("zombie") process, terminated but not reaped by its parent
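These state codes appear in the STAT column of ps:

```shell
# PID, state code, and command for every process; most processes sit in
# S (interruptible sleep), and ps itself typically shows up as R (running)
ps -eo pid,stat,comm | head -n 5
```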




htop - Monitor Processes, Memory, and CPU Usage

If we want a more organized and real-time view of the current state of our system, we may want to use a tool called htop. Note that this is not installed by default on most systems.





cron syntax
* * * * *

In order, the asterisks represent:

  1. Minute
  2. Hour
  3. Day of month
  4. Month
  5. Day of week
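For example, a few crontab entries (the script paths here are hypothetical):

```shell
# m  h    dom mon dow  command
30   2    *   *   *    /usr/local/bin/backup.sh      # 02:30 every day
0    */4  *   *   *    /usr/local/bin/healthcheck.sh # every 4 hours, on the hour
15   9    1   *   *    /usr/local/bin/report.sh      # 09:15 on the 1st of the month
0    0    *   *   0    /usr/local/bin/rotate.sh      # midnight every Sunday
```

Entries are edited with crontab -e and listed with crontab -l.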




swappiness

The swappiness parameter controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. Because disks are much slower than RAM, this can lead to slower response times for system and applications if processes are too aggressively moved out of memory.

swappiness can have a value between 0 and 100:

  1. swappiness=0 : Version 3.5 and over: disables swappiness. Prior to 3.5: tells the kernel to avoid swapping processes out of physical memory for as long as possible.
  2. swappiness=1 : Version 3.5 and over: Minimum swappiness without disabling it entirely.
  3. swappiness=100 tells the kernel to aggressively swap processes out of physical memory and move them to swap cache.

Ref: How do I configure swappiness?

We can check its value:

$ cat /proc/sys/vm/swappiness
60

We can set it:

$ sudo sysctl vm.swappiness=10
vm.swappiness = 10

$ cat /proc/sys/vm/swappiness
10
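A value set with sysctl does not survive a reboot. To make it persistent, we can put it in a sysctl configuration file (the file name here is an arbitrary choice):

```shell
# /etc/sysctl.d/99-swappiness.conf -- loaded at boot (requires root to create)
vm.swappiness=10
```

It can then be applied without rebooting via sudo sysctl -p /etc/sysctl.d/99-swappiness.conf.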




Linux disk utilization : iostat

To monitor disk read/write rates of individual disks, we can use iostat. This tool allows us to monitor I/O statistics for each device or partition: we can find out disk utilization and monitor system input/output device loading by observing the time the physical disks are active in relation to their average transfer rates.

To use this tool, we need to install the sysstat package.

To install sysstat on Ubuntu or Debian:

$ sudo apt-get install sysstat

Syntax for disk utilization report looks like this:

iostat -d -x interval count

where:

  1. -d : Display the device utilization report (d == disk)
  2. -x : Display extended statistics including disk utilization
  3. interval : The time period in seconds between two samples. iostat 2 gives data at 2-second intervals.
  4. count : The number of samples. iostat 2 5 gives data at 2-second intervals, 5 times.

$ iostat -d -x 5 3
Linux 3.13.0-40-generic (laptop) 	10/14/2015 	_x86_64_	(2 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               1.75     4.78    6.15    2.13   104.99    45.86    36.45     0.27   32.58   22.74   61.06   3.03   2.51

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     5.20    0.00    7.80     0.00    80.00    20.51     0.14   17.74    0.00   17.74  12.41   9.68

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               4.20     4.40    0.80    2.80    20.00    47.20    37.33     0.11   31.11   76.00   18.29  31.11  11.20

The following values from the iostat output are the major ones:

  1. r/s : The number of read requests per second. Check whether a disk reports consistently high reads.
  2. w/s : The number of write requests per second. Check whether a disk reports consistently high writes.
  3. svctm : The average service time (in milliseconds) for I/O requests that were issued to the device.
  4. %util : Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%.




Linux disk utilization : iotop -o

We need to install iotop:

$ sudo apt-get install iotop

Running iotop without any arguments displays a list of all existing processes regardless of their disk I/O activity. If we want iotop to show only processes that are actually doing disk I/O, run the following instead:

$ sudo iotop -o



What is SELinux?

Security-Enhanced Linux (SELinux) is a mandatory access control (MAC) security mechanism implemented in the kernel: a kernel enhancement that confines programs to a limited set of resources. It was first introduced in CentOS 4.

Traditional access control methods such as file permissions or access control lists (ACLs) are discretionary: users and programs alike are allowed to grant insecure file permissions to others or, conversely, to gain access to parts of the system that should not otherwise be necessary for normal operation, for example the keys in ~/.ssh/.





Special permission I - What is a sticky bit?

The restricted deletion flag, or sticky bit, is a permission bit set on a file or a directory that allows only the file's owner, the directory's owner, or the root user to delete or rename the file.

Without the sticky bit set, any user with write and execute permissions on the directory can rename or delete contained files, regardless of the files' owners. Typically it is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files.

So what is the "t" letter in the output of "ls -ld /tmp"? It is the sticky bit:

$ ls -ld /tmp
drwxrwxrwt 15 root root 24576 Oct 16 17:12 /tmp

We can remove it:

$ sudo chmod -t /tmp

$ ls -ld /tmp
drwxrwxrwx 15 root root 24576 Oct 16 17:12 /tmp

To add it back:

$ sudo chmod +t /tmp

$ ls -ld /tmp
drwxrwxrwt 15 root root 24576 Oct 16 17:12 /tmp
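The same pattern applies to any world-writable shared directory; a small sketch using an arbitrary directory name:

```shell
# Create a shared directory with mode 1777 (rwx for everyone plus the
# sticky bit), mimicking /tmp; "shared_demo" is just an example name
mkdir -p shared_demo
chmod 1777 shared_demo
ls -ld shared_demo    # permissions display as drwxrwxrwt
```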




Special permission II - SUID/SGID

In the previous section we discussed the sticky bit, but there are other special permissions apart from the normal rwx file permissions that we set with the chmod and chown commands.

Set-user identification (SUID) and set-group identification (SGID) allow more granular file/folder management by the Linux administrator.

Let's take a look at the /usr/bin/passwd command. This command, by default, has the SUID permission set:

$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 59640 Mar 22  2019 /usr/bin/passwd

On a Linux system, stored passwords are protected: only someone with root privileges can access the file that contains them. That might sound OK; however, how do those who don't have that access change their own passwords?

Typically, Linux commands and programs run with the same set of permissions as the person who launches the program. When root runs the passwd command to change a password, it runs with root's permissions. That means the passwd command can freely access the stored passwords in the /etc/shadow file.

What would be ideal is a scheme in which anyone on the system could launch the passwd program, but have the passwd program retain root's elevated privileges. This would enable a user to change his/her own password.

So, the SUID bit makes programs and commands run with the permissions of the file owner, rather than the permissions of the person who launches the program.


We usually run the passwd command as a normal user, without 'sudo'. That's fine because we have 'x' permission for both 'group' and 'others'.

So if the file is owned by root and the SUID bit is turned on, the program will run as root, even if we execute it as a regular user.

So, while executing passwd as a normal user, we are allowed to modify our own password thanks to the SUID bit 's' being turned on.


SGID is the same as SUID, but it inherits the group privileges of the file on execution, not the user privileges. Similarly, when we create a file within a directory that has SGID set, the file inherits the group ownership of the directory.


Using the numerical method, we pass a fourth, preceding digit to our chmod command:

SUID = 4
SGID = 2
Sticky = 1    

Here are examples:

no suid/sgid:

$ ls -l a.sh
-rwxr-xr-x 1 k k 0 Mar 26 14:10 a.sh

suid & user's executable bit enabled (lowercase s) (chmod 4755):

$ chmod u+s a.sh
$ ls -l a.sh
-rwsr-xr-x 1 k k 13 Mar 26 14:10 a.sh

suid enabled & executable bit disabled (uppercase S):

$ chmod u-x a.sh
$ ls -l a.sh
-rwSr-xr-x 1 k k 13 Mar 26 14:10 a.sh

sgid & group's executable bit enabled (lowercase s) (chmod 2755):

$ chmod g+s a.sh
$ ls -l a.sh
-rwxr-sr-x 1 k k 13 Mar 26 14:10 a.sh

sgid enabled & executable bit disabled (uppercase S):

$ chmod g-x a.sh
$ ls -l a.sh
-rwxr-Sr-x 1 k k 13 Mar 26 14:10 a.sh

The following are a few of the Linux commands that use the SUID bit to give the command elevated privileges when run by a regular user:

$ ls -l /bin/su
-rwsr-xr-x 1 root root 44664 Mar 22  2019 su

$ ls -l /bin/mount
-rwsr-xr-x 1 root root 43088 Sep 16 18:43 mount

$ ls -l /bin/umount
-rwsr-xr-x 1 root root 26696 Sep 16 18:43 umount

$ ls -l /bin/ping
-rwsr-xr-x 1 root root 44168 May  7  2014 ping

$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 59640 Mar 22  2019 passwd
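The SUID bit can also be inspected programmatically. A minimal sketch (standard library only) that sets mode 4755 on a throwaway temp file and checks the bit:

```python
import os
import stat
import tempfile

# Create a scratch file and give it mode 4755 (SUID + rwxr-xr-x).
# An owner may set SUID on their own file without privileges.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o4755)

mode = os.stat(path).st_mode
print(bool(mode & stat.S_ISUID))   # True: the SUID bit is set
print(stat.filemode(mode))         # -rwsr-xr-x

os.remove(path)
```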




lsof (LiSt Open Files)

The most frequent use of the lsof command is when a disk cannot be unmounted because its files are reported as being in use. With lsof we can identify which files are open and which processes opened them.

In Linux, everything is a file (pipes, sockets, directories, and devices).

$ lsof -i
COMMAND     PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
chrome     4580    k   84u  IPv4 2743073      0t0  TCP laptop:34284->ne1onepush.vip.ne1.yahoo.com:https (ESTABLISHED)

$ lsof -i -n
COMMAND     PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
chrome     4580    k   84u  IPv4 2753410      0t0  TCP 192.168.1.1:34889->98.138.79.73:https (ESTABLISHED)

where:

  1. -i Lists IP sockets.
  2. -n Do not resolve hostnames (no DNS).

To find all processes using a specific port, use the -i option with the port number:

$ lsof -i :8087
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    2208 ubuntu   10u  IPv4  11235      0t0  TCP ip-172-31-10-18.us-west-1.compute.internal:8087 (LISTEN)
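Since lsof output is whitespace-separated columns with a header line, it is easy to post-process in a script. A minimal sketch that parses the sample output above (the sample line is copied from this section):

```python
# Parse lsof-style output: whitespace-separated columns, header first.
sample = """COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    2208 ubuntu   10u  IPv4  11235      0t0  TCP ip-172-31-10-18.us-west-1.compute.internal:8087 (LISTEN)"""

rows = []
for line in sample.splitlines()[1:]:
    parts = line.split()
    # COMMAND, PID, and NAME are the 1st, 2nd, and 9th columns.
    rows.append({"command": parts[0], "pid": int(parts[1]), "name": parts[8]})

print(rows[0]["command"], rows[0]["pid"])  # node 2208
```

For scripting, `lsof -t -i :8087` prints only the matching PIDs, which avoids parsing the columns entirely.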

For more details, visit lsof





top command & troubleshooting

This section is a compiled work from the following sources:

  1. Top
  2. Understanding Linux CPU stats

The top program provides a dynamic real-time view of a running system. It can display system summary information, as well as a list of processes or threads currently being managed by the kernel.

Descriptions for the top display:

%Cpu(s): 10.7 us,  2.9 sy,  0.0 ni, 85.7 id,  0.5 wa,  0.0 hi,  0.2 si,  0.0 st

This line shows CPU state percentages based on the interval since the last refresh.

  1. us, user: % CPU time spent in user space, i.e. time running un-niced user processes.
    Shells, compilers, databases, web servers, and the programs associated with the desktop are all user-space processes. If the processor isn't idle, it is quite normal for the majority of the CPU time to be spent running user-space processes.

  2. sy, system: % CPU time spent in kernel space. This is the amount of time that the CPU spent running the kernel. All processes and system resources are handled by the Linux kernel. When a user-space process needs something from the system, for example when it needs to allocate memory, perform some I/O, or create a child process, the kernel is running. In fact, the scheduler itself, which determines which process runs next, is part of the kernel. The amount of time spent in the kernel should be as low as possible. In this case, just 2.9% of the time given to the different processes was spent in the kernel. This number can peak much higher, especially when there is a lot of I/O happening.

  3. ni, nice: time running niced user processes.
    Niceness is a way to tweak the priority level of a process so that it runs less frequently. A niceness of -20 is the highest priority and 19 is the lowest, so -20 gets the most favorable scheduling and 19 the least. A higher-priority process gets a larger chunk of CPU time than a lower-priority process. By default, processes on Linux are started with a niceness of 0.
    A "niced" process is one with a positive nice value. So if the 'ni' value is high, the CPU is busy with low-priority processes. This indicator is useful when we see high CPU utilization and worry that the load will hurt the system:
    1. High CPU utilization with a high nice value: nothing to worry about; low-priority tasks are doing their job, and important processes will easily get CPU time when they need it. This situation is not a real bottleneck.
    2. High CPU utilization with a low nice value: something to worry about, because the CPU is busy with high-priority processes, so those (and new) processes will have to wait. This situation is a real bottleneck.

  4. id, idle: time spent in the kernel idle handler.
    The id statistic tells us that the processor was idle just over 85.7% of the time during the last sampling period. The total of the user space percentage (us), the niced percentage (ni), and the idle percentage (id) should be close to 100%, which it is in this case. If the CPU is spending more time in the other states, then something is probably wrong and may need troubleshooting.

  5. wa, IO-wait: time waiting for I/O completion.
    I/O operations are slow compared to the speed of a CPU. There are times when the processor has initiated a read or write operation and then has to wait for the result with nothing else to do; it is idle while waiting for an I/O operation to complete. The time the CPU spends in this state is shown by the 'wa' statistic.
    'wa' is the measure of time over a given period that a CPU spent idle because all runnable tasks were waiting for an I/O operation to be fulfilled.

  6. hi: time spent servicing hardware interrupts.
    This is the time spent processing hardware interrupts. Hardware interrupts are generated by hardware devices (network cards, keyboard controllers, external timers, hardware sensors, etc.) when they need to signal something to the CPU (for example, that data has arrived). Since these can happen very frequently, and since they essentially block the current CPU while they are running, kernel hardware interrupt handlers are written to be as fast and simple as possible.
    Hardware interrupts are physical interrupts sent to the CPU from various peripherals like disks and network interfaces. Software interrupts come from processes running on the system. A hardware interrupt actually causes the CPU to stop what it is doing and go handle the interrupt; a software interrupt doesn't occur at the CPU level, but rather at the kernel level.

  7. si time spent servicing software interrupts.
    This represents the time spent in softirqs.

  8. st: time stolen from this VM by the hypervisor.
    This represents "steal time", and it is only relevant in virtualized environments. It is time when the real CPU was not available to the current virtual machine: it was "stolen" from that VM by the hypervisor, either to run another VM or for its own needs.
    This number tells how long the virtual CPU has spent waiting for the hypervisor to service another virtual CPU running on a different virtual machine. Since in the real world these virtual processors share the same physical processor(s), there will be times when the virtual machine wanted to run but the hypervisor scheduled another virtual machine instead.
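The %Cpu(s) summary line described above is easy to parse in a monitoring script. A minimal sketch (plain string handling, using the sample line from this section):

```python
# Parse top's "%Cpu(s)" summary line into a {state: percent} dict.
line = "%Cpu(s): 10.7 us,  2.9 sy,  0.0 ni, 85.7 id,  0.5 wa,  0.0 hi,  0.2 si,  0.0 st"

cpu = {}
for field in line.split(":", 1)[1].split(","):
    value, state = field.split()      # e.g. "10.7", "us"
    cpu[state] = float(value)

print(cpu["id"])                      # 85.7
print(cpu["us"] + cpu["ni"] + cpu["id"])   # us + ni + id should be close to 100
```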

Here are some common troubleshooting scenarios:

  1. High user mode CPU usage - If a system suddenly jumps from having spare CPU cycles to running flat out, the first thing to check is the amount of time the CPU spends running user-space processes. If this is high, it probably means that a process has gone crazy and is eating up all the CPU time.
    Using the top command, we can see which process is to blame, then restart the service or kill the process.

  2. High kernel CPU usage - Sometimes this is acceptable. For example, a program that does lots of console I/O can cause kernel usage to spike. However, if it remains high for long periods of time, it could be an indication that something isn't right.
    A possible cause of such spikes is a problem with a driver/kernel module.

  3. High niced value CPU usage - If the amount of time the CPU spends running processes with a niced priority value jumps, it means that someone has started some CPU-intensive jobs on the system, but has niced them:
    1. If the niceness level is greater than zero, the user has been courteous enough to lower the priority of the process and thereby avoid a CPU overload. There is probably little to be done in this case, other than perhaps finding out who started the process.
    2. But if the niceness level is less than 0, we will need to investigate what is happening and who is responsible, as such a task could easily cripple the responsiveness of the system.

  4. High waiting on I/O - This means that there are some intensive I/O tasks running on the system that don't use much CPU time. If this number is high for anything other than short bursts, then either the I/O performed by the task is very inefficient, the data is being transferred to a very slow device, or there is a potential problem with a hard disk that is taking a long time to process reads & writes.

  5. High interrupt processing - This could be an indication of a broken peripheral that is causing lots of hardware interrupts, or of a process that is issuing lots of software interrupts.

  6. Large stolen time - Basically, this means that the host system running the hypervisor is too busy. If possible, check the other virtual machines running on the hypervisor, and/or migrate our virtual machine to another host.
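The niceness discussed above can also be adjusted from within a program, not just with the nice/renice commands. A minimal sketch using Python's standard library (note that an unprivileged process can only raise its own niceness, never lower it):

```python
import os

# os.nice(increment) adds increment to the process's niceness and
# returns the new value; os.nice(0) simply reads the current value.
before = os.nice(0)
after = os.nice(5)   # be nicer: lower our own scheduling priority
print(before, after)
```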




dmidecode - retrieve hardware information

The dmidecode command reads the system DMI table to display hardware and BIOS information of a Linux box:

$ sudo dmidecode
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.
11 structures occupying 359 bytes.
Table at 0x000EB01F.

Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
...
System Information
...
Chassis Information
...
Processor Information
...
Physical Memory Array
...
System Boot Information
	Status: No errors detected

Handle 0x7F00, DMI type 127, 4 bytes
End Of Table

where SMBIOS stands for System Management BIOS.





Who is logged on and what they are doing - w

w displays information about the users currently on the machine, and their processes. The header shows, in this order, the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes:

$ w
 16:38:10 up 487 days, 10:45,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
ubuntu   pts/0    73.70.219.237    16:29    0.00s  0.04s  0.00s w

The output shows the following entries for each user: login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of their current process.

    The first line provides the same information as the uptime command. It contains the following:

  1. 16:38:10 - The current system time.
  2. up 487 days - The length of time the system has been up.
  3. 1 user - The number of logged-in users.
  4. load average: 0.00, 0.00, 0.00 - The system load averages for the past 1, 5, and 15 minutes.

    The second line is a header; each user entry below it includes the following fields:

  1. USER – The login name of the user.
  2. TTY – The name of the terminal used by the user.
  3. FROM – The host name or IP address from where the user is logged in.
  4. LOGIN@ – The time when the user logged in.
  5. IDLE – The time since the user last interacted with the terminal. Idle time.
  6. JCPU – The time used by all processes attached to the tty.
  7. PCPU – The time used by the user’s current process. The one displayed in the WHAT field.
  8. WHAT – The user’s current process and options/arguments.
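The load averages shown in the header are exposed to programs as well; Python's standard library reads the same numbers shown by w, uptime, and top (a minimal sketch):

```python
import os

# os.getloadavg() returns the system load averages over the
# last 1, 5, and 15 minutes, as shown in the w/uptime header.
one, five, fifteen = os.getloadavg()
print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")
```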




    AWS : DevOps / Sys Admin Q & A (VI) - AWS VPC setup (public/private subnets with NAT)

    AWS : OpenVPN Protocols : PPTP, L2TP/IPsec, and OpenVPN

    AWS : Autoscaling group (ASG)

    AWS : Setting up Autoscaling Alarms and Notifications via CLI and Cloudformation

    AWS : Adding a SSH User Account on Linux Instance

    AWS : Windows Servers - Remote Desktop Connections using RDP

    AWS : Scheduled stopping and starting an instance - python & cron

    AWS : Detecting stopped instance and sending an alert email using Mandrill smtp

    AWS : Elastic Beanstalk with NodeJS

    AWS : Elastic Beanstalk Inplace/Rolling Blue/Green Deploy

    AWS : Identity and Access Management (IAM) Roles for Amazon EC2

    AWS : Identity and Access Management (IAM) Policies, sts AssumeRole, and delegate access across AWS accounts

    AWS : Identity and Access Management (IAM) sts assume role via aws cli2

    AWS : Creating IAM Roles and associating them with EC2 Instances in CloudFormation

    AWS Identity and Access Management (IAM) Roles, SSO(Single Sign On), SAML(Security Assertion Markup Language), IdP(identity provider), STS(Security Token Service), and ADFS(Active Directory Federation Services)

    AWS : Amazon Route 53

    AWS : Amazon Route 53 - DNS (Domain Name Server) setup

    AWS : Amazon Route 53 - subdomain setup and virtual host on Nginx

    AWS Amazon Route 53 : Private Hosted Zone

    AWS : SNS (Simple Notification Service) example with ELB and CloudWatch

    AWS : Lambda with AWS CloudTrail

    AWS : SQS (Simple Queue Service) with NodeJS and AWS SDK

    AWS : Redshift data warehouse

    AWS : CloudFormation - templates, change sets, and CLI

    AWS : CloudFormation Bootstrap UserData/Metadata

    AWS : CloudFormation - Creating an ASG with rolling update

    AWS : Cloudformation Cross-stack reference

    AWS : OpsWorks

    AWS : Network Load Balancer (NLB) with Autoscaling group (ASG)

    AWS CodeDeploy : Deploy an Application from GitHub

    AWS EC2 Container Service (ECS)

    AWS EC2 Container Service (ECS) II

    AWS Hello World Lambda Function

    AWS Lambda Function Q & A

    AWS Node.js Lambda Function & API Gateway

    AWS API Gateway endpoint invoking Lambda function

    AWS API Gateway invoking Lambda function with Terraform

    AWS API Gateway invoking Lambda function with Terraform - Lambda Container

    Amazon Kinesis Streams

    Kinesis Data Firehose with Lambda and ElasticSearch

    Amazon DynamoDB

    Amazon DynamoDB with Lambda and CloudWatch

    Loading DynamoDB stream to AWS Elasticsearch service with Lambda

    Amazon ML (Machine Learning)

    Simple Systems Manager (SSM)

    AWS : RDS Connecting to a DB Instance Running the SQL Server Database Engine

    AWS : RDS Importing and Exporting SQL Server Data

    AWS : RDS PostgreSQL & pgAdmin III

    AWS : RDS PostgreSQL 2 - Creating/Deleting a Table

    AWS : MySQL Replication : Master-slave

    AWS : MySQL backup & restore

    AWS RDS : Cross-Region Read Replicas for MySQL and Snapshots for PostgreSQL

    AWS : Restoring Postgres on EC2 instance from S3 backup

    AWS : Q & A

    AWS : Security

    AWS : Security groups vs. network ACLs

    AWS : Scaling-Up

    AWS : Networking

    AWS : Single Sign-on (SSO) with Okta

    AWS : JIT (Just-in-Time) with Okta



    Jenkins



    Install

    Configuration - Manage Jenkins - security setup

    Adding job and build

    Scheduling jobs

    Managing_plugins

    Git/GitHub plugins, SSH keys configuration, and Fork/Clone

    JDK & Maven setup

    Build configuration for GitHub Java application with Maven

    Build Action for GitHub Java application with Maven - Console Output, Updating Maven

    Commit to changes to GitHub & new test results - Build Failure

    Commit to changes to GitHub & new test results - Successful Build

    Adding code coverage and metrics

    Jenkins on EC2 - creating an EC2 account, ssh to EC2, and install Apache server

    Jenkins on EC2 - setting up Jenkins account, plugins, and Configure System (JAVA_HOME, MAVEN_HOME, notification email)

    Jenkins on EC2 - Creating a Maven project

    Jenkins on EC2 - Configuring GitHub Hook and Notification service to Jenkins server for any changes to the repository

    Jenkins on EC2 - Line Coverage with JaCoCo plugin

    Setting up Master and Slave nodes

    Jenkins Build Pipeline & Dependency Graph Plugins

    Jenkins Build Flow Plugin

    Pipeline Jenkinsfile with Classic / Blue Ocean

    Jenkins Setting up Slave nodes on AWS

    Jenkins Q & A





    Puppet



    Puppet with Amazon AWS I - Puppet accounts

    Puppet with Amazon AWS II (ssh & puppetmaster/puppet install)

    Puppet with Amazon AWS III - Puppet running Hello World

    Puppet Code Basics - Terminology

    Puppet with Amazon AWS on CentOS 7 (I) - Master setup on EC2

    Puppet with Amazon AWS on CentOS 7 (II) - Configuring a Puppet Master Server with Passenger and Apache

    Puppet master /agent ubuntu 14.04 install on EC2 nodes

    Puppet master post install tasks - master's names and certificates setup,

    Puppet agent post install tasks - configure agent, hostnames, and sign request

    EC2 Puppet master/agent basic tasks - main manifest with a file resource/module and immediate execution on an agent node

    Setting up puppet master and agent with simple scripts on EC2 / remote install from desktop

    EC2 Puppet - Install lamp with a manifest ('puppet apply')

    EC2 Puppet - Install lamp with a module

    Puppet variable scope

    Puppet packages, services, and files

    Puppet packages, services, and files II with nginx Puppet templates

    Puppet creating and managing user accounts with SSH access

    Puppet Locking user accounts & deploying sudoers file

    Puppet exec resource

    Puppet classes and modules

    Puppet Forge modules

    Puppet Express

    Puppet Express 2

    Puppet 4 : Changes

    Puppet --configprint

    Puppet with Docker

    Puppet 6.0.2 install on Ubuntu 18.04





    Chef



    What is Chef?

    Chef install on Ubuntu 14.04 - Local Workstation via omnibus installer

    Setting up Hosted Chef server

    VirtualBox via Vagrant with Chef client provision

    Creating and using cookbooks on a VirtualBox node

    Chef server install on Ubuntu 14.04

    Chef workstation setup on EC2 Ubuntu 14.04

    Chef Client Node - Knife Bootstrapping a node on EC2 ubuntu 14.04





    Elasticsearch search engine, Logstash, and Kibana



    Elasticsearch, search engine

    Logstash with Elasticsearch

    Logstash, Elasticsearch, and Kibana 4

    Elasticsearch with Redis broker and Logstash Shipper and Indexer

    Samples of ELK architecture

    Elasticsearch indexing performance



    Vagrant



    VirtualBox & Vagrant install on Ubuntu 14.04

    Creating a VirtualBox using Vagrant

    Provisioning

    Networking - Port Forwarding

    Vagrant Share

    Vagrant Rebuild & Teardown

    Vagrant & Ansible





    GCP (Google Cloud Platform)



    GCP: Creating an Instance

    GCP: gcloud compute command-line tool

    GCP: Deploying Containers

    GCP: Kubernetes Quickstart

    GCP: Deploying a containerized web application via Kubernetes

    GCP: Django Deploy via Kubernetes I (local)

    GCP: Django Deploy via Kubernetes II (GKE)





    Big Data & Hadoop Tutorials



    Hadoop 2.6 - Installing on Ubuntu 14.04 (Single-Node Cluster)

    Hadoop 2.6.5 - Installing on Ubuntu 16.04 (Single-Node Cluster)

    Hadoop - Running MapReduce Job

    Hadoop - Ecosystem

    CDH5.3 Install on four EC2 instances (1 Name node and 3 Datanodes) using Cloudera Manager 5

    CDH5 APIs

    QuickStart VMs for CDH 5.3

    QuickStart VMs for CDH 5.3 II - Testing with wordcount

    QuickStart VMs for CDH 5.3 II - Hive DB query

    Scheduled start and stop CDH services

    CDH 5.8 Install with QuickStarts Docker

    Zookeeper & Kafka Install

    Zookeeper & Kafka - single node single broker

    Zookeeper & Kafka - Single node and multiple brokers

    OLTP vs OLAP

    Apache Hadoop Tutorial I with CDH - Overview

    Apache Hadoop Tutorial II with CDH - MapReduce Word Count

    Apache Hadoop Tutorial III with CDH - MapReduce Word Count 2

    Apache Hadoop (CDH 5) Hive Introduction

    CDH5 - Hive Upgrade to 1.3 to from 1.2

    Apache Hive 2.1.0 install on Ubuntu 16.04

    Apache HBase in Pseudo-Distributed mode

    Creating HBase table with HBase shell and HUE

    Apache Hadoop : Hue 3.11 install on Ubuntu 16.04

    Creating HBase table with Java API

    HBase - Map, Persistent, Sparse, Sorted, Distributed and Multidimensional

    Flume with CDH5: a single-node Flume deployment (telnet example)

    Apache Hadoop (CDH 5) Flume with VirtualBox : syslog example via NettyAvroRpcClient

    List of Apache Hadoop hdfs commands

    Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 1

    Apache Hadoop : Creating Wordcount Java Project with Eclipse Part 2

    Apache Hadoop : Creating Card Java Project with Eclipse using Cloudera VM UnoExample for CDH5 - local run

    Apache Hadoop : Creating Wordcount Maven Project with Eclipse

    Wordcount MapReduce with Oozie workflow with Hue browser - CDH 5.3 Hadoop cluster using VirtualBox and QuickStart VM

    Spark 1.2 using VirtualBox and QuickStart VM - wordcount

    Spark Programming Model : Resilient Distributed Dataset (RDD) with CDH

    Apache Spark 2.0.2 with PySpark (Spark Python API) Shell

    Apache Spark 2.0.2 tutorial with PySpark : RDD

    Apache Spark 2.0.0 tutorial with PySpark : Analyzing Neuroimaging Data with Thunder

    Apache Spark Streaming with Kafka and Cassandra

    Apache Spark 1.2 with PySpark (Spark Python API) Wordcount using CDH5

    Apache Spark 1.2 Streaming

    Apache Drill with ZooKeeper install on Ubuntu 16.04 - Embedded & Distributed

    Apache Drill - Query File System, JSON, and Parquet

    Apache Drill - HBase query

    Apache Drill - Hive query

    Apache Drill - MongoDB query





    Redis In-Memory Database



    Redis vs Memcached

    Redis 3.0.1 Install

    Setting up multiple server instances on a Linux host

    Redis with Python

    ELK : Elasticsearch with Redis broker and Logstash Shipper and Indexer





    Powershell 4 Tutorial



    Powersehll : Introduction

    Powersehll : Help System

    Powersehll : Running commands

    Powersehll : Providers

    Powersehll : Pipeline

    Powersehll : Objects

    Powershell : Remote Control

    Windows Management Instrumentation (WMI)

    How to Enable Multiple RDP Sessions in Windows 2012 Server

    How to install and configure FTP server on IIS 8 in Windows 2012 Server

    How to Run Exe as a Service on Windows 2012 Server

    SQL Inner, Left, Right, and Outer Joins





    Git/GitHub Tutorial



    One page express tutorial for GIT and GitHub

    Installation

    add/status/log

    commit and diff

    git commit --amend

    Deleting and Renaming files

    Undoing Things : File Checkout & Unstaging

    Reverting commit

    Soft Reset - (git reset --soft <SHA key>)

    Mixed Reset - Default

    Hard Reset - (git reset --hard <SHA key>)

    Creating & switching Branches

    Fast-forward merge

    Rebase & Three-way merge

    Merge conflicts with a simple example

    GitHub Account and SSH

    Uploading to GitHub

    GUI

    Branching & Merging

    Merging conflicts

    GIT on Ubuntu and OS X - Focused on Branching

    Setting up a remote repository / pushing local project and cloning the remote repo

    Fork vs Clone, Origin vs Upstream

    Git/GitHub Terminologies

    Git/GitHub via SourceTree I : Commit & Push

    Git/GitHub via SourceTree II : Branching & Merging

    Git/GitHub via SourceTree III : Git Work Flow

    Git/GitHub via SourceTree IV : Git Reset

    Git Cheat sheet - quick command reference






    Subversion

    Subversion Install On Ubuntu 14.04

    Subversion creating and accessing I

    Subversion creating and accessing II

