Tuesday, 26 December 2017

Java version not found

Java command not found although java is installed

[root@server]# java -version
-bash: java: command not found

Check whether the Java path is defined in the /etc/bashrc file. If multiple applications use Java, define all paths separated by colons.

vim /etc/bashrc

PATH=$PATH:/usr/java/jdk1.6.0_45/bin
export PATH

Save and exit, then run the command below to reload the file.

# source /etc/bashrc

Alternatively, for the current session only, export the path of the directory where your Java binaries are located.

[root@server]# export PATH=$PATH:/opt/java/jdk1.6.0_45/bin/

[root@server]# java -version
java version "1.6.0_45"
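As a quick sanity check that the PATH mechanism works as described, the sketch below creates a stand-in java launcher in a temporary directory (the directory and the fake script are made up for the demo) and shows that appending that directory to PATH makes the command resolvable:

```shell
# Create a throwaway directory with a fake "java" launcher (demo only).
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho java version 1.6.0_45\n' > "$demo_dir/java"
chmod +x "$demo_dir/java"

# Same pattern as appending the real JDK bin directory to PATH.
export PATH=$PATH:$demo_dir
command -v java    # now resolves to a java executable
```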

Saturday, 16 December 2017

Difference between Ext3 and Ext4


Wednesday, 6 December 2017

BASIC LINUX COMMANDS



In this article we discuss some basic Linux commands.

# pwd - Displays the present working directory.

# logname - Displays the login name of the user who started the session. eg. logname

# clear - Clears the screen.

# exit - Logs out of the current session.

# date - Displays the current system date and time.

# who - Displays the list of users currently logged in to the server, i.e. each user's login name, terminal number, and the date and time that person logged in.

# whoami - Displays the user name of the current effective user (unlike who, it shows only your own name, not the full list of logged-in users).

# ls - Lists files.

# cp - Copies a file from one location to another.

eg. cp source destination

# cat - Displays the contents of a file.

eg. cat file.txt

# scp - Copies files securely between hosts over the network.

scp file.txt root@ip:/var/tmp/

# mv - Moves a file from one location to another.

eg. mv source destination

The mv command is also used to rename a file or directory:

mv test nokia

# rm - Deletes a file (use rm -r for a directory).

eg. rm file.txt

# mkdir - To create a directory.

eg. mkdir test

# touch - To create a file.

eg. touch 1.txt
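The commands above can be tried out safely inside a throwaway directory; the short session below (file and directory names are invented for the demo) exercises most of them in sequence:

```shell
cd "$(mktemp -d)"        # work in a temporary directory
pwd                      # show where we are
mkdir test               # create a directory
touch 1.txt              # create an empty file
cp 1.txt test/copy.txt   # copy it into the directory
mv test nokia            # rename the directory
cat nokia/copy.txt       # view its (empty) contents
rm 1.txt                 # delete the original file
ls nokia                 # list the directory contents
```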


How to Backup entire hard disk using dd command

Backup Entire Harddisk

To back up an entire hard disk to another hard disk connected to the same system, execute the dd command as shown below. In this dd command example, the UNIX device name of the source hard disk is /dev/sda, and the device name of the target hard disk is /dev/sdb.
# dd if=/dev/sda of=/dev/sdb
  • “if” represents the input file, and “of” represents the output file, so an exact copy of /dev/sda will be written to /dev/sdb.
  • If there are any read errors, the above command will fail. If you add the parameter “conv=noerror”, dd will continue copying past read errors.
  • Specify the input file and output file very carefully: if you put the source device in the target and vice versa, you might lose all your data.
In the hard-drive-to-hard-drive copy using dd given below, the sync conversion pads any input blocks that could not be read fully with zeros, so the copy keeps the same size and layout as the source.
# dd if=/dev/sda of=/dev/sdb conv=noerror,sync

Example 2. Create an Image of a Hard Disk

Instead of copying the hard disk directly to another disk, you can create an image file of the hard disk and save it on another storage device. There are many advantages to backing up your data to a disk image, one being ease of use. This method is typically faster than other types of backups, enabling you to quickly restore data following an unexpected catastrophe.
# dd if=/dev/hda of=~/hdadisk.img
The above command creates an image of the hard disk /dev/hda. Refer to our earlier article How to view initrd.image for more details.

Example 3. Restore using Hard Disk Image

To restore a hard disk from the image file of another hard disk, use the following dd command example.
# dd if=hdadisk.img of=/dev/hdb
The file hdadisk.img is the image of /dev/hda, so the above command restores the image of /dev/hda to /dev/hdb.
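Since pointing dd at the wrong device can destroy data, it is worth rehearsing the backup/restore cycle with ordinary files first. The sketch below uses a small file as a stand-in for a disk (all file names are invented for the demo; status=none needs GNU dd) and verifies the round trip with cmp:

```shell
cd "$(mktemp -d)"
# A 4 MB file of random data stands in for the source disk (/dev/hda above).
dd if=/dev/urandom of=disk.img bs=1M count=4 status=none
# Take the backup image, then restore it to a new "disk".
dd if=disk.img of=backup.img status=none
dd if=backup.img of=restored.img status=none
# Byte-for-byte comparison; silent and exit status 0 when identical.
cmp disk.img restored.img && echo "restore verified"
```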

Example 4. Creating a Floppy Image

Using the dd command, you can create a copy of a floppy very quickly. For the input file, give the floppy device location, and for the output file, give the name of your floppy image file, as shown below.
# dd if=/dev/fd0 of=myfloppy.img

Example 5. Backup a Partition

You can use the device name of a partition as the input file, and as the output you can specify either your target path or an image file, as shown in the dd command example below.
# dd if=/dev/hda1 of=~/partition1.img

Example 6. CDROM Backup

The dd command also allows you to create an ISO file from a source device. So we can insert a CD and run dd to create an ISO file of the CD contents.
# dd if=/dev/cdrom of=tgsservice.iso bs=2048
dd reads one block of input, processes it, and writes it to the output file. You can specify the block size for the input and output files. In the dd command example above, the parameter “bs” specifies the block size for both the input and output file, so dd uses 2048 bytes (the standard CD-ROM sector size) as the block size.
Note: If the CD is auto-mounted, it is always good to unmount the CD device before creating an ISO image with dd, to avoid any unnecessary access to the CD-ROM.

How to configure swap space ?


There are two ways to configure swap space: you can either use a dedicated hard drive partition to add new swap space, or create a swap file on an existing filesystem and use it as swap space.

The free command displays the swap space; free -k shows the output in KB.
# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    2043700    1038656          0      50976    1646268
-/+ buffers/cache:     346456    2735900
Swap:      4192956          0    4192956
The swapon command with the -s option displays the current swap space in KB.
# swapon -s
Filename                        Type            Size    Used    Priority
/dev/sda2                       partition       4192956 0       -1
swapon -s is the same as the following:
# cat /proc/swaps
Filename                        Type            Size    Used    Priority
/dev/sda2                       partition       4192956 0       -1

Method 1: Use a Hard Drive Partition for Additional Swap Space

If you have an additional hard disk (or free space on an existing disk), create a partition on it using the fdisk command. Let us assume that this partition is called /dev/sdc1.
Now setup this newly created partition as swap area using the mkswap command as shown below.

# mkswap /dev/sdc1
Enable the swap partition for usage using swapon command as shown below.
# swapon /dev/sdc1
To make this swap space partition available even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/dev/sdc1               swap                    swap    defaults        0 0
Verify whether the newly created swap area is available for your use.
# swapon -s
Filename                        Type            Size    Used    Priority
/dev/sda2                       partition       4192956 0       -1
/dev/sdc1                       partition       1048568 0       -2

# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    3022364      59992          0      52056    2646472
-/+ buffers/cache:     323836    2758520
Swap:      5241524          0    5241524
Note: In the output of swapon -s command, the Type column will say “partition” if the swap space is created from a disk partition.

Method 2: Use a File for Additional Swap Space

If you don’t have any additional disks, you can create a file somewhere on your filesystem, and use that file for swap space.
The following dd command example creates a swap file with the name “myswapfile” under /root directory with a size of 1024MB (1GB).
# dd if=/dev/zero of=/root/myswapfile bs=1M count=1024
1024+0 records in
1024+0 records out

# ls -l /root/myswapfile
-rw-r--r--    1 root     root     1073741824 Aug 14 23:47 /root/myswapfile
Change the permission of the swap file so that only root can access it.
# chmod 600 /root/myswapfile
Make this file as a swap file using mkswap command.
# mkswap /root/myswapfile
Setting up swapspace version 1, size = 1073737 kB
Enable the newly created swapfile.
# swapon /root/myswapfile
To make this swap file available as a swap area even after the reboot, add the following line to the /etc/fstab file.
# cat /etc/fstab
/root/myswapfile               swap                    swap    defaults        0 0
Verify whether the newly created swap area is available for your use.
# swapon -s
Filename                        Type            Size    Used    Priority
/dev/sda2                       partition       4192956 0       -1
/root/myswapfile                file            1048568 0       -2

# free -k
             total       used       free     shared    buffers     cached
Mem:       3082356    3022364      59992          0      52056    2646472
-/+ buffers/cache:     323836    2758520
Swap:      5241524          0    5241524
Note: In the output of swapon -s command, the Type column will say “file” if the swap space is created from a swap file.
If you don’t want to reboot to verify that the system picks up all the swap space mentioned in /etc/fstab, you can do the following, which disables and then re-enables all the swap areas mentioned in /etc/fstab.
# swapoff -a

# swapon -a
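The whole of Method 2 can be collapsed into one short script. This is only a sketch: the size is shrunk to 64 MB for illustration, the file is created in the current directory instead of /root, and the mkswap/swapon steps are left commented out because they only work as root on a real host:

```shell
SWAPFILE=./myswapfile            # use /root/myswapfile on a real server
# Allocate the file (64 MB here; use count=1024 for the 1 GB in the article).
dd if=/dev/zero of="$SWAPFILE" bs=1M count=64 status=none
# Only root should be able to read or write a swap file.
chmod 600 "$SWAPFILE"
ls -l "$SWAPFILE"

# On a real host, as root:
# mkswap "$SWAPFILE"
# swapon "$SWAPFILE"
# swapon -s
```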

Tuesday, 5 December 2017

FILE HIERARCHY SYSTEM


/root -> The home directory of the super user. (The main directory of the entire file system is /, the root of the hierarchy, which is distinct from /root.)

/bin -> bin stands for binary. This directory contains executable files for most of the UNIX commands. It is used by normal users as well.

/sbin -> This directory contains the system administration executables and commands. It holds programs used in booting the system and in system recovery.

/etc -> This directory contains the system configuration files and the files which maintain information about users and groups. Examples: group name, GID, username, UID.

/usr -> The default directory provided by classic UNIX systems for creating users' home directories; it also contains user programs and manual pages.

/tmp -> This directory contains the temporary files used by the UNIX system and its users; such temporary files are removed when the server reboots.

/dev -> This directory contains the logical device files; it maintains the directory of device information. Example: all disk devices are in the subdirectory /dev/dsk.

/devices -> This directory contains the physical device files.

/home -> The default directory allocated for the home directories of normal users when the administrator doesn't specify any other directory. If your logname is sai, your default home directory is /home/sai.

/var ->  This directory contains all system log files and message files.

/lib -> This directory contains the library files used by UNIX programs.

/kernel -> This directory contains platform independent loadable kernel modules required as part of the boot process.

/mnt -> Contains entries for removable (mountable) media such as CD-ROMs and DLT tapes.


Saturday, 2 December 2017

CLUSTERING

 Cluster
Clustering is establishing connectivity among two or more servers in order to make them work as one. Clustering is a very popular technique among systems engineers: servers can be clustered as a failover system, a load-balancing system, or a parallel processing unit.

A failover cluster is a set of servers that works together to maintain the high availability of applications and services.

Advantages of Clustering Servers
Clustering servers is a completely scalable solution; you can add resources to the cluster afterwards.
If a server in the cluster needs maintenance, you can stop it while handing its load over to the other servers.
Among high availability options, clustering takes a special place since it is reliable and easy to configure. If a server has a problem providing services, the other servers in the cluster can take over the load.

Disadvantages of Clustering Servers
Cost is high. Since the cluster needs good hardware and a careful design, it will be costly compared to a non-clustered server management design. Not being cost effective is a main disadvantage of this particular design.
Since clustering needs more servers and hardware to establish, monitoring and maintenance are harder, thus increasing the infrastructure cost.

Now let’s see what kind of packages/installations we need to configure this setup successfully. The following packages/RPMs can be downloaded from rpmfind.net.

Ricci (ricci-0.16.2-75.el6.x86_64.rpm)
Luci (luci-0.26.0-63.el6.centos.x86_64.rpm)
Mod_cluster (modcluster-0.16.2-29.el6.x86_64.rpm)
CCS (ccs-0.16.2-75.el6_6.2.x86_64.rpm)
CMAN(cman-3.0.12.1-68.el6.x86_64.rpm)
Clusterlib (clusterlib-3.0.12.1-68.el6.x86_64.rpm)

Let’s see what each installation does for us and their meanings.

Ricci is a daemon used for cluster management and configuration. It distributes/dispatches the messages it receives to the configured nodes.

Luci is a server that runs on the cluster management server and communicates with other multiple nodes. It provides a web interface to make things easier.

Mod_cluster is a load balancer utility based on httpd services and here it is used to communicate the incoming requests with the underlying nodes.

CCS is used to create and modify the cluster configuration on remote nodes through ricci. It is also used to start and stop the cluster services.

CMAN is one of the primary utilities other than ricci and luci for this particular setup, since this acts as the cluster manager. Actually, cman stands for CLUSTER MANAGER. It is a high-availability add-on for RedHat which is distributed among the nodes in the cluster.

Install the packages below on all three servers.

 Step 1: Installing Clustering in Linux
 # yum install ricci

After the ricci installation is done, we can see it has installed modcluster and clusterlib as its dependencies.
Next install luci using the yum install luci command.

 # yum install "luci"
 After the installation of luci, you can see it has installed the dependencies it needed.
 Now, let’s install the ccs package on the servers. For that I entered yum install ccs.x86_64, which is shown in the list when I issued yum list | grep ccs, or else you can simply issue yum install ccs.

# yum install ccs

Let’s install cman as the last requirement for this particular setup. The command is yum install cman or yum install cman.x86_64, as shown in the yum list mentioned earlier.

# yum install cman

We need to confirm the installations are in place. Issue below command to see whether the packages we needed are installed properly in all three servers.

# rpm -qa | egrep "ricci|luci|modc|cluster|ccs|cman"

Perfect, all the packages are installed, and all we need to do now is configure the setup.

Step 2: Configure Cluster in Linux
1. As the first step for setting up the cluster, you need to start the ricci service on all three servers.

# service ricci start
OR
# /etc/init.d/ricci start

2. Since ricci is started on all servers, it’s now time to create the cluster. This is where the ccs package comes to our help when configuring the cluster.

If you don’t want to use ccs commands, you will have to edit the “cluster.conf” file to add the nodes and do the other configuration. The easiest way is to use the following commands. Let’s have a look.

Since I haven’t created the cluster yet, there is no cluster.conf file in /etc/cluster, as shown below.

# cd /etc/cluster
# pwd
# ls

In my case, I do this on 172.16.1.250, which is dedicated to cluster management. From now on, every time we try to use the ricci server it will ask for the ricci user’s password, so you will have to set the password of the ricci user on all servers.

Enter passwords for ricci user.

# passwd ricci

Now enter the command as shown below.

# ccs -h 172.16.1.250 --createcluster linuxtech_cluster
You can see that after entering the above command, the cluster.conf file is created in the /etc/cluster directory.
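Freshly created, the cluster.conf should look roughly like the skeleton below. The exact attributes and empty sections vary with the ccs/cman version, so treat this as an illustrative shape rather than a literal copy of the generated file:

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="linuxtech_cluster">
  <fence_daemon/>
  <clusternodes/>
  <cman/>
  <fencedevices/>
  <rm/>
</cluster>
```

The node, fence, and failover entries added in the following steps all end up inside this same file.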

3. Now let’s add the two nodes to the system. Here too we use ccs commands to make the configuration. I’m not going to edit the cluster.conf file manually, but will use the following syntax.

# ccs -h 172.16.1.250 --addnode 172.16.1.222

Add the other node too.

# ccs -h 172.16.1.250 --addnode 172.16.1.223
You can also enter the command below to verify the node details.

# ccs -h 172.16.1.250 --lsnodes

================================
Fencing and Adding a Failover to Clustering
================================

What is Fencing?

If we think of a setup with more than one node, it is possible that one or more nodes will fail at some point. Fencing is isolating a malfunctioning server from the cluster in order to protect and secure the synced resources. Therefore we can add a fence to protect the resources shared within the cluster.

What is Failover?
Imagine a scenario where a server holds important data for an organization, and the stakeholders need that server to stay up and running without any downtime. In this case we can duplicate the data to another server (now there are two servers with identical data and specs), which we can use as the failover.

If by any chance one of the servers goes down, the server we configured as the failover will take over the load and provide the services that were given by the first server. With this method, users will not experience the downtime caused by the failure of the primary server.

Step 1: How to Add Fencing to Cluster Server

1. First we have to enable fencing on the cluster server, for this I will use below two commands.

# ccs -h 172.16.1.250 --setfencedaemon post_fail_delay=0
# ccs -h 172.16.1.250 --setfencedaemon post_join_delay=10

As you can see we use ccs command to add the configurations to cluster. Following are definitions of the options I have used in the command.

-h: Cluster host IP address.
--setfencedaemon: Applies the changes to the fencing daemon.
post_fail_delay: Time in seconds the daemon waits before fencing a victim server after a node has failed.
post_join_delay: Time in seconds the daemon waits before fencing a victim server after a node has joined the cluster.
2. Now let’s add a fence device for our cluster, execute below command to add a fence device.

# ccs -h 172.16.1.250 --addfencedev linuxtechfence agent=fence_virt

# ccs -h 172.16.1.250 --lsfenceopts

Step 2: Add Two Nodes to Fence Device
3. Now I’m going to add a method to the created fence device and add hosts in to it.

# ccs -h 172.16.1.250 --addmethod Method01 172.16.1.222
# ccs -h 172.16.1.250 --addmethod Method01 172.16.1.223

You have to add the methods you created a while ago for both nodes in your setup.

4. As the next step, add the fence methods you created for both nodes to the fence device we created, namely “linuxtechfence”.

# ccs -h 172.16.1.250 --addfenceinst linuxtechfence 172.16.1.222 Method01
# ccs -h 172.16.1.250 --addfenceinst linuxtechfence 172.16.1.223 Method01
I have successfully associated my methods with the fence device, and this is how my cluster.conf looks now.

Step 3: Add Failover to Cluster Server
5. I use the command below to add a failover domain to the cluster setup.

# ccs -h 172.16.1.250 --addfailoverdomain linuxtechfod ordered

6. Now that you have created the failover domain, you can add the two nodes to it.

# ccs -h 172.16.1.250 --addfailoverdomainnode linuxtechfod 172.16.1.222 1
# ccs -h 172.16.1.250 --addfailoverdomainnode linuxtechfod 172.16.1.223 2

=======================================================================
How to Sync Cluster Configuration and Verify Failover Setup in Nodes
========================================================================
We will start by adding resources to the cluster. Here we can add a file system or a web service, as needed. I have the /dev/sda3 partition mounted at /x01, which I wish to add as a file system resource.

1. I use below command to add a file system as a resource:

# ccs -h 172.16.1.250 --addresource fs name=my_fs device=/dev/mapper/linuxtechtest_lv_vol01 mountpoint=/x01 fstype=ext3

Additionally, if you want to add a service as well, you can, using the command below.

# ccs -h 172.16.1.250 --addservice my_web domain=testdomain recovery=relocate autostart=1
You can verify it by viewing the cluster.conf file as we did in previous lessons.

2. Now add the following entry to the cluster.conf file to reference the file system resource from the service.

<fs ref="my_fs"/>

3. All set. Now we will see how to sync the configuration we made to the cluster among the two nodes we have. The following command will do it.

# ccs -h 172.16.1.250 --sync --activate

Note: Enter the passwords we set for the ricci user earlier, when we were installing the packages.

You can verify your configuration using the command below.

# ccs -h 172.16.1.250 --checkconf
4. Now it’s time to start things up. You can use one of the commands below, as you prefer.

To start only one node, use the command with the relevant IP.

# ccs -h 172.16.1.222 start
Or if you want to start all nodes, use the --startall option as follows.

# ccs -h 172.16.1.250 --startall

You can use stop or --stopall if you need to stop the cluster.

Sometimes you may want to start the cluster without enabling the resources (resources are enabled automatically when the cluster starts), for example when you have intentionally disabled the resources on a particular node in order to avoid fencing loops.

For that purpose you can use the command below, which starts the cluster but does not enable the resources.

# ccs -h 172.16.1.250 --startall --noenable

5. After the cluster has been started, you can view its status by issuing the clustat command.

# clustat

ILO VIRTUAL POWER OPTIONS

The options to restart a server using the ILO are found under the Virtual Power tab. You can choose one of four options:

Momentary press - like pressing the power button on your laptop/PC or server; with Windows 2000 and above (if I remember correctly), it should cleanly shut down the operating system.
Press and hold - like pressing and holding the power button; it turns the computer off completely and does not cleanly shut down the operating system.
Cold boot of the system - effectively powers off the server and restarts it.
Reset system - the ILO way of rebooting without cleanly restarting the operating system; if your server has hung and won't respond, this might be the option to select.

APACHE TOMCAT

APACHE TOMCAT INSTALLATION
============================

Install Apache Tomcat 9 on centos 7/RHEL 7
=============================================
step 1:- Install Java 8 on Centos7.x
=====================================
[root@cloudworld ~]# yum install java-1.8.0

Run the command below to check the Java version:

[root@cloudworld ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
[root@cloudworld ~]#

Step:2 Download the Apache Tomcat 9 tar.gz file
================================================
Downloading Tomcat from the terminal

[root@cloudworld ~]# wget http://www-eu.apache.org/dist/tomcat/tomcat-9/v9.0.0.M9/bin/apache-tomcat-9.0.0.M9.tar.gz

Step:3 Extract the downloaded file and set CATALINA_HOME variable
==================================================================
Run the tar command below to extract Apache Tomcat 9 under the /opt folder.

[root@cloudworld ~]# tar -zxpvf apache-tomcat-9.0.0.M9.tar.gz -C /opt/
[root@cloudworld ~]# cd /opt/
[root@cloudworld opt]# mv apache-tomcat-9.0.0.M9/ tomcat

Before starting the Tomcat service, let’s first set the required CATALINA_HOME environment variable using the commands below:

[root@cloudworld ~]# echo "export CATALINA_HOME='/opt/tomcat/'" >> ~/.bashrc
[root@cloudworld ~]# source ~/.bashrc
[root@cloudworld ~]#

Step:4 Specify the Users for Manager GUI Page and Admin Page Access.
====================================================================

By default, no user or account is allowed to access the Manager GUI page or the Admin page. To grant access, add the following lines to the file “/opt/tomcat/conf/tomcat-users.xml”, just above the closing </tomcat-users> tag:

<!-- User linuxtechi who can access only manager section -->
<role rolename="manager-gui" />
<user username="linuxtechi" password="<Enter-Secure-Password>" roles="manager-gui" />

<!-- User Admin Who can access manager and admin section both -->
<role rolename="admin-gui" />
<user username="admin" password="<Enter-Secure-Password>" roles="admin-gui" />

Step:5 Start Tomcat Service
============================
Run the commands below one after another to start the tomcat service.

[root@cloudworld ~]# cd /opt/tomcat/bin/
[root@cloudworld bin]# ./startup.sh

To shut down the tomcat service, use the command below.

[root@cloudworld bin]# ./shutdown.sh
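If you need to pass JVM options (heap sizes, system properties) to Tomcat, catalina.sh sources a bin/setenv.sh file when one exists, so such options can live there instead of editing catalina.sh itself. A minimal sketch follows; the heap values are illustrative only, not recommendations, and on a real install the file would be /opt/tomcat/bin/setenv.sh:

```shell
# Write the options file (demo path; use /opt/tomcat/bin/setenv.sh for real).
cat > setenv.sh <<'EOF'
export CATALINA_OPTS="-Xms256m -Xmx512m"
EOF

# catalina.sh sources it automatically on startup;
# we source it here only to show the effect.
. ./setenv.sh
echo "$CATALINA_OPTS"    # prints: -Xms256m -Xmx512m
```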
Open the tomcat ports in firewall.

As we know, by default tomcat runs on port 8080, so if a firewall is running on your Linux box, set the following rule to open port 8080 on CentOS 7.x and RHEL 7.x.

[root@cloudworld ~]# firewall-cmd --permanent --zone=public --add-port=8080/tcp
success
[root@cloudworld ~]# firewall-cmd --reload
success
[root@cloudworld ~]#
Use the command below to open port 8080 in the Ubuntu 16.04 / 16.10 firewall.

root@ubuntu:~# ufw allow 8080
Rule added
Rule added (v6)
root@ubuntu:~#
Note: You can skip this step if the firewall is disabled on your server.

Step:6 Access Apache Tomcat 9 page from the Web Browser
==========================================================

Open a web browser and type the following URL:

http://{ip-address-or-Hostname}:8080

In my case, the IP address of my server is “192.168.43.9”.


Click on “Manager App”; it will prompt for a user name and password. Specify the credentials we set in the file ‘tomcat-users.xml‘.


Similarly, we can view the server status by clicking the “Server Status” option.

Scenario: if you have installed Tomcat on a remote server and want to access the ‘Manager App’ and ‘Server Status’ pages from your local system’s web browser, you need to create a file “manager.xml” under the folder “/opt/tomcat/conf/Catalina/localhost/” with the following content.

[root@cloudworld ~]# cd /opt/tomcat/conf/Catalina/localhost/
[root@cloudworld localhost]# vi manager.xml
<Context privileged="true" antiResourceLocking="false"
  docBase="${catalina.home}/webapps/manager">
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="^.*$" />
</Context>
Save and exit the file.