Monday, September 12, 2016

Cluster


In a computer system, a cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.



In computers, clustering is the use of multiple computers (typically PCs or UNIX workstations), multiple storage devices, and redundant interconnections to form what appears to users as a single, highly available system. Cluster computing can be used for load balancing as well as for high availability. A clustering approach can help an enterprise achieve 99.99% availability (roughly 52 minutes of downtime per year) in some cases. One of the main ideas of cluster computing is that, to the outside world, the cluster appears to be a single system.

A common use of cluster computing is to load-balance traffic on high-traffic sites. A client PC's request is sent to a "manager" server, which then determines which of several identical or very similar instance servers should handle the request. Having a central instance dispatch requests in this way allows traffic to be handled more quickly.
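To make the dispatching idea concrete, here is a minimal sketch in Python of a "manager" that forwards each incoming request to the next server in a pool of identical instances, round-robin style. The instance addresses and port are placeholders, not part of any particular product.

import itertools
import http.server
import urllib.request

# Hypothetical pool of identical instance servers sitting behind the "manager".
INSTANCES = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
next_instance = itertools.cycle(INSTANCES)

class ManagerHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the next instance in round-robin order and proxy the request to it.
        backend = next(next_instance)
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The manager itself listens on port 8000 and fans requests out to the pool.
    http.server.HTTPServer(("", 8000), ManagerHandler).serve_forever()

Real load balancers also consider each server's health and current load rather than rotating blindly, but the loop above is the basic shape of the idea.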

Clustering has been available since the 1980s, when it was used in DEC's VMS systems. IBM's Sysplex is a cluster approach for mainframe systems. Microsoft, Sun Microsystems, and other leading hardware and software companies offer clustering packages that are said to offer scalability as well as availability. As traffic or the required availability assurance increases, all or some parts of the cluster can be increased in size or number.

Cluster computing can also be used as a relatively low-cost form of parallel processing for scientific and other applications that lend themselves to parallel operations.
How a Cluster Works

The private network is the key to the failover-management software's ability to recognize when a system or process has failed, because it carries the heartbeat for the cluster. A two-node high-availability cluster should be interconnected using crossover network cables between redundant NICs in each server. Using crossover cables eliminates a potential point of failure at a hub or switch. High-availability clusters with more than two nodes will need a hub or switch to interconnect the nodes. Speed is not critical; 10-Mbps Ethernet will do fine. Even when all the pieces of the cluster are functioning, failover is not immediate when a system goes down.
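As an illustration of what the heartbeat amounts to, the sketch below (Python; the private-link addresses are assumptions, not anyone's actual configuration) sends a small UDP packet over the private interconnect every second, and a listener that goes a few seconds without hearing one presumes the peer has failed. Real failover-management products use their own protocols, but this shows why a slow, dedicated link is sufficient.

import socket
import time

PEER_ADDR = ("192.168.100.2", 9000)   # other node's address on the private (crossover) link
INTERVAL = 1.0                         # seconds between heartbeats

def send_heartbeats():
    # Runs on each node: periodically announce "I am alive" to the peer.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"heartbeat", PEER_ADDR)
        time.sleep(INTERVAL)

def watch_heartbeats(timeout=5.0, port=9000):
    # Runs on each node: if no heartbeat arrives within the timeout,
    # the peer (or the private link) is presumed down.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout)
    while True:
        try:
            sock.recvfrom(64)          # peer is alive
        except socket.timeout:
            print("heartbeat lost - peer presumed failed")
            return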

The master cluster server pings its slave server to check whether the slave cluster server is alive. Similarly, it also pings the DB server and checks its status.

Similarly, the slave cluster server pings the master server and checks its heartbeat status. If the slave pings the master cluster server and finds that the master is not responding, it starts the failover task (a monitoring sketch follows the list):

1. It dismounts the database and file structures from the master cluster node and mounts them on itself.
2. It also takes over the logical IP of the master Central Instance (CI).
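A rough sketch of that monitoring loop, in Python, might look like the following. The addresses are placeholders, and take_over() is only a stub standing in for the dismount/remount and IP-takeover steps; a fuller sketch of those steps appears further below.

import subprocess
import time

MASTER_IP = "192.168.100.1"    # master cluster server's address (placeholder)
DB_IP     = "192.168.100.10"   # database server's address (placeholder)

def is_alive(host):
    # One ICMP echo request with a 2-second wait; exit code 0 means a reply came back.
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          stdout=subprocess.DEVNULL).returncode == 0

def take_over():
    # Stub for the failover task: dismount/remount the shared storage and
    # shift the logical IP, as sketched later in this post.
    print("running failover steps...")

def monitor():
    while True:
        if not is_alive(MASTER_IP):
            print("master cluster server not responding - starting failover")
            take_over()
            return
        if not is_alive(DB_IP):
            print("DB server not responding - raising an alert")
        time.sleep(5)

if __name__ == "__main__":
    monitor()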

Technically, the failover-management software first needs to recognize, via the heartbeat, that a system or process has failed.

The disk space needs to be dismounted, and the IP address needs to be "deallocated" from the failed system, then remounted and reallocated on the failover system.

Finally, the application process needs to be restarted on the failover system and the clients need to reconnect. Depending on the state of the application, users may need to reauthenticate or they may just notice a delay between operations.
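On a Linux-style system, the takeover steps could be scripted roughly as below. The device, mount point, floating IP, interface, and service name are all assumptions made for illustration; a real cluster package performs the same steps itself, with fencing and other safety checks this sketch leaves out.

import subprocess

SHARED_DEVICE = "/dev/sdb1"      # shared storage device (assumed)
MOUNT_POINT   = "/sapmnt"        # mount point the application expects (assumed)
LOGICAL_IP    = "10.0.0.50/24"   # floating IP of the Central Instance (assumed)
PUBLIC_NIC    = "eth0"           # public-facing interface on the standby node (assumed)
APP_SERVICE   = "sap-ci"         # application service name (assumed)

def run(cmd):
    # Print and execute a command, stopping if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def take_over():
    # 1. Remount the shared disk that the failed master had mounted.
    run(["mount", SHARED_DEVICE, MOUNT_POINT])
    # 2. Bring the logical (floating) IP up on this node's interface.
    run(["ip", "addr", "add", LOGICAL_IP, "dev", PUBLIC_NIC])
    # 3. Restart the application process; clients then reconnect.
    run(["systemctl", "start", APP_SERVICE])

if __name__ == "__main__":
    take_over()

The order matters: storage and the IP address must be in place before the application restarts, which is part of why failover is not instantaneous.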


Most of the major Unix vendors have their own high-availability solutions. For example, Sun Microsystems' Cluster 2.2 contains cluster-aware solutions for many common applications, such as the Oracle and Informix databases, SAP R/3, and the Netscape-Sun Alliance iPlanet messaging and Web servers. These solutions modify the behavior of the applications, so if they fail on one machine in the cluster, they can resume on another.
