How to ensure business continuity in an IT scenario:

An IT system is considered Highly Available (HA) when it is accessible at all times with minimal or no downtime over a specified assessment period (e.g., a year).
HA is usually specified in terms of uptime, such as 99.9% over a year or a month, and includes any planned shutdowns for maintenance and system upgrades. Naturally, higher uptime figures come at a higher cost.
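As a quick sanity check, the downtime budget implied by an uptime figure can be computed directly (a minimal sketch; the function name is illustrative):

```python
# Translate an uptime percentage into the downtime budget it allows.
def downtime_allowed(uptime_pct: float, period_hours: float) -> float:
    """Return the maximum downtime (in hours) permitted by an uptime target."""
    return period_hours * (1 - uptime_pct / 100)

# 99.9% over a 365-day year allows ~8.76 hours of downtime.
yearly = downtime_allowed(99.9, 365 * 24)

# 99.9% over a 30-day month allows ~0.72 hours (~43 minutes).
monthly = downtime_allowed(99.9, 30 * 24)

print(f"Yearly budget: {yearly:.2f} h, monthly budget: {monthly:.2f} h")
```

Note how the same percentage translates into very different absolute windows depending on the assessment period, which is why the period must always be stated alongside the figure.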

Before setting an HA target, ask yourself:

- Do you serve users only in certain geographies or time zones, or must the system be available globally?
- Do you provide services only in fixed time windows (such as 9 am to 6 pm), or 24/7?
- Do you experience peak loads, such as during festive periods?
- What is the risk of revenue loss if systems go down for a certain period, given your business domain (retail, banking, etc.)?
- What percentage of your IT budget can you spend relative to your revenue model?

Given below are some of the best practices for maintaining high availability.

[Infographic 01: best practices for maintaining high availability]

Sounds too technical? Let's look at a simple example.

If a company has a single instance of an email server serving 1,000 employees, and that instance goes down due to an incident in the data center, the email system has a single point of failure. This can be avoided by keeping a redundant instance of the email server running. Of course, both instances need to be configured for failover, so that the standby instance can take over the service if the primary fails. As mentioned earlier, both instances should be hosted independently and separated from each other.
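The failover decision described above can be sketched in a few lines. This is a toy model, not a real mail-server API; the instance names and the `is_healthy` flag are illustrative assumptions:

```python
# Minimal failover sketch: traffic goes to the primary instance while it is
# healthy, and is diverted to the standby when the primary goes down.
from dataclasses import dataclass

@dataclass
class MailServer:
    name: str
    is_healthy: bool = True

def active_server(primary: MailServer, standby: MailServer) -> MailServer:
    """Return the instance that should serve requests right now."""
    return primary if primary.is_healthy else standby

primary = MailServer("mail-dc1")
standby = MailServer("mail-dc2")

assert active_server(primary, standby).name == "mail-dc1"
primary.is_healthy = False  # data-center incident takes dc1 down
assert active_server(primary, standby).name == "mail-dc2"
```

In practice the health check and the switchover are performed by dedicated failover software or a load balancer, not by application code.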

How to achieve high availability (HA)?

The main categories of IT infrastructure resources are network and storage. Thanks to recent technological advances, these can be controlled by software (i.e., automated systems), whether on-premises, in the cloud, or a mixture of both. Maintaining HA across such complex, combined infrastructure is therefore challenging.

The graphic below shows the various options. In most cases a combination is applied:

[Graphic: Means for HA]

Clustering – A cluster is a logical group of interconnected, independent servers (called nodes) that provide services to their clients with failover and redundancy features. A cluster implements most of the HA practices above and is usually a pre-built solution provided by the server vendor, although it is possible to build one yourself. Clients get a seamless experience, because the cluster appears to them as a single logical server.

Therefore, multiple databases, application servers, and web servers can be clustered to provide HA. However, clustering is historically an "on-premises" solution. Looking at recent trends, many IT operations have migrated to the cloud under "pay-per-use" and "data-center-as-a-service" models.

These can be a public cloud such as Amazon AWS or Microsoft Azure, or a private cloud built using tools like OpenStack. The difference is that some of the out-of-the-box features provided by clustering are not available here; instead, various tools and APIs are being developed that can provide each feature individually, such as state persistence, caching/storage, and so on. A myriad of tools in the DevOps field are used to provide effective HA using dynamic server farms.

Redundancy using hot, warm, or cold standby servers –

Redundancy is set up by deploying separate standby server instances alongside the active ones. These remain inactive during normal operation, and become active in place of the existing servers, taking over request handling, when a failure occurs. Standby instances may be kept in hot, warm, or cold states depending on uptime and disaster-recovery requirements.

A hot server is an active member capable of serving requests at any time, but the load balancer treats it as redundant and only diverts requests to it in case of failure. A warm server has the necessary software installed and is up and running, but does not serve any requests. A cold server is similar to a warm one, except that it is not running.
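The three standby flavours trade cost against recovery time. A toy model of the steps each state must complete before it can serve traffic (state names and step lists are illustrative assumptions, not measurements):

```python
# Toy model of standby states: a hot standby can take over immediately,
# a warm one must be switched into service, and a cold one must boot first.
RECOVERY_STEPS = {
    "hot":  [],                                         # running and serving-capable
    "warm": ["enable_traffic"],                         # running, not yet in service
    "cold": ["boot", "start_services", "enable_traffic"],  # powered off
}

def failover_steps(standby_state: str) -> list[str]:
    """Return the steps needed before a standby of this state can serve."""
    return RECOVERY_STEPS[standby_state]

# The hotter the standby, the fewer steps (and the faster the failover),
# at the price of keeping more hardware running all the time.
assert len(failover_steps("hot")) < len(failover_steps("warm")) < len(failover_steps("cold"))
```

This is why hot standbys suit strict uptime targets, while cold standbys are acceptable when longer recovery windows are tolerable.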

Load-balancing –

A load balancer is a tool that receives all incoming requests and diverts each one to an active node (server) based on an algorithm such as round-robin, least connections, etc. Load balancers help implement many HA practices.
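The two algorithms mentioned above can be sketched in a few lines of Python (node names and connection counts are illustrative assumptions):

```python
# Sketch of two common load-balancing algorithms over a pool of nodes.
from itertools import cycle

nodes = ["app-1", "app-2", "app-3"]

# Round-robin: hand requests to nodes in rotation.
rr = cycle(nodes)
first_four = [next(rr) for _ in range(4)]
# first_four == ["app-1", "app-2", "app-3", "app-1"]

# Least connections: pick the node currently serving the fewest requests.
open_connections = {"app-1": 12, "app-2": 3, "app-3": 7}
target = min(open_connections, key=open_connections.get)
# target == "app-2"
```

Real load balancers (hardware appliances or software such as HAProxy and NGINX) combine these algorithms with health checks, so failed nodes are removed from the rotation automatically.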
