Redundancy in System Design

Redundancy is the duplication of critical components or servers in a system, with the aim of scaling the system up and reducing overall downtime.

For example, as shown in the image below, we duplicate the server. If one server goes down, the redundant server is still available to take over and handle the load.

Redundancy in System Design

Looking at the image above, you might wonder how these connections are handled: how do we shift the load over to another server and avoid sending requests to servers that are already down? This is where we introduce a new term, the load balancer.

Load Balancer

A load balancer works as a “traffic cop” sitting in front of your servers and routing client requests across all of them. It distributes the set of requested operations (database writes, cache queries, and so on) effectively across multiple servers and ensures that no single server bears so many requests that the overall performance of the application degrades. A load balancer can be a physical device, a virtualized instance running on specialized hardware, or a software process.
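
To make this concrete, here is a minimal sketch (in Python, not tied to any particular product or library) of a round-robin load balancer that skips servers marked as down; the Server class and its healthy flag are illustrative assumptions, not a real API.

```python
import itertools

class Server:
    """Stand-in for a backend server (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        return f"{self.name} handled {request}"

class RoundRobinLoadBalancer:
    """Distributes requests across healthy servers in round-robin order."""
    def __init__(self, servers):
        self.servers = servers
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        # Try each server at most once per request; skip unhealthy ones.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server.healthy:
                return server.handle(request)
        raise RuntimeError("No healthy servers available")

# Usage: two redundant servers behind one load balancer.
lb = RoundRobinLoadBalancer([Server("server-1"), Server("server-2")])
print(lb.route("GET /home"))   # served by server-1
lb.servers[0].healthy = False  # simulate server-1 going down
print(lb.route("GET /home"))   # traffic shifts to server-2
```
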
Consider a scenario where an application is running on a single server and the client connects to that server directly without load balancing. It will look something like the one below.

Load Balancer

How to handle the unavailability of a Load Balancer?

If the load balancer becomes unavailable, the servers behind it become unreachable and the system goes into downtime. To handle such cases, we generally opt for one of two techniques:

  1. Way 1: Using a backup load balancer: a primary and a secondary load balancer run as a pair, using the concepts of a ‘floating IP’ and a ‘health check’; if the primary fails its health check, the floating IP moves to the secondary.
  2. Way 2: Using a DNS server: for newcomers, this works in much the same way as the redundancy principle, with DNS able to point clients to more than one load balancer.

Note: Remember that DNS does not keep track of whether a load balancer is actually working, so we introduce a monitor to perform that health check when DNS is used (see the sketch below).
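
As a rough illustration of the health-check idea used by both techniques, the sketch below probes a primary and a secondary load balancer and reports which one should currently receive traffic; the hostnames and the /health endpoint are assumptions made purely for this example.

```python
import urllib.request

# Hypothetical addresses of the primary and secondary load balancers.
PRIMARY = "http://lb-primary.example.com/health"
SECONDARY = "http://lb-secondary.example.com/health"

def is_healthy(health_url, timeout=2):
    """Health check: the balancer is considered up if /health returns HTTP 200."""
    try:
        with urllib.request.urlopen(health_url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_balancer():
    """Fail over to the secondary when the primary's health check fails."""
    if is_healthy(PRIMARY):
        return PRIMARY
    if is_healthy(SECONDARY):
        return SECONDARY
    raise RuntimeError("Both load balancers are down")

# A monitor would run this check periodically and repoint the
# floating IP / DNS record to whichever balancer is healthy.
try:
    print("Route traffic via:", active_balancer())
except RuntimeError as err:
    print("Alert:", err)
```
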

Tip: In system design, failures are inevitable; we cannot eliminate them completely, we can only work on minimizing them.

Availability also depends directly on geographic location. For instance, if the systems serving a service go down in one particular location (say, India), the same service remains available from another location, keeping it operational. In the real world, complete hardware is kept available across various locations so that the service is not hampered at any cost.

Important Key Concepts and Terminologies – Learn System Design

System Design is the core concept behind the design of any distributed system. It is defined as the process of creating an architecture for the different components, interfaces, and modules of a system, and providing the corresponding data that helps implement those elements.

In this article, we’ll cover the standard terms and key concepts of system design and performance, such as:

  • Latency
  • Throughput
  • Availability
  • Redundancy
  • Time
  • CAP Theorem
  • Lamport’s Logical Clock Theorem


Let us see them one by one.


Throughput in System Design

Throughput is the measure of the amount of data transmitted successfully through a system in a certain amount of time. In simple terms, throughput is how much data gets through successfully over a period of time....
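
As a rough back-of-the-envelope illustration (the numbers below are made up), throughput is simply the amount of data transferred divided by the elapsed time:

```python
# Hypothetical example: 500 MB transferred in 40 seconds.
data_transferred_mb = 500
elapsed_seconds = 40

throughput_mbps = data_transferred_mb / elapsed_seconds  # megabytes per second
print(f"Throughput: {throughput_mbps:.1f} MB/s")  # Throughput: 12.5 MB/s
```
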

Latency in System Design

Latency is the amount of time required for a single unit of data to be delivered successfully. Latency is measured in milliseconds (ms)....
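
One simple way to observe latency is to time a single operation; the sketch below measures one simulated request using Python's time.perf_counter (fake_request is a stand-in for a real network call, not an actual API):

```python
import time

def fake_request():
    """Stand-in for a real network call (assumption for illustration)."""
    time.sleep(0.05)  # pretend the round trip takes about 50 ms

start = time.perf_counter()
fake_request()
latency_ms = (time.perf_counter() - start) * 1000
print(f"Latency: {latency_ms:.1f} ms")  # roughly 50 ms
```
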

Availability in System Design

Availability is the percentage of time the system is up and working to serve requests....
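
Availability is usually computed as uptime divided by total time; the figures below are invented purely to show the calculation:

```python
# Hypothetical month: 30 days total, 43 minutes of downtime.
total_minutes = 30 * 24 * 60
downtime_minutes = 43

availability = (total_minutes - downtime_minutes) / total_minutes * 100
print(f"Availability: {availability:.3f}%")  # about 99.900% ("three nines")
```
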

Redundancy in System Design

Redundancy is the duplication of critical components or servers in a system, with the aim of scaling the system up and reducing overall downtime....

Consistency in System Design

Consistency refers to data uniformity across a system....

Time in System Design

Time is a measure of the sequence of events happening, measured here in seconds (its SI unit). It is measured using a clock, which is of two types: a physical clock, responsible for time between systems, and a logical clock, responsible for time within a system....
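
As a minimal illustration of the distinction (the event names are made up): a physical clock reads wall-clock time shared between systems, while a logical clock is just a counter that orders events within one process:

```python
import time

# Physical clock: wall-clock time, used to relate events across systems.
print("Physical clock:", time.time(), "seconds since the Unix epoch")

# Logical clock: a counter that only orders events within this process.
logical_clock = 0
for event in ["write A", "write B", "read A"]:
    logical_clock += 1
    print(f"Logical time {logical_clock}: {event}")
```
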

CAP Theorem In System Design

CAP refers to three desirable characteristics of distributed systems with replicated data: consistency (among the replicated copies), availability, and partition tolerance (continuing to operate when a network fault partitions the nodes of the system). According to this theorem, in a distributed system with data replication it is not possible to ensure all three of these properties at the same time; a networked shared-data system can strongly support only two of the three....

Lamport’s Logical Clock Theorem

Lamport’s Logical Clock is a procedure to ascertain the sequence in which events take place. It acts as the foundation for the more complex Vector Clock Algorithm. A logical clock is required because a distributed operating system lacks a global clock (Lamport)....
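
Below is a minimal sketch of Lamport's two rules: increment the clock on every local event (including sends), and on receiving a message set the clock to the maximum of the local clock and the message timestamp, plus one. The two processes exchanging a message here are hypothetical.

```python
class LamportClock:
    """Lamport's logical clock: orders events without a global physical clock."""
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1                     # rule 1: tick on every local event
        return self.time

    def send(self):
        self.time += 1                     # sending a message is also an event
        return self.time                   # this timestamp travels with the message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1   # rule 2: jump past the sender
        return self.time

# Two hypothetical processes exchanging one message.
p1, p2 = LamportClock(), LamportClock()
p1.local_event()                 # p1 time -> 1
ts = p1.send()                   # p1 time -> 2, message stamped 2
p2.local_event()                 # p2 time -> 1
print("p2 on receive:", p2.receive(ts))   # max(1, 2) + 1 = 3
```
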