
A Guide to DNS-Based Load Balancing

Modern high-traffic websites such as Facebook and Google serve billions of requests every day, which works out to millions of requests per minute.

In fact, “Google now processes over 40,000 search queries every second on average, which translates to over 3.5 billion searches per day and 1.2 trillion searches per year worldwide,” according to Internet Live Stats.

To handle this enormous number of requests, Google needs a complex set of web servers and load balancing solutions to distribute the incoming requests among hundreds or thousands of web servers.

The reason is that a single web server can serve only a limited number of web requests due to its limited hardware resources. That is why high-traffic websites set up data centers and web server farms to serve such enormous volumes of requests.

This raises the question: “How is an incoming request forwarded to one of the servers?” Or, put differently, “How do we ensure that incoming requests are distributed across the list of web servers so that no single web server gets overloaded by an unexpectedly heavy load?”

What is Load Balancing?

Load balancing is the process of distributing incoming network requests over a list of web servers, also known as a web server farm or web server pool. This process helps to cost-effectively scale and meet the high volume of requests coming to any modern application or website. It is handled by load balancers, which act as traffic cops sitting between incoming requests and web servers, routing the requests across the web servers.

A load balancer uses algorithms to route requests efficiently, improving speed and maximizing utilization of the servers while making sure that no single server is overloaded. If a web server goes down, the load balancer redirects traffic to the remaining online servers. If a new web server is added, the load balancer automatically starts sending requests to it.
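To make this concrete, here is a minimal Python sketch of the selection step a load balancer might perform, assuming a hypothetical backend pool, random selection as the routing policy, and a simple TCP connect as the health check; real load balancers probe servers out of band and support much richer policies.

import random
import socket

# Hypothetical backend pool; the addresses are placeholders, not real servers.
SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def is_healthy(ip, port=80, timeout=1.0):
    """Treat a backend as healthy if it accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server():
    """Choose a server from the backends that are currently up."""
    healthy = [ip for ip in SERVERS if is_healthy(ip)]
    if not healthy:
        raise RuntimeError("no healthy backends available")
    return random.choice(healthy)

# A server added to SERVERS becomes eligible on the very next request,
# and a server that stops responding is skipped automatically.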

There are numerous types of load balancing solutions, each working somewhat differently. Among them, DNS-based load balancing has gained popularity in recent years because it is one of the easiest ways to balance load.

DNS-Based Load Balancing

According to Cloudflare, “DNS-based load balancing is a specific type of load balancing that uses the DNS to distribute traffic across several servers. It does this by providing different IP addresses in response to DNS queries. Load balancers can use various methods or rules for choosing which IP address to share in response to a DNS query.” The Domain Name System (DNS) is often described as the Internet’s phonebook or yellow pages: it converts a domain name (say, “google.com”) into the address of its web server(s), 142.250.66.238 being one of them.

This group of numbers is called an IP address, the digital address of any computer or device connected to the Internet. In DNS-based load balancing, the load balancer can send requests from specific sets of IP addresses to a specific server. For example, a DNS-based load balancer might direct all requests from addresses starting with 140 through 149 to web server A and those starting with 150 through 159 to web server B. This is a very basic illustration of how a DNS-based load balancing solution works.
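As a toy sketch of that range-based example (the octet ranges and server names here are purely illustrative, not taken from any real deployment):

def route_by_first_octet(client_ip):
    """Map a client IP to a backend based on its first octet."""
    first_octet = int(client_ip.split(".")[0])
    if 140 <= first_octet <= 149:
        return "web-server-A"
    if 150 <= first_octet <= 159:
        return "web-server-B"
    return "default-server"

print(route_by_first_octet("142.250.66.238"))  # web-server-A
print(route_by_first_octet("152.250.66.238"))  # web-server-B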

That is, a DNS-based load balancing solution directs one set of incoming network requests to the IP address of web server A and another set to the IP address of web server B. Some requests for “google.com” are answered with web server A’s IP address, 142.250.66.238, while others are answered with web server B’s, 152.250.66.238 (notice the first octet in these two addresses). But how do load balancers select a web server from the list of servers? They use one or more load balancing techniques such as round-robin, weighted round-robin, least connection, weighted least connection, resource-based (adaptive), resource-based (SDN-adaptive), fixed weighting, weighted response time, and more.
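On the client side, the effect of DNS-based load balancing shows up in the A records a resolver returns for a hostname. The short Python sketch below uses only the standard library; the addresses and their order depend on the domain’s DNS configuration and on intermediate resolvers, so the output will not match the example addresses above.

import socket

# Look up the IPv4 addresses the resolver currently returns for a hostname.
hostname = "google.com"
name, aliases, ip_list = socket.gethostbyname_ex(hostname)
print(f"{name} resolves to: {ip_list}")

# A DNS-based load balancer varies which address is returned (or returned
# first) across queries, so repeated lookups spread clients over the pool.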

Round Robin Load Balancing

Round robin is one of the most popular techniques utilized in load balancers. “Round‑robin load balancing is one of the simplest methods for distributing client requests across a group of servers.

Going down the list of servers in the group, the round-robin load balancer forwards a client request to each server in turn. When it reaches the end of the list, the load balancer loops back and goes down the list again (sends the next request to the first listed server, the one after that to the second server, and so on),” according to NGINX, one of the most popular web server and load balancer software packages. The other techniques listed above take a somewhat different approach to choosing the web server a request is directed to, but they all serve the same purpose: selecting one web server from the pool for each request.
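As a rough illustration of the rotation NGINX describes, here is a minimal round-robin selector in Python over a hypothetical list of server names; a real load balancer would combine this with health checks, weights, and connection handling.

class RoundRobin:
    """Hand each request to the next server in the list, wrapping around."""

    def __init__(self, servers):
        self.servers = servers
        self.index = 0

    def next_server(self):
        server = self.servers[self.index]
        self.index = (self.index + 1) % len(self.servers)
        return server

rr = RoundRobin(["server-1", "server-2", "server-3"])
for request_id in range(5):
    print(f"request {request_id} -> {rr.next_server()}")
# request 0 -> server-1, 1 -> server-2, 2 -> server-3, 3 -> server-1, ...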

Key Takeaways

A single web server can serve only a limited number of web requests due to its limited hardware resources, so high-traffic websites must set up data centers and web server farms. These sites must ensure that incoming requests are distributed across their pool of web servers to avoid overwhelming any single server. There are various types of load balancing solutions, with DNS-based load balancing being a popular option. One or more load balancing techniques, such as round-robin, are used to distribute client requests across a group of servers.
