What is Load Balancing
Date: 20.02.2020 · Author: Kron Product Team
How to use a load balancer, and in which cases it is used.

1. What is load balancing?

Load balancing is a key component of highly available (high-availability) infrastructures. It is widely used to improve the performance and reliability of websites, applications, databases, and other services by distributing the workload across multiple servers. With this process, the system's users are distributed efficiently among a set of servers, called a server group or server pool, either equally or according to specified rules. The systems that perform these load balancing operations, for both application and database servers, are called "load balancers".

On a system where load balancing is not used, users who want to access a web service such as "domainname.com" connect directly to the single web server that the domain name points to. As a result, if a problem occurs on that server, users will not be able to access the website. In addition, if many users try to access the site simultaneously, load times may slow down and access may eventually be interrupted, because the overload cannot be handled by a single server. This "single point of failure" condition can be eliminated by adding at least one additional server and a load balancer to the system architecture. Typically, the servers running behind the load balancer serve the same content, so users see the same output no matter which server responds.

2. Which traffic types can be balanced with load balancing?

Balancing rules can be created for load balancers for four basic traffic types:

- HTTP: Requests arriving at a standard HTTP balancer are routed with standard HTTP techniques. The load balancer sets the X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to inform the backend system about the original request.
- HTTPS: The same process as HTTP balancing is used; the only difference is that HTTPS traffic is encrypted. The encryption can be handled in two different ways. In the first method, the encrypted connection is maintained all the way to the backend (SSL passthrough). In the second method, the encryption and decryption load is taken off each server and moved onto the load balancer, which both reduces the strain on the servers and can lower costs. This process, in which SSL terminates on the load balancer, is called SSL offloading.
- TCP: Traffic for applications that do not use HTTP or HTTPS can be balanced as TCP. For example, traffic to a database cluster can be spread across all servers.
- UDP: More recently, some load balancers have added support for balancing core internet protocols that use UDP, such as DNS and syslogd.

These balancing rules define the protocol and port on the load balancer and match them to the protocol and port the load balancer will use to route traffic to the backend.
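As an illustration of the HTTP case, here is a minimal sketch of a forwarding proxy that sets the X-Forwarded-* headers before relaying a request to a single backend. The listening port, the backend address, and the use of Python's standard http.server and urllib modules are assumptions made only for this example; a production load balancer is considerably more involved.

```python
# Minimal sketch: an HTTP forwarder that adds the X-Forwarded-* headers
# described above before relaying the request. The backend address and
# listening port are illustrative assumptions; error handling is omitted.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND = "http://127.0.0.1:9000"   # assumed single backend server

class ForwardingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip, _ = self.client_address
        headers = {
            # Tell the backend system who made the original request.
            "X-Forwarded-For": client_ip,
            "X-Forwarded-Proto": "http",   # would be "https" with SSL offloading
            "X-Forwarded-Port": str(self.server.server_port),
        }
        with urlopen(Request(BACKEND + self.path, headers=headers)) as resp:
            status, body = resp.status, resp.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ForwardingHandler).serve_forever()
```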
3. How does load balancing work?

How do load balancers select the backend server to route traffic to? They use a combination of two factors to forward requests. First, they make sure that the servers they can choose from are actually able to respond to requests. Then, they use a pre-configured rule to choose among the servers that can respond (the healthy pool).

4. Health checks

Load balancing systems must send traffic only to backend servers that are considered healthy. If a server fails a health check and therefore cannot respond to requests, it is automatically removed from the pool, and no traffic is sent to that server until it passes the health check again.

5. Load balancing algorithms

Load balancing algorithms determine which backend server traffic is forwarded to. The most commonly used algorithms are listed below; a short code sketch at the end of this article illustrates them.

- Round robin: Servers are selected sequentially and traffic is shared out in that order. The load balancer selects the first server in the list for the first request and moves on to the next server in the list for the next request. When it reaches the end of the list, selection continues from the top of the list again.
- Least connection: The load balancer selects the server with the fewest active connections. This algorithm is recommended when traffic results in longer sessions.
- Source (IP hash): The load balancer uses the client's IP address to determine which server will receive the request. This method ensures that a specific user consistently connects to the same server.
- Random assignment: Random assignment, the least structured of all load balancing methods, does exactly what it says: it randomly assigns each workload to a server in the server pool. The theory behind it is simpler than it sounds. In probability theory, the law of large numbers says that as a sample grows, its average converges to the expected value. Applied here, randomly assigning workloads to servers in the pool means that each server ends up handling approximately the same amount of work, even if individual workloads are not equal.

6. Why should load balancing be used?

Load balancing maximizes accessibility and server continuity, ensuring that the server system is always up and running for its users. The user experience also improves, since there are no delays or access interruptions even during periodic traffic spikes. The risk of a "single point of failure" is eliminated, and because users are redirected to the most appropriate application/database resources, application/database optimization is achieved as well.
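Putting sections 3 to 5 together, the sketch below first filters the server pool down to healthy members and then applies one of the selection algorithms described above. The Server class, its fields, and the example server names are assumptions made purely for illustration; they do not come from any particular load balancer product.

```python
# Minimal sketch of backend selection: restrict the pool to healthy servers
# (section 4), then pick one with round robin, least connection, source IP
# hash, or random assignment (section 5). All names here are illustrative.
import hashlib
import random
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    healthy: bool = True          # would be updated by periodic health checks
    active_connections: int = 0   # would be updated as connections open/close

class Balancer:
    def __init__(self, servers):
        self.servers = servers
        self._counter = 0  # round-robin cursor

    def healthy_pool(self):
        # Traffic is only ever routed to servers that pass the health check.
        return [s for s in self.servers if s.healthy]

    def round_robin(self):
        # Pick servers in order, starting again from the top at the end of the list.
        pool = self.healthy_pool()
        server = pool[self._counter % len(pool)]
        self._counter += 1
        return server

    def least_connection(self):
        # Pick the server with the fewest active connections.
        return min(self.healthy_pool(), key=lambda s: s.active_connections)

    def source_ip_hash(self, client_ip):
        # Hash the client's IP so the same client keeps reaching the same server.
        pool = self.healthy_pool()
        digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
        return pool[digest % len(pool)]

    def random_assignment(self):
        # Assign the request to a random healthy server.
        return random.choice(self.healthy_pool())

if __name__ == "__main__":
    lb = Balancer([Server("web-1"), Server("web-2"), Server("web-3")])
    lb.servers[1].healthy = False                      # simulate a failed health check
    print([lb.round_robin().name for _ in range(4)])   # ['web-1', 'web-3', 'web-1', 'web-3']
    print(lb.source_ip_hash("203.0.113.7").name)       # same server on every call
```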