What is a pass-through load balancer? How is it different from a proxy load balancer?

The Google Cloud Network load balancer is a pass-through load balancer, not a proxy load balancer ( https://cloud.google.com/compute/docs/load-balancing/network/ ).

I can't find many resources on pass-through LBs in general. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would redirect clients directly to the servers. In what scenarios would it be beneficial?

Are there any other type of load balancers except pass-through and proxy?

Uvula answered 4/4, 2017 at 11:27 Comment(1)
Relevant link: blog.envoyproxy.io/… – Deloisedelong

It's hard to find resources on pass-through load balancing because everyone came up with a different name for it: pass-through, direct server return (DSR), direct routing, and so on.

We'll call it pass-through here.

Let me try to explain the thing: a pass-through load balancer forwards the client's packets to a backend without rewriting the source or destination IP addresses. The backend answers the client directly, so return traffic does not have to flow back through the load balancer. A proxy load balancer, in contrast, terminates the client's connection and opens a brand-new connection to the backend.
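
To make the contrast concrete, here is a minimal, hypothetical sketch of the proxy model in Python (the backend addresses and ports are made up, and a real product would do much more):

    import socket
    import threading

    # Hypothetical backend addresses; a real deployment would discover these.
    BACKENDS = [("10.0.0.2", 8080), ("10.0.0.3", 8080)]


    def pipe(src, dst):
        """Copy bytes from one socket to the other until EOF."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            dst.close()


    def serve(listen_addr=("0.0.0.0", 8080)):
        lb = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        lb.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        lb.bind(listen_addr)
        lb.listen()
        backend_index = 0
        while True:
            client, client_addr = lb.accept()                 # connection #1: client <-> proxy
            backend_addr = BACKENDS[backend_index % len(BACKENDS)]
            backend_index += 1
            backend = socket.create_connection(backend_addr)  # connection #2: proxy <-> backend
            # The backend sees the proxy's IP as the peer, not client_addr --
            # the very thing a pass-through load balancer avoids.
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()


    if __name__ == "__main__":
        serve()

A pass-through load balancer has no such user-space step: it forwards the client's packets below the TCP layer, so there is only ever one TCP connection, directly between the client and the backend.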

Regarding other load balancer types, there can't be a definitive list; here are a few examples:

  • NAT-based load balancing, which rewrites the destination (and sometimes the source) IP of each packet before forwarding it.

  • TCP (or SSL) proxying, where the load balancer terminates the client's connection and opens a new one to the backend.

  • HTTP(S) proxying, which works at layer 7 and can route based on URLs, headers, cookies, and so on.

As for the advantages of pass-through over other methods:

  • Some applications won't work, or need to be adapted, if the addresses on the IP packets are changed; the SIP protocol is one example. See Wikipedia for more on applications that don't play well with NAT: https://en.wikipedia.org/wiki/Network_address_translation#NAT_and_TCP/UDP.

    Here the advantage of pass-through is that it does not change the source and destination IPs (see the sketch after this list).

    Note that there is a trick for a load balancer working at a higher layer to keep the IPs: the load balancer spoofs the IP of the client when connecting to the backends. As of this writing no load balancing product uses this method in Compute Engine.

  • If you need more control over the TCP connection from the client, for example to tune the TCP parameters, pass-through (or NAT) has an advantage over a TCP (or higher-layer) proxy, which terminates the client's connection at the load balancer (also shown in the sketch below).
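
As a rough illustration of both points, here is a minimal backend sketch (the port is arbitrary). The peer address reported by accept() and the TCP options set on the accepted socket refer to the real client connection only when the load balancer is pass-through; behind a proxy they refer to the proxy's leg of the traffic:

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen()

    while True:
        conn, peer = server.accept()
        # First point: with pass-through, `peer` is the real client address;
        # with a proxy, it is the load balancer's address.
        print(f"connection from {peer[0]}:{peer[1]}")
        # Second point: TCP tuning on this socket reaches the client directly
        # with pass-through; with a proxy it only affects the proxy<->backend leg.
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        conn.sendall(b"hello\n")
        conn.close()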

Fidelafidelas answered 10/1, 2018 at 17:7 Comment(0)

An important consideration when designing an application accessed by end users is load balancing. Load balancing takes user requests and distributes them across multiple instances of your application, which helps keep the application from running into performance problems when there is a spike in user activity.

The load balancing options available in Google Cloud can be divided into those that operate at Layer 7 of the OSI model and those that operate at Layer 4. As a review, Layer 7 is the application layer of the protocol stack, where applications or processes exchange data with each other over connections carried by the lower layers; HTTP and FTP are examples of Layer 7 protocols. Layer 4 is the transport layer, which provides host-to-host communication on top of the network layer; TCP and UDP are Layer 4 protocols.

Google Cloud offers both external and internal load balancers. The HTTP(S) load balancers live at Layer 7 of the OSI model, while the SSL proxy, TCP proxy, and network (TCP/UDP) load balancers reside at Layer 4; both the HTTP(S) and the network load balancers come in external and internal variants.

In Google Cloud, a load balancer is either proxied or pass-through. Proxied load balancers terminate incoming connections and open new connections to the backends on your behalf; pass-through load balancers pass the connections straight through to the backends.
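
To see what the difference means for an application, here is a minimal, hypothetical HTTP backend in Python. Behind a pass-through load balancer the TCP peer address already is the client; behind a proxied load balancer the peer is the proxy, and the original client IP typically has to be recovered from a header such as X-Forwarded-For (the header handling and port below are assumptions about the setup, not guarantees):

    from http.server import BaseHTTPRequestHandler, HTTPServer


    class WhoAmIHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            peer_ip = self.client_address[0]                 # proxy IP if proxied, client IP if pass-through
            forwarded = self.headers.get("X-Forwarded-For")  # usually set by proxy LBs, absent with pass-through
            client_ip = forwarded.split(",")[0].strip() if forwarded else peer_ip
            body = f"peer={peer_ip} client={client_ip}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), WhoAmIHandler).serve_forever()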

Stuart answered 1/3, 2024 at 13:32 Comment(0)
