HAProxy in front of Varnish, or the other way round?

I can imagine two setups:

Load-balance then cache

                          +-- Cache server #1 (varnish) -- App server #1
                         /
Load Balancer (haproxy)-+---- Cache server #2 (varnish) -- App server #2
                         \
                          +-- Cache server #3 (varnish) -- App server #3

Cache then load-balance

                                                       +-- App server #1
                                                      /
Cache Server (varnish) --- Load Balancer (haproxy) --+---- App server #2
                                                      \
                                                       +-- App server #3

The problem with the first setup is that there are multiple caches, which wastes a lot of memory and makes invalidating cache more complicated.

The problem with the second setup is the potential performance hit of an extra hop, plus two single points of failure (Varnish and HAProxy) instead of just one (HAProxy).

I'm tempted to go with the second setup because both HAProxy and Varnish are supposed to be fast and stable: what's your opinion?

Credible answered 16/3, 2013 at 10:34 Comment(0)

I built a similar setup a few years back for a busy web application (only I did it with Squid instead of Varnish), and it worked out well.

I would recommend using your first setup (HAProxy -> Varnish) with two modifications:

  1. Add a secondary HAProxy server using keepalived and a shared virtual IP
  2. Use the balance uri load balancing algorithm to optimize cache hits

Pros:

  • Peace of mind with HAProxy (x2) and Varnish (x3) redundancy
  • Better hit rates on Varnish thanks to HAProxy's URI load-balancing option
  • Better performance from the cache servers as they don't need to keep as much in memory
  • Invalidating cache is easier since the same URI will go to the same server every time

Cons:

  • URI balancing works well, but if a cache server goes down your backend servers will take a hit: the remaining cache server(s) that pick up the slack from the re-computed URI hash have to re-fetch the data that was cached on the failed node. Maybe not a big con, but I did have to keep it in mind for my system.
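The two recommendations above can be sketched in an HAProxy configuration like this (server names, addresses, and ports are illustrative, not from the original answer):

```
# Sketch: hash the request URI so the same URL always lands on the
# same Varnish node. "hash-type consistent" limits how many URIs get
# remapped when a cache server goes down.
backend varnish_nodes
    balance uri
    hash-type consistent
    server varnish1 10.0.0.11:6081 check
    server varnish2 10.0.0.12:6081 check
    server varnish3 10.0.0.13:6081 check
```

The secondary HAProxy from point 1 would be an identical instance on a second machine, with keepalived floating a shared virtual IP between the two.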
Sherard answered 3/6, 2013 at 16:37 Comment(2)
But it would seem that URI balancing would mean I can't do any other balancing (e.g. workload-based balancing) for items that are not cached? Would it make sense to forward non-cached requests on to another pair of HAProxys for this?Dickman

Both setups have pros and cons. There is more detail in the blog article below, including the configuration for both HAProxy and Varnish: http://blog.exceliance.fr/2012/08/25/haproxy-varnish-and-the-single-hostname-website/

Baptiste

Defect answered 16/3, 2013 at 13:47 Comment(2)
This answer would be more useful if you could include the relevant information from the article, rather than just linking to it.Antidisestablishmentarianism
@Baptiste: the author of the blog article (you?) suggests an interesting architecture. But I'm not sure about his definition of "dynamic content". For example, a user's home page may contain 90% content shared with every other user (banner, footer, ads, today's news...) and only 10% truly personalized content (most of which probably does not change every second). Therefore, it would be nice to use Varnish's ESI feature to have the common, cacheable part of the user's home page actually be cached. And can't Varnish cache a user's personal but fairly static data? Thanks for your advice.Credible
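The ESI idea raised in the comment can be sketched in VCL (Varnish 4+ syntax; the URL patterns are hypothetical examples, not from the article):

```
# Sketch: enable ESI processing on pages that embed per-user fragments,
# and keep the personalized fragment itself out of the cache.
sub vcl_backend_response {
    if (bereq.url ~ "^/home") {
        set beresp.do_esi = true;       # parse <esi:include> tags in the page
    }
    if (bereq.url ~ "^/fragments/user") {
        set beresp.uncacheable = true;  # personal data: fetch on every request
        set beresp.ttl = 0s;
    }
}
```

The shared 90% of the page then gets cached once for everyone, while only the small personalized include goes back to the application.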

Why not use two load balancers? The first can use the balance uri option; the second can use a strategy of your choice (workload, round robin):

          +-- Cache Server #1 --+                +-- App server #1
         /                       \              /
LB #1 --+                         +-- LB #2 --+---- App server #2
         \                       /              \
          +-- Cache Server #2 --+                +-- App server #3

Scale where you need, as much as you need. If you find that the cache isn't your bottleneck, simply remove LB #1 and put a single cache server in front.
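The two tiers above could be sketched as two separate HAProxy instances (all names, addresses, and ports here are illustrative assumptions):

```
# LB #1 (edge instance): pin each URI to one cache server
frontend fe_edge
    bind *:80
    default_backend caches
backend caches
    balance uri
    server cache1 10.0.0.11:6081 check
    server cache2 10.0.0.12:6081 check

# LB #2 (separate instance behind the caches): spread cache misses
# over the app servers with a strategy of your choice
frontend fe_apps
    bind *:8080
    default_backend apps
backend apps
    balance roundrobin
    server app1 10.0.1.1:8000 check
    server app2 10.0.1.2:8000 check
    server app3 10.0.1.3:8000 check
```

In practice these would live in two haproxy.cfg files on two hosts; they are shown together here only for compactness.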

Maines answered 17/1, 2017 at 19:53 Comment(0)

Of course the first one!

With HAProxy configured for URI-based balancing. (Unlike with IP-based balancing, you will need to share your application's user sessions across servers, if you have any.)

Especially if you need an HTTPS endpoint, since Varnish doesn't talk HTTPS.
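The HTTPS point can be sketched like this: HAProxy terminates TLS and hands plain HTTP to the Varnish tier (certificate path and addresses are illustrative):

```
# Sketch: TLS terminates in HAProxy because classic Varnish does not
# speak HTTPS; Varnish then sees plain HTTP.
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/site.pem
    http-request set-header X-Forwarded-Proto https
    default_backend varnish_tier
backend varnish_tier
    balance uri
    server varnish1 10.0.0.11:6081 check
```

The X-Forwarded-Proto header lets the application know the original request was secure even though it arrives over HTTP.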

Sanction answered 21/7, 2014 at 11:49 Comment(0)
