What does Netifi broker improve over using direct RSocket application communication on a Kubernetes cluster?

Let's suppose I have a Kubernetes cluster where I deploy Spring Boot applications that communicate using RSocket. In order to call each other they would use the Kubernetes service name, so we would be relying on that "registry" for discovery and routing.
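For illustration, here is a minimal sketch of that direct approach using Spring's `RSocketRequester`. The `order-service` Service name, port 7000, and the `orders.status` route are all hypothetical; the point is just that the client dials the Kubernetes Service's DNS name directly:

```java
import org.springframework.messaging.rsocket.RSocketRequester;
import reactor.core.publisher.Mono;

public class DirectRSocketCall {

    public static void main(String[] args) {
        // "order-service" is a hypothetical Kubernetes Service name;
        // kube-dns resolves it to the Service's cluster IP.
        RSocketRequester requester = RSocketRequester.builder()
                .tcp("order-service", 7000);

        // Hypothetical route handled by the server application.
        Mono<String> status = requester
                .route("orders.status")
                .data("order-42")
                .retrieveMono(String.class);

        status.subscribe(System.out::println);
    }
}
```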

On the other hand, Netifi offers a broker that can be deployed on Kubernetes. If I understand correctly, this broker is meant to mediate the communication between applications, so those Spring Boot RSocket applications wouldn't communicate via their Kubernetes service names, but through the Netifi broker.

What are the advantages and disadvantages of each of the approaches?

Bazar answered 1/12, 2019 at 13:32 Comment(0)

Full Disclosure: I'm one of the co-founders of Netifi.

When deploying RSocket services with the Netifi broker, the services communicate via their Netifi service names and do not rely on K8s service discovery.

The Netifi broker gives you a number of advantages including service discovery, predictive load-balancing, and dynamic routing of RSocket traffic. The load-balancing provided by the Netifi broker takes downstream latency into account and routes traffic to the lowest-latency nodes in real time. The service discovery is also very fast because it is not DNS-based; service information is gossiped over RSocket between the Netifi broker nodes.
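To make the difference concrete, here is a purely conceptual sketch. The broker host `netifi-broker`, port 8001, and the route name are hypothetical, and in practice you would use Netifi's own client library, which handles the broker handshake and routing metadata for you; the shape, however, is the same: the application dials the broker's address and names its target logically instead of using a pod or Service DNS name.

```java
import org.springframework.messaging.rsocket.RSocketRequester;
import reactor.core.publisher.Mono;

public class BrokeredRSocketCall {

    public static void main(String[] args) {
        // The only network address the application needs is the broker's
        // (host "netifi-broker" and port 8001 are hypothetical).
        RSocketRequester requester = RSocketRequester.builder()
                .tcp("netifi-broker", 8001);

        // The target is expressed as a logical service/route name; the broker
        // decides which instance of the service actually receives the request.
        Mono<String> status = requester
                .route("order-service.orders.status")
                .data("order-42")
                .retrieveMono(String.class);

        status.subscribe(System.out::println);
    }
}
```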

The main advantages of deploying RSocket services in K8s with the Netifi broker are:

  • simpler K8s setup (not having to muck with load-balancers or DNS service discovery)
  • more sophisticated load-balancing algorithms
  • ability to route traffic (RSocket at its core is point-to-point)
  • easy bridging between K8s services and services deployed outside K8s.

Where we see the biggest pain point from our customers when it comes to K8s is actually making their services in K8s interact with their non-K8s services (bare metal, PCF, etc.). A brokered architecture like Netifi's makes bridging those gaps easy, secure, and performant.

Edit (responding to question about resiliency):

The Netifi Broker has been designed from the ground up to be clustered, precisely to avoid a single point of failure. We typically encourage clients to deploy a minimum of 3 brokers in a production environment. The clustering is easy to set up and supports multiple discovery mechanisms: you can use K8s DNS for the brokers to discover each other and form a cluster, and then use Netifi's service discovery for your services.

In terms of instance size, the Netifi Broker is actually quite small. It is completely zero-copy and can run with very few resources. We have run brokers under significant load (500K rps) in less than 100MB of memory, although that is of course an extreme case. Our internal brokers at Netifi run comfortably on dual-core machines with 2 or 4 GB of RAM, and that is the level of resources we recommend our customers allocate for their instances as well.
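On the K8s DNS point above, a headless Service in front of the broker pods returns one A record per pod, which is enough for the brokers to find their peers. A rough sketch of that lookup (the service name is hypothetical, and the real cluster configuration lives in the broker's own settings):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BrokerPeerDiscovery {

    public static void main(String[] args) throws UnknownHostException {
        // Hypothetical headless Service fronting the broker pods;
        // a headless Service resolves to one address per pod.
        String headlessService = "netifi-broker.default.svc.cluster.local";

        for (InetAddress peer : InetAddress.getAllByName(headlessService)) {
            // Each address is a broker pod; the brokers gossip cluster
            // membership between these endpoints.
            System.out.println("broker peer: " + peer.getHostAddress());
        }
    }
}
```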

Utile answered 2/12, 2019 at 1:54 Comment(3)
Hi Greg, thanks for your detailed answer. I see the advantages of this approach now. However, isn't it true that the Netifi broker becomes a single point of failure? Any recommendations to mitigate it? CPU/memory requirements, number of pods... - Bazar
@Bazar - answered your question above in an edit as it was too big for comments - Utile
Excellent answer, thanks Greg, I'll definitely give Netifi a try - Bazar