FlatBuffers vs. Protobuf

My question is: if FlatBuffers is so much faster than Protobuf, why isn't it more widely used?

It started out as an experimental project, and it seems mature enough by now, yet it still isn't widely used. People mostly seem to use FlatBuffers for mobile apps and games. Why is that the case?

Blackbeard answered 1/2, 2019 at 11:33 Comment(1)
It was originally designed for games, but plenty of large companies (including Google and Facebook) use it for more serious things too. Keep in mind that FlatBuffers is barely 5 years old at this point, whereas Protobuf is close to 20. It takes time for companies to adopt new technologies, and when they do, it is usually for new projects, because replacing Protobuf in an existing system is usually far too hard. – Stilbite

There are several reasons for this:

  1. As you mentioned, FlatBuffers is used mostly in apps and games, because that is where it fits best. Since FlatBuffers is faster, its main application is low-latency software, and it is gaining popularity in that sector.

  2. When the existing technology works fine, people and organizations generally don't want to invest time and resources in a newer one. I have personally worked on a proof of concept involving FlatBuffers for a big organization, and there are many hurdles to clear before the final decision to adopt a new technology is made. Many legacy systems are still using XML and JSON, let alone thinking about Protobuf.

Seamus answered 11/10, 2019 at 10:10 Comment(0)

I think there are multiple factors:

  1. People usually don't bother to switch if the old technology works (unless it becomes a bottleneck and must be optimized).
  2. FlatBuffers indeed optimizes very aggressively for speed, but at the cost of bloated data size. Protobuf's trade-off is more balanced between speed and size: the same message is usually much smaller when serialized with Protobuf than with FlatBuffers. If your messages are sent over the network, you need to take this into account. (And don't count on compression to make up the difference: compression will probably cost you 100 times more CPU cycles than you saved by switching from Protobuf to FlatBuffers.)
  3. FlatBuffers' API is harder to use. It is not easy at all to build and serialize a FlatBuffers table, especially one with array members and/or nested tables. In contrast, it took me literally one or two minutes to learn how to set the fields of a Protobuf message and serialize it to a byte array (see the sketch below).
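
To make point 3 concrete, here is a minimal C++ sketch of the two build paths. It is an illustration only: the `Monster` table/message, its fields (`name`, `inventory`, `hp`), the `fb`/`pb` namespaces, and the generated headers `monster_generated.h` / `monster.pb.h` are assumed to come from hypothetical schemas compiled with `flatc --cpp` and `protoc --cpp_out`; the builder and serialization calls themselves (`FlatBufferBuilder`, `CreateString`, `CreateVector`, `Finish`, `SerializeToString`) are the libraries' standard C++ APIs.

```cpp
#include <cstdint>
#include <string>
#include <vector>

#include "flatbuffers/flatbuffers.h"
#include "monster_generated.h"   // hypothetical, from `flatc --cpp monster.fbs` (namespace fb)
#include "monster.pb.h"          // hypothetical, from `protoc --cpp_out=. monster.proto` (package pb)

// FlatBuffers: children (strings, vectors, nested tables) must be serialized
// first, and the parent table is then assembled from the resulting offsets.
std::vector<uint8_t> BuildWithFlatBuffers() {
  flatbuffers::FlatBufferBuilder fbb;
  auto name      = fbb.CreateString("Orc");                           // child first
  auto inventory = fbb.CreateVector(std::vector<int32_t>{1, 2, 3});   // child first
  auto monster   = fb::CreateMonster(fbb, name, inventory, /*hp=*/80); // wire up offsets
  fbb.Finish(monster);
  // The finished buffer is already the wire format; readers access fields
  // in place (zero copy), which is where the speed advantage comes from.
  return std::vector<uint8_t>(fbb.GetBufferPointer(),
                              fbb.GetBufferPointer() + fbb.GetSize());
}

// Protobuf: plain setters in any order, then one serialize call.
std::string BuildWithProtobuf() {
  pb::Monster msg;
  msg.set_name("Orc");
  msg.set_hp(80);
  msg.add_inventory(1);
  msg.add_inventory(2);
  msg.add_inventory(3);
  std::string out;
  msg.SerializeToString(&out);   // the whole message is encoded here
  return out;
}
```

The asymmetry is the point: with FlatBuffers you think in build order and offsets, with Protobuf you just set fields. The flip side (not shown) is that reading a FlatBuffers buffer needs no parse step at all, while a Protobuf message must be decoded before any field can be read.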
Lachman answered 5/10, 2021 at 15:44 Comment(0)

I have only ever used Protobuf at my job. I think the answer to this question is the same as for the adoption curve of any new technology: "Why should we switch, invest in training, and accept the inherent risk of new bugs, when what we are using works fine?" I have also found that only a very small percentage of developers spend a lot of time learning about the latest and greatest tools. Most find something that works and keep using it until they are forced to change, either by vulnerabilities or by a performance requirement.

Apraxia answered 1/2, 2019 at 11:41 Comment(5)
I understand your point, but the performance difference is huge. If your app is composed of several latency-critical microservices, why not use something that is much faster? – Blackbeard
Serialization formats are very "sticky" (or even "viral"), since so much of your code and data (across multiple programs and potentially many servers) ends up depending on them. "Switching" from Protobuf to FlatBuffers in an existing system requires changing all of that at the same time, which is usually too crazy a software engineering exercise to undertake. – Stilbite
Just in case you're not aware, the IDL, wire format, APIs, etc. for these two systems are all incompatible. They mostly have to be incompatible for FlatBuffers to realize its speed gains. – Stilbite
@Aardappel, I understand that the cost of switching to a new framework is high, but the benefit is also huge. Companies like Google or Facebook that provide latency-sensitive services would gain a lot by doing this. Is there a more fundamental reason discouraging them from switching? – Blackbeard
No, that is the reason. The engineering effort for Google's internal services would be tremendous (I personally assisted many teams looking into this). And some services are switching; it just happens slowly, piece by piece, mainly for new services or relatively isolated ones where the gains can be realized more quickly. – Stilbite
