Thrift, Avro, Protocol Buffers - Are they all dead?
Asked Answered

Working on a pet project (Cassandra, Spark, Hadoop, Kafka), I need a data serialization framework. Checking out the three common frameworks - namely Thrift, Avro, and Protocol Buffers - I noticed that most of them seem half-dead, with at most two minor releases a year.

This leaves me with two assumptions:

  • They are as complete as such a framework needs to be and simply rest in maintenance mode as long as no new features are required.
  • There is no longer a reason for such frameworks to exist - though it is not obvious to me why that would be. If so, what alternatives are out there?

If anyone can weigh in on my assumptions, any input is welcome.

Yeasty answered 5/12, 2016 at 6:26 Comment(1)
Do you want your serialization format to change rapidly and often? – Abreu

The advantage of Thrift compared to Protobuf is that Thrift offers a complete RPC and serialization framework. Plus, Thrift supports more than 20 target languages, and that number is still growing. We are about to include .NET Core, and there will be Rust support in the not-so-distant future.
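To illustrate what "complete RPC and serialization framework" means in practice, here is a minimal Java client sketch; `Calculator` and its `add` method are hypothetical stand-ins for whatever the Thrift compiler would generate from your own IDL file:

```java
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftClientSketch {
    public static void main(String[] args) throws Exception {
        // Transport and protocol classes ship with the Thrift library itself.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        TProtocol protocol = new TBinaryProtocol(transport);

        // Calculator.Client is a hypothetical stub produced by `thrift --gen java`.
        Calculator.Client client = new Calculator.Client(protocol);
        int sum = client.add(1, 2); // remote call; arguments and result travel as Thrift-serialized bytes
        System.out.println("1 + 2 = " + sum);

        transport.close();
    }
}
```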

The fact that there have not been many Thrift releases in recent months is certainly something that needs to be addressed, and we are fully aware of it. On the other hand, the overall stability of the codebase is quite good, so one can fork the project on GitHub and cut one's own branch from current master - with the usual quality measures, of course.

The main difference between Avro and Thrift is that Thrift is statically typed, while Avro uses a more dynamic approach. In most cases a static approach fits the needs quite well; in that case Thrift lets you benefit from the better performance of generated code. If that is not the case, Avro might be more suitable.
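To make the contrast concrete, here is a minimal sketch of Avro's dynamic side: the schema is parsed from JSON at runtime and records are built without any generated classes (the `User` schema here is made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroDynamicSketch {
    public static void main(String[] args) throws Exception {
        // The schema is plain JSON parsed at runtime -- no code generation step.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"},"
          + "{\"name\":\"age\",\"type\":\"int\"}]}");

        // A GenericRecord is populated dynamically, like a typed map.
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "Alice");
        user.put("age", 30);

        // The binary encoding carries no per-field tags; the schema supplies the types.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(user, encoder);
        encoder.flush();
        System.out.println("Serialized " + out.size() + " bytes");
    }
}
```

With Thrift, the equivalent would be a generated `User` class with typed fields, resolved at compile time rather than at runtime.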

It is also worth mentioning that besides Thrift, Protobuf, and Avro there are some more solutions on the market, such as Cap'n Proto or BOLT.

Birmingham answered 5/12, 2016 at 15:57 Comment(11)
"Thrift offers a complete RPC and serialization framework." -- So does Protobuf: grpc.ioAuer
But that's an add-on, not core protobuf. With Thrift you get that OOTB.Birmingham
I disagree. gRPC and Protobuf were very much designed and built together. The fact that they are properly modular and separable -- so that you can avoid the bloat of the RPC system if you don't need it -- is a feature, not a bug.Auer
"gRPC and Protobuf were very much designed and built together" -- pbuf ~ 2008 (or older) vs. gRPC ~ 2015. Well, at least it was in the same century.Birmingham
Yes, I'm quite aware that Protobuf was open sourced in 2008 since I was the one who led that project, thanks. Protobuf was first conceived in ~2001 and the RPC system in ~2004. Unfortunately when I open sourced Protobuf I did not have the resources to prepare the RPC implementation for public release, so that didn't actually happen until later. Nevertheless the fact stands that they were developed together.Auer
@Birmingham I suspect the "pet project [using] (cassandra, spark, hadoop, kafka)" and many like them have no need nor want for an RPC spec OOTB.Abreu
@pdxleif: "strongly typed" != "typed". Nobody claimed there aren't any types at all.Birmingham
@Birmingham Could you explain what you mean by the "The main difference between Avro and Thrift is that Thrift is statically typed," paragraph? This is the description language for Thrift: thrift.apache.org/docs/idl Are you saying that is somehow fundamentally different than what is expressed in the Avro schema? Avro uses the type information to achieve more efficient data encoding than Thrift. "Untagged data: Since the schema is present when data is read, considerably less type information need be encoded with data, resulting in smaller serialization size."Minni
You can generate code from the schema: avro.apache.org/docs/1.8.2/… Or you can generate the schema from the types in your code. Are you thinking of the GenericRecord interface? Offering a dynamic view of a static object does not mean the underlying object is static. You can always convert the fields in an object into a Map<String, Object> - you could offer the same for Thrift-generated objects, if you wish.Minni
"Thrift supports about 20+ target languages and that number is still growing" maybe they should fix the XSD generation in the first place as this never worked correctly.Tophole
@FreshMike: Feel free to file a ticket including a test case. (Or send a patch, if you already have one at hand.)Birmingham

Protocol Buffers is a very mature framework, having been first introduced nearly 15 years ago at Google. It's certainly not dead: Nearly every service inside Google uses it. But after so much usage, there probably isn't much that needs to change at this point. In fact, they did a major release (3.0) this year, but the release was as much about removing features as adding them.
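For a sense of what that mature API looks like, here is a minimal sketch; `Person` is a hypothetical class that `protoc` would generate from a two-field proto3 message:

```java
// Assumes a hypothetical message generated from:
//   syntax = "proto3";
//   message Person { string name = 1; int32 id = 2; }
public class ProtobufSketch {
    public static void main(String[] args) throws Exception {
        // Generated messages are immutable and constructed via builders.
        Person person = Person.newBuilder()
            .setName("Alice")
            .setId(42)
            .build();

        // Round-trip through the compact binary wire format.
        byte[] bytes = person.toByteArray();
        Person parsed = Person.parseFrom(bytes);
        System.out.println(parsed.getName() + " -> " + bytes.length + " bytes");
    }
}
```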

Protobuf's associated RPC system, gRPC, is relatively new and has had much more activity recently. (However, it is based on Google's internal RPC system which has seen some 12 years of development.)
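A minimal gRPC client sketch, assuming a hypothetical `Greeter` service whose `GreeterGrpc`, `HelloRequest`, and `HelloReply` classes were generated by the protoc gRPC plugin:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class GrpcClientSketch {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
            .forAddress("localhost", 50051)
            .usePlaintext() // no TLS; for local experiments only
            .build();

        // GreeterGrpc, HelloRequest, and HelloReply are hypothetical generated classes.
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        HelloReply reply = stub.sayHello(
            HelloRequest.newBuilder().setName("world").build());
        System.out.println(reply.getMessage());

        channel.shutdown();
    }
}
```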

I don't know as much about Thrift or Avro, but they have been around a while too.

Auer answered 6/12, 2016 at 7:04 Comment(0)

Concerning Thrift: as far as I am aware, it is alive and kicking. We use it for serialization and internal APIs where I work, and it works fine for that.

Missing pieces such as connection multiplexing and more user-friendly clients have been added through projects like Twitter's Finagle.

Though I would characterize our use of it as semi-intensive only (i.e., we don't look at performance first: it should be easy to use and bug-free before anything else), we have not run into any issues so far.

So, regarding Thrift, I'd say it falls into your first category.[*]

Protocol Buffers is an alternative to Thrift's serialization part, but it does not provide the RPC toolbox Thrift offers.

I'm not aware of any other project that blends RPC and serialization into such a simple-to-use and complete single package.
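As a rough illustration of that toolbox, this is about all the Java it takes to stand up a Thrift server; `Calculator.Processor` and `CalculatorHandler` are hypothetical stand-ins for the compiler-generated processor and your own service implementation:

```java
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TServerTransport;

public class ThriftServerSketch {
    public static void main(String[] args) throws Exception {
        // Calculator.Processor is generated by the Thrift compiler;
        // CalculatorHandler is your implementation of the service interface.
        Calculator.Processor<CalculatorHandler> processor =
            new Calculator.Processor<>(new CalculatorHandler());

        // Server transport and server loop come with the Thrift library itself.
        TServerTransport serverTransport = new TServerSocket(9090);
        TServer server = new TSimpleServer(
            new TServer.Args(serverTransport).processor(processor));
        server.serve(); // blocks, serving requests
    }
}
```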

[*] Anyway, once you start using it and see all the benefits, it's hard to put it into your second category :)

Inhabitant answered 5/12, 2016 at 14:8 Comment(0)

They are all very much in use at plenty of places, so I'd say your first assumption holds. I don't know what your expectation of a release schedule is, but those schedules seem normal to me for libraries of that size and maturity. Heck, Avro 1.8.0 came out at the start of 2016, and most things still use Avro 1.7.7 (e.g. Spark, Hadoop). https://avro.apache.org/releases.html

Minni answered 13/4, 2018 at 19:36 Comment(1)
Does Avro let you serialize/deserialize your existing classes? The "Getting Started" guide covers only two scenarios: 1. model classes are generated from a schema; 2. no code generation, but the only classes serialized/deserialized are GenericRecord. My scenario is not covered: hundreds of existing classes serialized using the JDK, where I want something faster. It would seem that Avro wants to regenerate all my classes from scratch, which won't work because they aren't anemic - i.e., it's a fully OO model. I also read a post where someone had issues with inherited classes - it seems strange for a Java framework to have such issues. – Beelzebub
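For what it's worth, Avro also ships a reflection-based API (`org.apache.avro.reflect`) that derives a schema from existing classes without code generation; whether it copes with a deep OO hierarchy is exactly the caveat raised above. A minimal sketch with a made-up `Employee` class:

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;

public class AvroReflectSketch {
    // A plain existing class; no Avro-generated code involved.
    public static class Employee {
        public String name = "Alice";
        public int id = 7;
    }

    public static void main(String[] args) throws Exception {
        // ReflectData derives the Avro schema from the class via reflection.
        Schema schema = ReflectData.get().getSchema(Employee.class);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new ReflectDatumWriter<Employee>(schema).write(new Employee(), encoder);
        encoder.flush();
        System.out.println("Schema: " + schema + ", " + out.size() + " bytes");
    }
}
```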
