Real-world Hazelcast [closed]

7

30

Does anyone have any real-world experience with the Hazelcast distributed data grid and execution product? How has it worked for you? It has an astonishingly simple API, and functionality that seems almost too good to be true for such a simple-to-use tool. I have done some very simple apps and it seems to work as advertised so far. So here I am looking for the real-world 'reality check'. Thank you.

Kike answered 22/11, 2010 at 22:21 Comment(0)
11

We've been using it in production since version 1.8+, mainly for the distributed locking feature. It works great; we've hit a couple of bugs that needed workarounds, but those were fixed relatively quickly.

With 1.8M locks per day we have found no problems so far.

I recommend starting with version 1.9.4.4.
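For anyone who hasn't seen the locking API being discussed: a minimal sketch of a cluster-wide lock, written against the classic Hazelcast 3.x-era `ILock` API (the lock name is invented for illustration, and this needs a running Hazelcast member, so treat it as a sketch rather than a drop-in snippet):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class DistributedLockSketch {
    public static void main(String[] args) {
        // Joins (or starts) a cluster using the default discovery mechanism.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Every member asking for the same name gets the same cluster-wide lock.
        ILock lock = hz.getLock("orders/42");
        lock.lock();
        try {
            // Critical section: only one member in the cluster runs this at a time.
        } finally {
            lock.unlock();
        }
        hz.shutdown();
    }
}
```

The appeal the answer describes is exactly this: the distributed lock has the same shape as `java.util.concurrent.locks.Lock`, just scoped to the whole cluster.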

Gyro answered 19/1, 2012 at 20:48 Comment(0)
9

There are still some issues with its development; see the issue tracker:
http://code.google.com/p/hazelcast/issues/list
Generally, you can choose either to let it use its own multicast algorithm or to specify your own IPs. We've tried it in a LAN environment and it works pretty well. Performance-wise it's not bad, but the monitoring tool didn't work very well, as it failed to update most of the time. If you can live with the current issues then by all means go for it. I would use it with caution, but it's a great working tool IMHO.

Update: We've been using Hazelcast for a few months now and it's working very well. The settings are relatively easy to set up and with the new updates, are comprehensive enough to customize even small things like the number of threads allowed in read/write operations.

Sibby answered 21/12, 2010 at 22:35 Comment(0)
7

We are using Hazelcast (1.9.4.6 now) in production, integrated with a complicated transactional service. It was added to alleviate immediate database throughput issues. We have discovered that we frequently have to stop it, bringing down all transaction services for at least an hour. We are running clients in superclient mode because it is the only option that even remotely meets our performance requirements (about 4 times faster than native clients). Unfortunately, stopping a superclient node causes split-brain issues and makes the grid lose records, forcing a complete shutdown of services. We have been trying to make this product work for us for almost a full year now, and even paid to have two Hazelcast reps flown in to help. They were unable to produce a solution, but were able to let us know that we were probably doing it wrong. In their opinion it should work better, but it was pretty much a wasted trip.

At this point we are on the hook for over six figures per year in licensing fees, and we are currently using about 5 times the resources to keep the grid alive and meet our performance needs compared with what a clustered and optimized database stack would require. This was absolutely the wrong decision for us.

This product is killing us. Use it with caution, sparingly, and only for simple services.

Labuan answered 10/2, 2012 at 17:13 Comment(3)
Did you resolve this? Did you isolate the problem, or move to another technology? What are the licensing fees you mentioned? The core of Hazelcast is free, I thought. – Isochronous
The old "what did you see" joke. – Saltatory
@james, given that this answer was given a long time ago, what is the current situation with Hazelcast? Were you able to solve your issues with the newer releases, or did you go with some other solution? Would be interesting to know. – Witmer
2

If my own company and projects count as real world, here's my experience. I wanted to get as close as possible to eliminating external (disk) storage in favor of limitless and persistent "RAM". For starters, that eliminates CRUD plumbing, which sometimes makes up as much as 90% of the so-called "middle tier". There are other benefits. Since RAM is your "database", you don't need any complex caches or HTTP session replication (which in turn eliminates the ugly sticky-session technique).

I believe RAM is the future, and Hazelcast has everything it needs to be an in-memory database: queries, transactions, etc. So I wrote a mini-framework abstracting it, which loads data from persistent storage (I can plug in anything that can store BLOBs; the fastest turned out to be MySQL). It would take too long to explain why I didn't like Hazelcast's built-in persistence support: it's rather generic and rudimentary. They should remove it. It is not rocket science to implement your own distributed and optimized write-behind and write-through; it took me a week.

Everything was fine until I started performance testing. Queries are slow, even after all the optimizations I made: indexes, Portable serialization, explicit comparators, etc. A simple "greater than" query on an indexed field takes 30 seconds on a set of 60K records of 1K each (map entries). I believe the Hazelcast team did everything they could. As much as I hate to say it, Java collections are still slow compared to the super-optimized C++ code normal databases use. There are some open-source Java projects that address that. However, at this time query performance is unacceptable. It should be instant on a single local instance. It is an in-memory technology, after all.
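For reference, the kind of query being measured looks roughly like this (a sketch against the Hazelcast 3.x predicate API; the map name, `Order` type, field name and threshold are all invented for illustration):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicate;
import com.hazelcast.query.Predicates;
import java.io.Serializable;
import java.util.Collection;

public class GreaterThanQuerySketch {

    // A minimal value type; real entries would typically use Portable
    // serialization, as the answer mentions.
    public static class Order implements Serializable {
        private final int amount;
        public Order(int amount) { this.amount = amount; }
        public int getAmount() { return amount; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Order> orders = hz.getMap("orders");

        // Ordered index on the queried field, so range predicates can use it.
        orders.addIndex("amount", true);

        // The "greater than" query on an indexed field discussed above.
        Predicate p = Predicates.greaterThan("amount", 100);
        Collection<Order> hits = orders.values(p);

        hz.shutdown();
    }
}
```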

I switched to MongoDB for the database, but left Hazelcast in place for shared runtime data, namely sessions. Once they improve query performance I'll switch back.

Scrimp answered 15/6, 2015 at 5:21 Comment(2)
I am evaluating Ignite (apacheignite.readme.io/docs/overview) now. It has the same features as Hazelcast, at least the ones I need. I'll let you know in a week. – Scrimp
"A simple 'greater than' query on an indexed field takes 30 seconds on the set of 60K of 1K records (map entries)." This data is so unrealistically wrong that it should raise a flag during any decent performance analysis. It looks so horrible that I would ask questions like: "Why do so many people use it then?" / "Why are so many performance tests on the net discussing millisecond latencies and 100k-inserts-per-second thresholds?" In the end I would start questioning the validity of my own test. – Extraterritorial
0

If you have alternatives to Hazelcast, maybe look at those first. We have it running in production and it is still quite buggy; just check out the open issues. However, the integration with Spring, Hibernate, etc. is quite nice, and the setup is really easy :)

Meretricious answered 30/10, 2011 at 23:40 Comment(0)
0

We use Hazelcast in our e-commerce application to make sure that our inventory is consistent.

We make extensive use of distributed locking to make sure SKU items of inventory are modified atomically, because there are hundreds of nodes in our web application cluster that operate concurrently on these items.

Also, we use distributed maps for caching purposes, shared across all the nodes. Since scaling nodes in Hazelcast is so simple, and it utilises all the CPU cores, it gives an added advantage over Redis or any other caching framework.
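The pattern described can be sketched as follows, using the per-key locking that `IMap` offers in the 3.x-era API (the map name, SKU key and quantity logic are assumptions made for illustration, not the answerer's actual code):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class InventorySketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Distributed map shared across all nodes in the cluster.
        IMap<String, Integer> stock = hz.getMap("inventory");
        stock.putIfAbsent("ABC-123", 10);

        // Cluster-wide lock on this one key: only one node at a time
        // may run the read-modify-write below for this SKU.
        stock.lock("ABC-123");
        try {
            Integer qty = stock.get("ABC-123");
            if (qty != null && qty > 0) {
                stock.put("ABC-123", qty - 1); // atomically reserve one unit
            }
        } finally {
            stock.unlock("ABC-123");
        }
        hz.shutdown();
    }
}
```

Key-level locking here avoids serializing unrelated SKUs behind one global lock, which matters when hundreds of nodes operate on the inventory concurrently.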

Tantrum answered 12/7, 2017 at 11:46 Comment(0)
0

We have been using Hazelcast for the last 3 years in our e-commerce application to make sure availability (supply and demand) is consistent, atomic, available and scalable. We use IMap (a distributed map) to cache the data, and EntryProcessor for read and write operations, to do fast in-memory operations on an IMap without having to worry about locks.
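A minimal sketch of the IMap + EntryProcessor combination described, written against the Hazelcast 3.x `AbstractEntryProcessor` base class (the map name, key and availability values are invented for illustration):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;
import java.util.Map;

public class AvailabilitySketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Long> availability = hz.getMap("availability");
        availability.put("sku-1", 10L);

        // The processor runs on the member that owns the key and mutates the
        // entry atomically, which is why no explicit locking is needed.
        availability.executeOnKey("sku-1", new AbstractEntryProcessor<String, Long>() {
            @Override
            public Object process(Map.Entry<String, Long> entry) {
                entry.setValue(entry.getValue() - 1); // reserve one unit
                return null;
            }
        });
        hz.shutdown();
    }
}
```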

Levitus answered 24/8, 2017 at 17:35 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.