We are trying to use Akka Streams with Alpakka Kafka to consume a stream of events in a service. To handle event-processing errors we use Kafka autocommit and more than one queue. For example, if we have the topic `user_created`, which we want to consume from a products service, we also create `user_created_for_products_failed` and `user_created_for_products_dead_letter`. These two extra topics are coupled to a specific Kafka consumer group. If an event fails to be processed, it goes to the failed queue, where we try to consume it again after five minutes; if it fails again, it goes to the dead-letter queue.
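A minimal sketch of that topology (assuming Akka 2.6, where the implicit `ActorSystem` provides the materializer; `processEvent`, the broker address, and the group id are placeholders, and the separate consumer that retries from the failed topic is omitted):

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Consumer, Producer}
import akka.kafka.{ConsumerSettings, ProducerSettings, Subscriptions}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}

import scala.concurrent.Future

object UserCreatedConsumer extends App {
  implicit val system: ActorSystem = ActorSystem("products-service")
  import system.dispatcher

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")      // placeholder broker address
      .withGroupId("products")                     // placeholder group id
      .withProperty("enable.auto.commit", "true")  // autocommit, as described

  val producerSettings =
    ProducerSettings(system, new StringSerializer, new StringSerializer)
      .withBootstrapServers("localhost:9092")

  // Placeholder for the real business logic.
  def processEvent(value: String): Future[Unit] = Future.successful(())

  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("user_created"))
    .mapAsync(parallelism = 1) { record =>
      // On failure, re-publish the event to the retry topic
      // that belongs to this consumer group.
      processEvent(record.value())
        .map(_ => Option.empty[ProducerRecord[String, String]])
        .recover { case _ =>
          Some(new ProducerRecord("user_created_for_products_failed",
            record.key(), record.value()))
        }
    }
    .collect { case Some(failed) => failed }
    .runWith(Producer.plainSink(producerSettings))
}
```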
On deployment we want to ensure that we don't lose events, so we try to stop the stream before stopping the application. As I said, we are using autocommit, so offsets for events that are still "in flight" may already be committed even though those events have not been processed yet. Once the stream and the application are stopped, we can deploy the new code and start the application again.
After reading the documentation, we found the `KillSwitch` feature. The problem we see with it is that its `shutdown` method returns `Unit` instead of the `Future[Unit]` we expected. We are not sure that we won't lose events using it, because in our tests it seems to complete too fast to be draining the stream properly.
As a workaround, we create an `ActorSystem` for each stream and use its `terminate` method (which returns a `Future[Terminated]`). The problems with this solution are that we don't think creating an `ActorSystem` per stream will scale well, and that `terminate` takes a long time to resolve (in our tests it takes up to a minute to shut down).
Have you faced a problem like this? Is there a faster way (compared to `ActorSystem.terminate`) to stop a stream and ensure that all the events the `Source` has emitted have been processed?