Well, I have experience with both.
For Kafka Streams, the Achilles' heel in my opinion is the datastore it uses, RocksDB, which is necessary for KTables and internal state stores. RocksDB is great at loading single values, but if you have to iterate over entries, performance degrades significantly once the dataset grows past around 100 000 records. You can swap the datastore for something else, but that is really not well documented.
It also suffers from the usual disadvantages of key/value databases: you can only query by the primary key, there is no wildcard matching or anything like that, and it gives you no means to implement the CQRS pattern.
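To make that limitation concrete, here is a minimal sketch of Kafka Streams interactive queries, assuming a running topology that materializes a KTable into a store named "order-counts" (the store name and types are just placeholders). The only access patterns you get are a point lookup by key, a key range, or a full scan; the full scan is exactly the iteration that slows down on larger stores.

```scala
import org.apache.kafka.streams.{KafkaStreams, StoreQueryParameters}
import org.apache.kafka.streams.state.{KeyValueIterator, QueryableStoreTypes, ReadOnlyKeyValueStore}

// Assumes `streams` is a running KafkaStreams instance whose topology
// materializes a KTable into a RocksDB-backed store named "order-counts".
def queryOrderCounts(streams: KafkaStreams): Unit = {
  val store: ReadOnlyKeyValueStore[String, java.lang.Long] =
    streams.store(
      StoreQueryParameters.fromNameAndType(
        "order-counts",
        QueryableStoreTypes.keyValueStore[String, java.lang.Long]()))

  // Point lookup by primary key: RocksDB's sweet spot.
  val countForOneCustomer = store.get("customer-42")

  // Full scan: the only way to "search" the store -- no secondary indexes,
  // no wildcard matching. This is the access pattern that degrades as the
  // store grows.
  val iterator: KeyValueIterator[String, java.lang.Long] = store.all()
  try {
    while (iterator.hasNext) {
      val kv = iterator.next()
      println(s"${kv.key} -> ${kv.value}")
    }
  } finally iterator.close() // iterators hold RocksDB resources; always close them
}
```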
Akka, on the other hand, is not that newbie-friendly, especially if you don't have a grasp of stream processing concepts. But it is not coupled to a single persistence option, and it has the components to implement CQRS. The Akka developers have also put a lot of thought into getting good performance out of Kafka for stream processing.
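Just to illustrate what I mean by the CQRS point: the write side can be expressed with Akka Persistence Typed, roughly like the sketch below. The `OrderCounter` entity and its commands/events are made up for the example, and the journal backing it (Cassandra, JDBC, ...) is picked in configuration, not in code, so you are not tied to one datastore.

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

// Write side of a CQRS setup: commands are validated and persisted as events,
// and the state is rebuilt from those events. The journal plugin is chosen in
// application.conf, so the same code runs against Cassandra, JDBC, etc.
object OrderCounter {
  sealed trait Command
  final case class AddOrder(customerId: String) extends Command

  sealed trait Event
  final case class OrderAdded(customerId: String) extends Event

  final case class State(countByCustomer: Map[String, Long])

  def apply(entityId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State(Map.empty),
      commandHandler = (_, cmd) => cmd match {
        case AddOrder(customerId) => Effect.persist(OrderAdded(customerId))
      },
      eventHandler = (state, evt) => evt match {
        case OrderAdded(customerId) =>
          state.copy(countByCustomer =
            state.countByCustomer.updatedWith(customerId)(c => Some(c.getOrElse(0L) + 1L)))
      })
}
```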
Another really important point, even if some might say it is not relevant: Kafka Streams has no backpressure mechanism, but Alpakka Kafka does, which can be critical in some production scenarios. You can convince yourself with this Netflix blog post.
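Here is a minimal sketch of what that buys you (broker address, topic and group id are placeholders, and `slowCall` stands in for any downstream that can't keep up): the Alpakka Kafka source only polls the broker when downstream signals demand, so a slow stage automatically slows consumption instead of piling up records in memory.

```scala
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future

object BackpressureExample extends App {
  implicit val system: ActorSystem = ActorSystem("backpressure-demo")
  import system.dispatcher

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("demo-group")

  // Stand-in for a slow database write, HTTP call, etc.
  def slowCall(value: String): Future[Unit] = Future { Thread.sleep(100) }

  // The source only polls Kafka when the slow stage signals demand, so
  // records are never buffered without bound -- this is the backpressure
  // that Kafka Streams does not give you.
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("events"))
    .mapAsync(parallelism = 4)(record => slowCall(record.value()))
    .runWith(Sink.ignore)
}
```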
In short, if you want to start quickly with stream processing, start with Kafka Streams, but be ready to hit a wall and switch to Akka Streams with Alpakka Kafka.
If you need examples, I have two blog posts about these topics; you can check them out: blog1 blog2