This was the first time I could attend the Apache Kafka meetup in London; previous meetings caught me in Barcelona or mid-flight. First realisation: it is a surprisingly crowded meetup! Clearly everyone is using Kafka, even if that's not obvious from the outside. Oh, and the food was pretty good, too. Thanks to the sponsor (whose name I have sadly forgotten).
I really enjoyed this one. The speaker was Jay Kreps, CEO and co-founder of Confluent (and author of I Heart Logs), so basically a Kafka top committer himself. He delivered the same presentation he gave at Reactive Summit 2016, but since I wasn't there I could enjoy it this time. It was a hand-drawn presentation (I suspect using Paper for iOS), hitting all the supposedly good points of technical presentations (short slides, don't read them, etc.). It was also quite deep, explaining what Kafka Streams is, how it works, and why we should use it for our real-time data pipelines.
Jay, presenting. This architecture needs some Kafka
The key takeaway is that if you are already using Kafka as your real-time data hose-and-bucket, you can reuse that cluster (and all its goodies: partitions, groups, offsets) to process the data as it arrives, without needing an additional framework (Flink, Spark, Storm… you know the tune). Also, if you'd rather write Perl than Java and that is what's keeping you away from the Kafka Streams API, behold! Even though there is no official Scala API, wrapping the Java one is extremely easy.
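To make that concrete, here is a minimal sketch of the classic word-count topology in the Kafka Streams Java API. It is just that: a sketch, assuming the `kafka-streams` dependency on the classpath, a broker at `localhost:9092`, and input/output topics named `text-input` and `word-counts` (all of those names are my placeholders, not anything from the talk).

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {
    public static void main(String[] args) {
        // Standard Streams configuration: app id doubles as the consumer group,
        // so the existing cluster handles partitioning and offsets for us.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("text-input");

        lines.flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
             .groupBy((key, word) -> word)   // repartition by word
             .count()                        // stateful count, backed by Kafka itself
             .toStream()
             .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Note there is no cluster to deploy and no framework master to run: this is a plain `main()` in a plain JVM process, and scaling out is just starting more copies of it. That is the whole pitch versus Flink/Spark/Storm. And since all of this is ordinary Java, calling it from Scala (lambdas and all) works with little or no wrapping.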
You can view (and download) the slides from here, and watch the video of Jay's Reactive Summit talk here.