What is Kafka?
Apache Kafka is a distributed commit log for fast, fault-tolerant communication between producers and consumers using message-based topics. Kafka provides the messaging backbone for building a new generation of distributed applications capable of handling billions of events and millions of transactions.
Why Apache Kafka on Heroku?
World Class Operations
Now you can consume Kafka as a service with Heroku’s world-class orchestration and thoughtfully tuned configurations that keep Kafka fast and robust. We distribute Kafka resources across network zones for fault-tolerance, and ensure your Kafka cluster is always available and addressable.
Elegant Developer Experience
Easy-to-use CLI and web tooling make Kafka simple to provision, configure, and operate. Add topics, create partitions, manage log compaction, and monitor key metrics from the comfort of the CLI or Heroku Dashboard.
Seamless Integration with Apps
Run producers and consumers as Heroku apps for simple vertical and horizontal scalability. Config vars make it easy to securely connect to your Kafka cluster, so you can focus on your core logic.
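As a sketch of what connecting via config vars looks like, the snippet below reads a broker URL from the environment and splits it into individual broker addresses. The `KAFKA_URL` variable name and the `kafka+ssl://` scheme reflect how the Heroku add-on typically exposes connection details; the default value here is purely illustrative.

```python
import os

def broker_list(kafka_url: str) -> list[str]:
    """Split a comma-separated broker URL into host:port addresses."""
    return [u.replace("kafka+ssl://", "").replace("kafka://", "")
            for u in kafka_url.split(",")]

# On Heroku, KAFKA_URL is set by the add-on; the fallback here is a placeholder.
kafka_url = os.environ.get(
    "KAFKA_URL", "kafka+ssl://host1:9096,kafka+ssl://host2:9096"
)
brokers = broker_list(kafka_url)
```

Because the cluster address lives in a config var rather than in code, the same app can point at different Kafka clusters across staging and production without a code change.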
How it Works
Kafka provides a powerful set of primitives for connecting your distributed application: messages, topics, partitions, producers, consumers, and log compaction.
Kafka is a message-passing system; messages are events and can carry keys.
A Kafka cluster is made up of brokers that run Kafka processes.
Topics are streams of messages of a particular category.
Partitions are append-only, ordered logs of a topic’s messages. Each message has an offset denoting its position in the partition. Kafka replicates partitions across the cluster for fault tolerance and message durability.
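The append-only, offset-addressed structure can be modeled in a few lines. This is a toy illustration of the idea, not a real Kafka partition: appending a message assigns it the next offset, and any past message can be re-read by offset.

```python
class Partition:
    """Toy model of a partition: an append-only log where offset == index."""

    def __init__(self):
        self._log = []

    def append(self, message) -> int:
        self._log.append(message)
        return len(self._log) - 1  # offset of the newly written message

    def read(self, offset):
        return self._log[offset]

p = Partition()
assert p.append("first") == 0
assert p.append("second") == 1
assert p.read(0) == "first"  # old messages stay addressable by offset
```

Because messages are never updated in place, replicating a partition reduces to copying the log in order, which is what makes replication and recovery tractable.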
Producers are client processes that send messages to a broker on a topic and partition. Producers can use a partitioning function on keys to control message distribution.
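The effect of a key-based partitioning function can be sketched as follows. Kafka’s Java client hashes keys with murmur2; here `crc32` stands in as an illustrative hash to show the core property, which is that the same key always maps to the same partition.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Illustrative stand-in for Kafka's key hashing (the real default is murmur2).
    return zlib.crc32(key) % num_partitions

# Messages with the same key land in the same partition, preserving their order.
assert partition_for(b"user-42", 8) == partition_for(b"user-42", 8)
assert 0 <= partition_for(b"user-42", 8) < 8
```

This is why keyed messages for one entity (an order, a user, a device) keep their relative ordering: they all flow through a single partition.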
Consumers read messages from topics’ partitions on brokers, tracking the last offset read so they can coordinate work and recover from failures. Consumers can be deployed in groups for scalability.
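Offset tracking is what makes consumer recovery simple. The toy model below (not real client API) treats a partition as a list and the committed offset as an index: a consumer that restarts after a crash resumes from the last committed offset rather than re-reading the whole log.

```python
# Toy model: the partition is a list, the committed offset is an index into it.
partition_log = ["event-0", "event-1", "event-2", "event-3"]

def consume(log, start_offset):
    """Read everything from the committed offset onward; return it plus the new offset."""
    messages = log[start_offset:]
    return messages, start_offset + len(messages)

committed_offset = 0
batch, committed_offset = consume(partition_log, committed_offset)
# If the consumer crashes here, a replacement resumes at committed_offset
# and sees only messages it has not yet committed.
```

In a consumer group, each partition is assigned to exactly one group member, so the same offset bookkeeping also lets the group split partitions among processes for horizontal scale.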
Log compaction keeps the most recent value for every key so clients can restore state.
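Compaction can be illustrated with a simple fold over keyed messages: later writes for a key supersede earlier ones, and what remains is the latest state per key. This is a conceptual sketch, not Kafka’s actual compaction implementation.

```python
def compact(log):
    """Keep only the most recent value for every key."""
    latest = {}
    for key, value in log:
        latest[key] = value  # a later write for the same key overwrites the earlier one
    return latest

log = [("user-1", "alice"), ("user-2", "bob"), ("user-1", "alicia")]
state = compact(log)
# state == {"user-1": "alicia", "user-2": "bob"}
```

Replaying a compacted topic from the beginning therefore rebuilds a client’s full current state without replaying every historical update.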
“We don’t have a DevOps team, so using Apache Kafka on Heroku means we don’t have to worry about setting up infrastructure, or configuring and tuning our Kafka instance to ensure performance. This helps us focus on making our products better.”
Jonathan Geggatt, Sr. Platform Engineer, HotelTonight
Experience Apache Kafka on Heroku
Build data intensive apps
See it in action
See what Kafka on Heroku can do. Check out our recent demo.
Tutorials and Other Resources
- Kafka Stream Processing Demo
- Heroku Metrics: There and Back Again
- Powering the Heroku Platform API: A Distributed Systems Approach Using Streams and Apache Kafka
- Apache Kafka 0.10: Evaluating Performance in Distributed Systems
- Apache Kafka, Data Pipelines, and Functional Reactive Programming with Node.js
Apache Kafka can be used to stream billions of events per day — but do you know where to use it in your app architecture? Find out at our technical session. See a live demo and hear answers to questions from Heroku product experts.
Listen to our live podcast on Software Engineering Daily from October 25th, 2016
Apache Kafka is a durable, distributed message broker that’s a great choice for managing large volumes of inbound events, building data pipelines, and acting as the communication bus for microservices. In this live podcast we’ll talk about building the Apache Kafka on Heroku service, challenges we faced, and why we focused on Kafka in the first place!