Synchronising transactions between database and Kafka producer

I’d suggest using a slightly altered variant of approach 2.

Write to your database only, but in addition to the actual table writes, also write “events” into a dedicated outbox table within that same database; these event records contain the aggregations you need. In the simplest case, you’d just insert another entity, e.g. mapped by JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some means of a transaction listener or framework component.
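As a minimal sketch, such an outbox record could look like the following (all names here are illustrative; with JPA you would additionally annotate the class with `@Entity`/`@Id` and map it to the outbox table, and write it in the same transaction as your business entities):

```java
import java.time.Instant;
import java.util.UUID;

// Hypothetical outbox event record; persisted in the same transaction
// as the business data, then streamed to Kafka via Debezium.
class OutboxEvent {
    UUID id = UUID.randomUUID(); // unique event id, useful for de-duplication
    String aggregateType;        // e.g. "Order"; can drive Kafka topic routing
    String aggregateId;          // id of the affected aggregate, usable as Kafka key
    String type;                 // e.g. "OrderCreated"
    String payload;              // JSON representation of the aggregate state
    Instant timestamp = Instant.now();

    OutboxEvent(String aggregateType, String aggregateId, String type, String payload) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.type = type;
        this.payload = payload;
    }
}
```

Because the outbox insert and the business writes share one local transaction, either both are committed or neither is, which is what removes the need for distributed transactions.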

Then use Debezium to capture the changes from just that table and stream them into Kafka. That way you get both: an eventually consistent view in Kafka (the events in Kafka may trail behind, or you might see a few events a second time after a restart, but eventually they’ll reflect the database state) without the need for distributed transactions, and the business-level event semantics you’re after.
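For illustration, a connector registration capturing only the outbox table might look roughly like this (hostnames, credentials, and table names are placeholders; exact property names vary between Debezium versions, e.g. older releases used `table.whitelist` instead of `table.include.list`):

```json
{
  "name": "outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "app",
    "database.password": "secret",
    "database.dbname": "orders",
    "table.include.list": "public.outbox"
  }
}
```

Since only the outbox table is captured, consumers never see your internal table layout, only the event payloads you chose to publish.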

(Disclaimer: I’m the lead of Debezium; funnily enough I’m just in the process of writing a blog post discussing this approach in more detail)

Here are the posts:

https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/

https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
