Apache Kafka: A Future-Fit Foundation

In today's rapidly changing digital environment, it is crucial to manage the flow of data efficiently. At Visma Connect, we recognise the critical need for systems that handle information streams seamlessly and adapt to future challenges. In a previous blog post, we wrote about our preference for a process-based choreography approach to managing information streams, where a solid backbone is essential for success. We thoroughly evaluated the many tools available and found that Apache Kafka stands out as the ideal foundation for our solutions.

This post explores how Kafka’s maturity, flexibility, scalability, and support infrastructure align perfectly with our mission to build secure, reliable, and future-proof systems. Whether deployed in public clouds or on-premises, Kafka's robust event processing capabilities make it a future-fit foundation for modern information management needs.


Maturity

Apache Kafka was first developed at LinkedIn to help process a constant stream of activity data, such as new posts, from millions of users. It is a mature, open-source platform that has proven its worth through active use in thousands, if not hundreds of thousands, of systems, and it has become the de facto standard for event streaming.

Of course, there are many other systems with their own sets of advanced features, but are they mature enough? If we adopt a technology, we want to be certain that it will still be around in 5-10 years. We don't want to jump on a startup bandwagon without knowing whether the technology will be developed further, both feature-wise and security-wise.

Consultancy

If we run into issues with a tool that we cannot handle ourselves, it's important to be able to seek advice elsewhere. Confluent, a company founded by Kafka's original creators, has built various additional services on top of the open-source tool and offers commercial support and consultancy. If we were instead to choose a tool developed by a startup less than two years ago, we would have no guarantee of support or consultancy still being available in ten years.

SaaS/On-premise

Visma Connect focuses on public cloud propositions. We think this is the way to go for systems in this era. Some customers, however, can have very specific requirements. For example, a country’s Department of Defence may have policies that require any systems they use to be deployed on their private cloud. Kafka allows our solutions to run on both public and private clouds.

Scalability

Elastic scalability is one of cloud computing's merits. In businesses where seasonality in data volume is common, choosing a system with elastic scalability can make all the difference in developing cost-effective software solutions. 

Serverless computing is another feature of the public cloud. The name is a bit misleading - any software you run naturally still runs on a server. Rather, it means you don't reserve and pay for exclusive, 24/7 access to a server; capacity is allocated on demand. If data volumes are low or seasonal, serverless can be more sustainable and less expensive. But it's important to look at the specifics of each use case - if the non-functional requirements call for near real-time processing, then serverless technology probably isn't the way to go, since on-demand startup adds latency.

AWS offers Kafka as a managed service (Amazon MSK), including a serverless option. This is an ideal setup for smaller customers whose data volumes start low but may grow rapidly, as it provides an opportunity to scale up as and when required.
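
As an illustration, a Kafka client pointed at MSK Serverless differs from any other Kafka client mainly in its connection settings. The sketch below is a minimal example, not a production setup: the bootstrap endpoint is a placeholder, and it assumes the aws-msk-iam-auth library is on the classpath, since MSK Serverless requires IAM authentication.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class MskServerlessConfig {
    public static Properties clientProperties() {
        Properties props = new Properties();
        // Placeholder bootstrap endpoint; MSK Serverless listens on port 9098.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "boot-xxxxxxxx.c1.kafka-serverless.eu-west-1.amazonaws.com:9098");
        // MSK Serverless requires IAM authentication, provided by the
        // aws-msk-iam-auth library (a separate dependency).
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "AWS_MSK_IAM");
        props.put("sasl.jaas.config",
                  "software.amazon.msk.auth.iam.IAMLoginModule required;");
        props.put("sasl.client.callback.handler.class",
                  "software.amazon.msk.auth.iam.IAMClientCallbackHandler");
        return props;
    }
}
```

Because everything else about the client stays the same, moving between a self-managed cluster, managed MSK, and MSK Serverless is largely a matter of swapping these connection properties.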

Real-time processing

A system designed to process reports filed just once a year will have very different non-functional requirements compared to a system that facilitates chat functionality between parties. Low latency, for example, would be a much higher priority for the chat system. The basic functionality, on the other hand, could be very similar - it's all about getting data from A to B, with the option to add steps like verification and validation.
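
As a minimal sketch of that A-to-B pattern with a validation step in between, the Kafka Streams snippet below reads from one topic, drops messages that fail a trivial check, and writes the rest to another topic. The topic names, application id, and broker address are all illustrative placeholders; a real validation rule would of course be more involved.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ValidationPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "validation-pipeline"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from A ("incoming"), keep only messages that pass a simple
        // validation, and write the result to B ("validated").
        KStream<String, String> incoming = builder.stream("incoming");
        incoming.filter((key, value) -> value != null && !value.isBlank())
                .to("validated");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```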

We want to implement both systems that offer a near real-time user experience and systems where latency is not a high-priority non-functional requirement. Kafka allows us to do both with the same technology.
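
To illustrate the "same technology" point: the difference between a latency-sensitive and a throughput-oriented setup can come down to a few producer settings. The values below are illustrative, not recommendations, and would be combined with the usual connection properties.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuning {
    // Tuned for low latency: send each record as soon as possible.
    static Properties lowLatency() {
        Properties props = new Properties();
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        return props;
    }

    // Tuned for throughput: wait briefly to fill larger, compressed batches,
    // trading a little latency for much better efficiency at high volume.
    static Properties highThroughput() {
        Properties props = new Properties();
        props.put(ProducerConfig.LINGER_MS_CONFIG, 50);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 131072);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return props;
    }
}
```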

Observability

Observability tools are the system's eyes and ears, allowing its throughput to be monitored at all times. If producers produce more events than consumers can handle, the system will eventually overflow. Site reliability engineers should be able to monitor what's happening in the system and proactively take measures to safeguard its proper workings. This is made possible by linking a system's observability tooling to dashboards that show the data. Kafka makes this easier through its model of topics, producers, and consumers: consumer lag per topic, for example, is a direct measure of whether consumers are keeping up.
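
As a small, hypothetical example of what that monitoring can look like programmatically, the sketch below uses Kafka's AdminClient to compare a consumer group's committed offsets with the end offsets of its partitions; the difference is the consumer lag. The broker address and group id are placeholders, and in practice this signal would feed a dashboard or alert rather than standard output.

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the consumer group ("example-group" is a placeholder).
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("example-group")
                     .partitionsToOffsetAndMetadata().get();

            // Latest (end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> request = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                admin.listOffsets(request).all().get();

            // Lag = end offset minus committed offset, per partition.
            committed.forEach((tp, offset) -> {
                long lag = ends.get(tp).offset() - offset.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```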

Ticking all the boxes

While many tools excel in specific areas, only one met all our critical criteria for a future-ready, reliable, and scalable system: Apache Kafka. It not only ticks all those boxes but also provides a comprehensive solution that ensures security, observability, and sustainability. Whether handling large-scale real-time data or supporting diverse deployment needs, Kafka's proven maturity and flexibility make it the perfect partner for Visma Connect. We are confident that Kafka will continue to help us implement secure, reliable, sustainable, cost-effective, and maintainable systems today while evolving alongside us in the years to come.