
Sunday, August 19, 2018

20 Best Practices for Working With Apache Kafka at Scale

Apache Kafka is a widely popular distributed streaming platform that thousands of companies like New Relic, Uber, and Square use to build scalable, high-throughput, and reliable real-time streaming systems. For example, the production Kafka cluster at New Relic processes more than 15 million messages per second for an aggregate data rate approaching 1 Tbps. Kafka has gained popularity with application developers and data management experts because it greatly simplifies working with data streams. But Kafka can get complex at scale. A high-throughput publish-subscribe (pub/sub) pattern with automated data retention limits doesn't do you much good if your consumers are unable to keep up with your data stream and messages disappear before they're ever seen. Likewise, you won't get much sleep if the systems hosting the data stream can't scale to meet demand or are otherwise unreliable.
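As a concrete illustration of that last point, the sketch below uses Kafka's Java AdminClient to measure how far a consumer group has fallen behind, by comparing the group's committed offsets against each partition's log-end offset. The broker address (localhost:9092), the group id (example-group), and the class name are placeholder assumptions for the sketch, not anything this article prescribes.

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address for this sketch.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Offsets the group has committed, per partition
            // ("example-group" is a hypothetical group id).
            Map<TopicPartition, OffsetAndMetadata> committed = admin
                    .listConsumerGroupOffsets("example-group")
                    .partitionsToOffsetAndMetadata()
                    .get();

            // Current log-end offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latest = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                    admin.listOffsets(latest).all().get();

            // Lag = log-end offset minus committed offset. If this grows
            // steadily, the group risks falling behind the retention window
            // and losing messages before they are ever read.
            for (TopicPartition tp : committed.keySet()) {
                long lag = ends.get(tp).offset() - committed.get(tp).offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            }
        }
    }
}
```

The bundled kafka-consumer-groups.sh tool reports the same per-partition lag with its --describe flag; pairing either check with the topic's retention.ms setting tells you how close consumers are to losing unread data.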




Tags: big data, scalability, Kafka, Kafka architecture
