Kafka consumer synchronization behavior - asynchronous

I am currently exploring Kafka as a beginner for a simple problem.
There will be one producer pushing messages to one topic, and n
consumers (Spark applications) that process the data from
Kafka and insert it into a database (each consumer inserts into a
different table).
Is there a possibility that consumers will go out of sync (for example, some of the consumers go down for quite some time), so that
one or more consumers will not process a message and insert it into
their table?
Assume the code is always correct and no exception will arise when
processing the data. It is important that every message is processed
exactly once.
My question is: does Kafka handle this part for us, or do we have to write some other code to make sure this does not happen?

You can group consumers (see the group.id config), and grouped consumers split the topic's partitions among themselves. Once a consumer drops out, other consumers from the group will take over the partitions that the dropped one was reading.
However, there may be some problems: as a consumer reads a partition it commits offsets back to Kafka, and if a consumer drops after it has processed the received data but before committing the offset, another consumer will start reading from the last committed offset, so some messages may be processed twice. Fortunately, you can manage the strategy of how offsets are committed (see the consumer settings enable.auto.commit, auto.offset.reset, etc.).
The Kafka and Spark Streaming guides provide explanations and possible strategies for managing offsets.
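For illustration, here is a minimal sketch of that kind of offset handling (the topic, group, and table are invented names): auto-commit is disabled and the offset is committed only after the polled records have been written to the consumer's table.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TableWriterConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "table-a-writers");   // hypothetical group name
        props.put("enable.auto.commit", "false");   // commit manually, after the insert
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));   // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    insertIntoTable(record.value());   // your database write goes here
                }
                // Commit only after everything from this poll has been persisted.
                // A crash between the insert and the commit re-delivers the records.
                consumer.commitSync();
            }
        }
    }

    private static void insertIntoTable(String value) {
        // placeholder for the actual JDBC / Spark write
    }
}
```

Note this still only gives at-least-once processing: if the consumer dies between the insert and the commit, the records are re-delivered. To get effectively exactly-once, the inserts have to be idempotent, or the offsets have to be stored in the same transaction as the data, which is one of the strategies the Spark Streaming offset-management guide describes.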

By design Kafka decouples the producer and the consumer. Consumers will read as fast as they can, and producers can produce as fast as they can.
Consumers can be organized into "consumer groups". You can set it up so that multiple consumers read from a single group, or so that an individual consumer reads from its own group.
If you have 1 consumer to 1 group, then (depending on your acknowledgement strategy) you should be able to ensure each message is read only once (per consumer).
Otherwise, if you have multiple consumers reading from a single group, it is the same thing, but the message is read once by one of the n consumers.
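Concretely, which of those two behaviours you get is controlled purely by group.id (the group names below are invented): consumers with different group IDs each see every message, while consumers sharing a group ID split the partitions between them.

```java
import java.util.Properties;

public class GroupConfigs {

    // Minimal consumer properties for a given group.
    static Properties consumerProps(String groupId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("group.id", groupId);
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return p;
    }

    public static void main(String[] args) {
        // Different groups: each group receives its own full copy of the topic,
        // so the table-A writer and the table-B writer both see every message.
        Properties tableAWriter = consumerProps("table-a-writers");
        Properties tableBWriter = consumerProps("table-b-writers");

        // Same group: start several consumers with this config and Kafka splits
        // the partitions among them, so each message is handled by exactly one
        // member of the group.
        Properties anotherTableAWriter = consumerProps("table-a-writers");

        System.out.println(tableAWriter.getProperty("group.id"));        // table-a-writers
        System.out.println(tableBWriter.getProperty("group.id"));        // table-b-writers
        System.out.println(anotherTableAWriter.getProperty("group.id")); // table-a-writers
    }
}
```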

Related

Redis streams - free stuck messages in a consumer group without claiming

Let's say there are messages in a Redis consumer group that have not been processed for N seconds. I am trying to understand whether it is possible to free them and put them back for other members of the consumer group to see. I don't want to claim/process these stuck messages; I just want to make them accessible to other active members of the consumer group. Is this possible?
From what I have understood from the documents, the options are XAUTOCLAIM or a combination of XPENDING and XCLAIM, and neither of these meets my requirements.
Essentially, I am trying to create a standalone process that acts as a monitor and makes those messages visible to active consumers in the consumer group, and I am planning to use this standalone process to do the same for multiple consumer groups (around 30). So I don't want this standalone process to take any other actions.
Please suggest how this can be designed.
Thanks!
Pending messages are removed from Redis' PEL only when they are acknowledged: this is by design and it allows the message re-distribution process to scale across the individual consumers, avoiding the single point of failure of a lone monitoring process like the one you described.
So, in short, what you are looking for can't be done, and I would suggest considering XAUTOCLAIM or XPENDING / XCLAIM in your consumer processes instead.
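If it helps, here is a rough sketch of that direction written against the Jedis 4.x client (the stream, group, and consumer names are invented, and the parameter classes and exact method signatures vary between client versions, so treat this as an outline rather than a drop-in implementation): each consumer periodically looks for entries that have been pending too long, claims them for itself, processes them, and acknowledges them.

```java
import java.util.List;
import java.util.Map;
import redis.clients.jedis.JedisPooled;
import redis.clients.jedis.params.XClaimParams;
import redis.clients.jedis.params.XPendingParams;
import redis.clients.jedis.resps.StreamEntry;
import redis.clients.jedis.resps.StreamPendingEntry;

public class StuckEntryRecovery {

    public static void main(String[] args) {
        JedisPooled redis = new JedisPooled("localhost", 6379);
        String stream = "orders";        // hypothetical stream
        String group = "order-workers";  // hypothetical group
        String me = "worker-1";          // this consumer's name

        // Find entries that have been pending for more than 60 seconds
        // (XPENDING with an IDLE filter; the params API is version-dependent).
        List<StreamPendingEntry> stuck =
                redis.xpending(stream, group, new XPendingParams().idle(60_000).count(10));

        for (StreamPendingEntry pending : stuck) {
            // Claim each one for this consumer (XCLAIM), process it, then ACK it.
            List<StreamEntry> claimed =
                    redis.xclaim(stream, group, me, 60_000, new XClaimParams(), pending.getID());
            for (StreamEntry entry : claimed) {
                process(entry.getFields());
                redis.xack(stream, group, entry.getID());
            }
        }
    }

    private static void process(Map<String, String> fields) {
        // placeholder for the actual work
    }
}
```

Running something like this inside each consumer gives you the re-distribution you were after, just pulled by the consumers themselves rather than pushed by a central monitor.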

Best way to consume multi-service consumption model

We have multiple containerized background services (workers) that will consume multiple Kafka topics while maintaining the chronological order and the integrity of the data. What would be the best approach for consumption: one consumer per topic, or multiple topics per consumer?
consume multiple Kafka topics... maintain the chronological order
This simply isn't possible from a consumer client, regardless of the number of them you have. At least, not without actively sorting the data as you consume it into an in-memory data structure (i.e. not parallelized or distributed).
You could write your data to a database first (ideally using Kafka Connect rather than your own .NET services), then have your apps query the database, sorting by timestamp, instead of reading from Kafka directly.
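For contrast, this is roughly what the "sort as you consume" fallback looks like with a single plain Java consumer (topic names are invented): one consumer can subscribe to several topics, but it can only order the records it has already buffered in memory, which is exactly the limitation described above.

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MergeByTimestamp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "merge-reader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Records from different topics/partitions arrive interleaved,
            // with no global ordering guarantee.
            consumer.subscribe(Arrays.asList("orders", "payments"));   // hypothetical topics

            // Any cross-topic ordering has to happen client-side, in memory,
            // and only covers what has already been fetched.
            Comparator<ConsumerRecord<String, String>> byTimestamp =
                    Comparator.comparingLong(ConsumerRecord::timestamp);
            PriorityQueue<ConsumerRecord<String, String>> buffer = new PriorityQueue<>(byTimestamp);

            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                buffer.add(record);
            }
            while (!buffer.isEmpty()) {
                ConsumerRecord<String, String> next = buffer.poll();
                System.out.println(next.topic() + " @ " + next.timestamp() + ": " + next.value());
            }
        }
    }
}
```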

Event-sourcing: when (and when not) should I use a Message Queue?

I am building a project from scratch using event-sourcing with Java and Cassandra.
My apps will be based on microservices, and in some use cases information will be processed asynchronously. I was wondering what part a message queue (such as RabbitMQ, ActiveMQ Artemis, Kafka, etc.) would play in improving this technology stack, and whether I correctly understand the scenarios in which I wouldn't use one.
I would start with separating messaging infrastructure like RabbitMQ from event streaming/storing/processing like Kafka. These are two different things made for two (or more) different purposes.
Concerning event sourcing, you need a place to store events. This storage must be append-only and must support fast reads of unstructured data based on an identity. One example of such persistence is EventStore.
Event sourcing goes together with CQRS, which means you have to project your changes (events) to another store that you can query. Projections are where events get processed to update the domain object state on the query side. It is important to understand that using messaging infrastructure for projections is generally a bad idea, due to the nature of messaging and the two-phase commit issue.
If you look at how events get persisted, you can see that they are saved to the store in one transaction. If you then need to publish those events, that is another transaction. Since you are dealing with two different pieces of infrastructure, things can break between the two.
The issue with messaging as such is that messages are usually guaranteed to be delivered "at least once" and the order of messages is usually not guaranteed. Also, when your message consumer fails and NACKs a message, it will be redelivered, but usually a bit later, again breaking the sequence.
The ordering and duplication concerns, however, do not apply to event streaming servers like Kafka. Also, EventStore will guarantee once-only, in-order event delivery if you use a catch-up subscription.
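To make the projection idea concrete, here is a minimal, purely in-memory sketch (all class and event names are invented): events are appended to an append-only store keyed by identity, and a separate projection consumes them in order to keep a queryable read model up to date.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy append-only event store keyed by stream (aggregate) identity.
class EventStore {
    private final Map<String, List<Object>> streams = new HashMap<>();

    public synchronized void append(String streamId, Object event) {
        streams.computeIfAbsent(streamId, id -> new ArrayList<>()).add(event);
    }

    public synchronized List<Object> read(String streamId) {
        return new ArrayList<>(streams.getOrDefault(streamId, List.of()));
    }
}

// An example event.
record AccountCredited(String accountId, long amountCents) {}

// The query-side projection: it folds events into state that can be queried.
class BalanceProjection {
    private final Map<String, Long> balances = new HashMap<>();

    // Called for every event, in stream order (e.g. via a catch-up subscription).
    public void apply(Object event) {
        if (event instanceof AccountCredited credited) {
            balances.merge(credited.accountId(), credited.amountCents(), Long::sum);
        }
    }

    public long balanceOf(String accountId) {
        return balances.getOrDefault(accountId, 0L);
    }
}

public class ProjectionDemo {
    public static void main(String[] args) {
        EventStore store = new EventStore();
        BalanceProjection view = new BalanceProjection();

        store.append("account-42", new AccountCredited("account-42", 500));
        store.read("account-42").forEach(view::apply);   // replaying history = projecting

        System.out.println(view.balanceOf("account-42")); // 500
    }
}
```

In a real system the store would be EventStoreDB, Cassandra, Kafka, etc., and the projection would run off a catch-up subscription rather than a direct read, but the shape stays the same: state is rebuilt by folding the history, not by reacting to individually published messages.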
In my experience, messages are used to send commands and to implement event-driven architecture, connecting independent services in a reactive way. Event stores, on the other hand, are used to persist events, and only the events that get there are then projected to the query store and also published to the message bus.
Make sure you are clear on the distinction between send(command) and publish(event). Udi Dahan touches on that topic in his essay on busses and brokers.
In most cases where you are event sourcing, you do not want to be reconstructing state from published events. If you need state, then query the technical authority/book of record for the history, and reconstruct the state from the history.
On the other hand, event-driven activity off of a message queue should be fine. When a single event (plus the subscriber's state) has everything you need, then running off of the bus is fine.
In some cases, you might do both. For example, if you were updating cached views, you'd subscribe to various BobChanged events to know when your cached data was stale; to rebuild a stale view, you would reload a representation of the history and transform it into an updated view.
In the world of event-sourced applications, message queues usually allow you to implement publish-subscribe style communication between producers and consumers. They also usually help you with delivery guarantees: which messages were delivered to which subscribers and which ones were not.
But they don't store all messages indefinitely. You need to have an event store to do any kind of event sourcing.
The question is not 'to queue or not to queue', but it is more like:
can this thing store huge volume of events indefinitely?
does it have publish-subscribe capabilities?
does it provide at-least-once delivery guarantees?
So, you should use something like Kafka or EventStore to have all that out-of-the-box. Alternatively, you can combine event store with message queue manually, but this is going to be more involved.

Ensure In Process Records are Unique ActiveMQ

I'm working on a system where clients enter data into a program, and the save action posts a message to ActiveMQ for more time-intensive processing.
We are running into rare occasions where a record is updated by a client twice in a row and the two updates get processed at the same time by consumers on that ActiveMQ queue. I'm looking for a way to ensure that messages containing records with the same identity are processed in order and only one at a time. To be clear, if records with IDs 1, 1, and 2 (in that order) are sent to ActiveMQ, the first 1 would be processed, then 2 (if the first 1 was still in process), and finally the second 1.
Another requirement (due to volume) is that the consumer be multi-threaded, so there may be 16 threads accessing that queue. This would have to be taken into consideration.
So if you have multiple threads reading that queue and you want the solution to stay close to ActiveMQ, you have to think about how to scale while keeping ordering in mind.
If you have multiple consumers, they may operate at different speeds and you can never be sure which consumer goes before the other. The only way around that is to have a single consumer (you can still achieve high availability by using exclusive consumers).
You can, however, segment the load in other ways. How depends a lot on your application. If you can create, say, 16 "worker" queues (or whatever your max consumer count would be) and distribute the load to these queues while guaranteeing that requests from a single user always go to the same worker queue, message order will be preserved per user.
If you have no good way to divide users into groups, simply take the userID mod MAX_CONSUMER_THREADS as a simple solution.
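A rough sketch of that routing on the producer side, using the JMS API against ActiveMQ (the queue names and payload are invented, and the record ID is hashed here instead of a user ID, since that is the identity in the original question): every update for the same ID lands on the same worker queue, each read by a single consumer, so per-ID order is preserved while different IDs are processed in parallel across the 16 queues.

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WorkerQueueRouter {

    private static final int MAX_CONSUMER_THREADS = 16;

    public static void main(String[] args) throws JMSException {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        long recordId = 1L;   // hypothetical identity of the updated record
        String payload = "{\"recordId\": 1, \"action\": \"update\"}";

        // Same ID -> same bucket -> same worker queue -> in-order processing.
        int bucket = (int) (recordId % MAX_CONSUMER_THREADS);
        Queue workerQueue = session.createQueue("records.worker." + bucket);

        MessageProducer producer = session.createProducer(workerQueue);
        TextMessage message = session.createTextMessage(payload);
        producer.send(message);

        producer.close();
        session.close();
        connection.close();
    }
}
```

On the consumer side you would then run one consumer (or one exclusive consumer, for HA) per records.worker.N queue, which gives the 16-way parallelism without losing per-record ordering.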
There may be better ways to deal with this problem in the consumer logic itself, like keeping track of a sequence number and postponing updates that are out of order (a scheduled delay can be used for that).
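For the scheduled-delay part, ActiveMQ's message scheduler can redeliver a re-enqueued message after a pause; a minimal sketch (the broker must have schedulerSupport enabled, and the queue name and payload are invented):

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class PostponeOutOfOrderUpdate {

    public static void main(String[] args) throws JMSException {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("records.updates"));

        // A consumer that sees sequence number 3 before 2 can re-enqueue the update
        // and ask the broker to deliver it again five seconds later.
        TextMessage retry = session.createTextMessage("{\"recordId\": 1, \"seq\": 3}");
        retry.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 5_000);
        producer.send(retry);

        producer.close();
        session.close();
        connection.close();
    }
}
```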

Kafka - Dynamic / Arbitrary Partitioning

I'm in the process of building a consumer service for a Kafka topic. Each message contains a url to which my service will make an http request. Each message / url is completely independent from other messages / urls.
The problem I'm worried about is how to handle long-running requests. It's possible for some http requests to take 50+ minutes before a response is returned. During that time, I do not want to hold up any other messages.
What is the best way to parallelize this operation?
I know that Kafka's approach to parallelism is to create partitions. However, from what I've read, it seems that you need to define the number of partitions up front, whereas I really want an infinite or dynamic number of partitions (ideally each message gets its own partition, created on the fly).
As an example, let's say I create 1,000 partitions. If 1,001+ messages are produced to my topic, the first 1,000 requests will be made but every message after that will be queued up until the previous request in that partition finishes.
I've thought about making the http requests asynchronous but then I seem to run into a problem when determining what offset to commit.
For instance, on a single partition I can have a consumer read the first message and make an async request. It provides a callback function which commits that offset to Kafka. While that request is waiting, my consumer reads the next message and makes another async request. If that request finishes before the first it will commit that offset. Now, what happens if the first request fails for some reason or my consumer process dies? If I've already committed a higher offset, it sounds like this means my first message will never get reprocessed, which is not what I want.
I'm clearly missing something when it comes to long-running, asynchronous message processing using Kafka. Has anyone experienced a similar issue or have thoughts on how to best solve this? Thanks in advance for taking the time to read this.
You should look at Apache Storm for the processing portion of your consumer and leave the message storage and retrieval to Kafka. What you've described is a very common use case in Big Data (although the 50+ minute thing is a bit extreme). In short, you'll have a small number of partitions for your topic and let Storm stream processing scale the number of components ("bolts" in Storm-speak) that actually make the HTTP requests. A single spout (the kind of Storm component that reads data from an external source) could read the messages from the Kafka topic and stream them to the processing bolts.
I've posted an open source example of how to write a Storm/Kafka application on GitHub.
Some follow-on thoughts to this answer:
1) While I think Storm is the correct platform approach to take, there's no reason you couldn't roll your own by writing a Runnable that performs the HTTP call and then writing some more code to make a single Kafka consumer read messages and process them with multiple threaded instances of your Runnable. The management code required is a bit interesting, but probably easier to write than what it takes to learn Storm from scratch. So you'd scale by adding more instances of the Runnable on more threads.
2) Whether you use Storm or your own multi-threaded solution, you'll still have the problem of how to manage the offsets in Kafka. The short answer is that you'll have to do your own complex offset management. Not only will you have to persist the offset of the last message you read from Kafka, you'll also have to persist and manage the list of in-flight messages currently being processed. That way, if your app goes down, you know which messages were being processed and you can retrieve and re-process them when you start back up. The base Kafka offset persistence doesn't support this more complex need, but it's only there as a convenience for the simpler use cases anyway. You can persist your offset info anywhere you like (ZooKeeper, the file system, or any database).
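A hedged sketch of option 1) combined with the bookkeeping from 2), assuming a single partition for brevity (the topic and group names are invented): one consumer hands records to a thread pool, tracks the set of in-flight offsets, and only ever commits up to the lowest offset that has not finished, so anything still being processed at crash time gets re-delivered.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LongRunningRequestConsumer {

    // Offsets handed to workers but not yet finished (single partition assumed;
    // a real version would track this per partition and persist it externally).
    private static final ConcurrentSkipListSet<Long> inFlight = new ConcurrentSkipListSet<>();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "url-fetchers");     // hypothetical group
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService pool = Executors.newFixedThreadPool(32);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("urls"));   // hypothetical topic
            TopicPartition partition = new TopicPartition("urls", 0);

            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    inFlight.add(record.offset());
                    pool.submit(() -> {
                        fetch(record.value());            // the long-running HTTP call
                        inFlight.remove(record.offset());
                    });
                }
                // Commit only up to the lowest offset still in flight, so a crash
                // re-delivers everything that had not finished. (A real version would
                // also commit once the set drains, bound the in-flight work, and
                // handle partition rebalances.)
                Iterator<Long> unfinished = inFlight.iterator();
                if (unfinished.hasNext()) {
                    consumer.commitSync(Map.of(partition, new OffsetAndMetadata(unfinished.next())));
                }
            }
        }
    }

    private static void fetch(String url) {
        // placeholder for the actual HTTP request
    }
}
```

This is essentially the trade-off described above: partitions no longer block on slow requests, but you take on the complex offset and in-flight tracking yourself.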
