Event sourcing: when (and when not) should I use a Message Queue? - asynchronous

I am building a project from scratch using event-sourcing with Java and Cassandra.
My apps will be based on microservices, and in some use cases information will be processed asynchronously. I was wondering what part a message queue (such as RabbitMQ, ActiveMQ Artemis, Kafka, etc.) would play in improving the technology stack in this environment, and whether I correctly understand the scenarios where I wouldn't use one.

I would start with separating messaging infrastructure like RabbitMQ from event streaming/storing/processing like Kafka. These are two different things made for two (or more) different purposes.
Concerning event sourcing, you need a place to store events. This storage must be append-only and support fast reads of unstructured data based on an identity. One example of such persistence is EventStore.
Event sourcing goes together with CQRS, which means you have to project your changes (events) to another store that you can query. Projections are where events get processed to update the read-side state of the domain objects. It is important to understand that using messaging infrastructure for projections is generally a bad idea, due to the nature of messaging and the two-phase commit issue.
If you look at how events get persisted, you can see that they get saved to the store as one transaction. If you then need to publish events, this will be another transaction. Since you are dealing with two different pieces of infrastructure, things can get broken.
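To make that concrete, here is a minimal Java sketch (the EventStore and MessageBus interfaces are hypothetical placeholders, not a particular product's API): the append and the publish are separate operations against separate systems, and a crash between them leaves the two sides inconsistent.

```java
import java.util.List;

// Hypothetical interfaces standing in for two separate pieces of infrastructure.
interface EventStore {
    void append(String streamId, List<Object> events); // transaction #1
}

interface MessageBus {
    void publish(Object event);                        // transaction #2
}

class OrderService {
    private final EventStore eventStore;
    private final MessageBus messageBus;

    OrderService(EventStore eventStore, MessageBus messageBus) {
        this.eventStore = eventStore;
        this.messageBus = messageBus;
    }

    void placeOrder(String orderId, List<Object> newEvents) {
        // 1) Persist to the event store -- this commit can succeed...
        eventStore.append("order-" + orderId, newEvents);

        // 2) ...and the process can crash right here, before publishing.
        //    The store and the bus now disagree, and there is no single
        //    transaction spanning both systems to fall back on.
        newEvents.forEach(messageBus::publish);
    }
}
```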
The messaging issue itself is that messages are usually guaranteed to be delivered "at least once", while the order of messages is usually not guaranteed. Also, when your message consumer fails and NACKs a message, it will be redelivered, but usually a bit later, again breaking the sequence.
The ordering and duplication concerns, however, do not apply to event streaming servers like Kafka. Also, EventStore will guarantee once-only, in-order event delivery if you use a catch-up subscription.
In my experience, messages are used to send commands and to implement an event-driven architecture that connects independent services in a reactive way. Event stores, on the other hand, are used to persist events, and only events that land there are then projected to the query store and also published to the message bus.
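As a rough illustration of the projection side, here is a hedged Java sketch of a catch-up subscription feeding a query store; the CatchUpSubscription and QueryStore types are assumptions made up for the example, not a specific vendor's API.

```java
import java.util.function.Consumer;

// Hypothetical abstractions for the example.
record RecordedEvent(long position, String type, String data) {}

interface CatchUpSubscription {
    // Replays all events from a given position, then keeps delivering new ones in order.
    void start(long fromPosition, Consumer<RecordedEvent> handler);
}

interface QueryStore {
    void upsertAccountBalance(String accountId, long balance); // denormalized read model
    long lastProcessedPosition();
    void saveProcessedPosition(long position);
}

class AccountBalanceProjection {
    private final QueryStore queryStore;

    AccountBalanceProjection(QueryStore queryStore) {
        this.queryStore = queryStore;
    }

    void run(CatchUpSubscription subscription) {
        // Resume from the last checkpoint so the projection can catch up after a restart.
        subscription.start(queryStore.lastProcessedPosition(), event -> {
            if (event.type().equals("MoneyDeposited")) {
                // Project the event into the read model (payload parsing simplified for brevity).
                queryStore.upsertAccountBalance("acc-1", Long.parseLong(event.data()));
            }
            queryStore.saveProcessedPosition(event.position());
        });
    }
}
```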

Make sure you are clear on the distinction between send(command) and publish(event). Udi Dahan touches on that topic in his essay on buses and brokers.
In most cases where you are event sourcing, you do not want to be reconstructing state from published events. If you need state, then query the technical authority/book of record for the history, and reconstruct the state from the history.
On the other hand, event-driven activity off of a message queue should be fine. When a single event (plus the subscriber's state) contains everything you need, then running off of the bus is fine.
In some cases, you might do both. For example, if you were updating cached views, you'd subscribe to various BobChanged events to know when your cached data was stale; to rebuild a stale view, you would reload a representation of the history and transform it into an updated view.
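A small Java sketch of that combined approach, using made-up types: the bus subscription only marks the cached view stale, while the rebuild queries the book of record for the full history.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical types for the example; not a specific library's API.
record BobChanged(String bobId) {}
record BobView(String bobId, String summary) {}

interface BookOfRecord {
    List<Object> history(String bobId); // the authoritative event history
}

class BobViewCache {
    private final BookOfRecord bookOfRecord;
    private final Map<String, BobView> views = new ConcurrentHashMap<>();

    BobViewCache(BookOfRecord bookOfRecord) {
        this.bookOfRecord = bookOfRecord;
    }

    // Subscribed to the bus: the event alone tells us the cached view is stale.
    void on(BobChanged event) {
        views.remove(event.bobId());
    }

    // On demand: rebuild the view from the authoritative history, not from bus messages.
    BobView get(String bobId) {
        return views.computeIfAbsent(bobId, id -> {
            List<Object> history = bookOfRecord.history(id);
            return new BobView(id, history.size() + " events folded into a view");
        });
    }
}
```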

In the world of event-sourced applications, message queues usually give you publish-subscribe style communication between producers and consumers. They also usually help you with delivery guarantees: which messages were delivered to which subscribers and which ones were not.
But they don't store all messages indefinitely. You need to have an event store to do any kind of event sourcing.
The question is not 'to queue or not to queue', but it is more like:
can this thing store huge volume of events indefinitely?
does it have publish-subscribe capabilities?
does it provide at-least-once delivery guarantees?
So, you should use something like Kafka or EventStore to have all that out-of-the-box. Alternatively, you can combine event store with message queue manually, but this is going to be more involved.
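For illustration, here is a minimal sketch using the standard Kafka Java client (broker address, topic, and group id are placeholders): because the topic retains the events, a consumer group with no committed offsets re-reads the whole history from the beginning, and since delivery is at-least-once, the handling must be idempotent.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderEventsReplayer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "order-projection-rebuild");  // a fresh group id replays from the start
        props.put("auto.offset.reset", "earliest");         // start at the beginning when no offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("order-events"));     // placeholder topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // At-least-once delivery: whatever projection update happens here must be idempotent.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```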

Related

Axon FW 4.6 comes with Dead Letter Queue support, but is it possible to still have rollback support in case of exception?

We're looking into using the new feature of Axon, the dead-letter queue (DLQ).
In our previous application (Axon 4.5.x) we have a lot of event handlers updating projections. Our default is to rethrow exceptions when they occur, which triggers a rollback of the database updates. It is perhaps not best practice to rely on this behaviour (because it cannot roll back everything; e.g. sending an email from an event handler cannot be reverted, of course).
This behaviour seems to get lost when introducing the DLQ in our applications, which has a big impact on our current application flow (projections are now updated where they previously weren't). This makes upgrading not that easy.
Is it possible to still get the old behaviour (transaction rolled back in case of exceptions) together with DLQ processing?
What we tried was building a test application to exercise the new DLQ features. While playing around, everything looked fine in case of exceptions (they were moved to the DLQ), but the projections still got updated (not rolled back as before).
We throw an exception after the .save() of the projection, simulating a database failure, to see whether the events involved (we have multiple event handlers updating projections for one event) got rolled back.
You need to choose here, @davince. Storing a dead letter in the dead-letter queue similarly requires a database transaction.
To ensure the token still progresses and the dead letter is entered, the framework uses the existing transaction.
Furthermore, in practical terms the event handling was successful.
Hence, a rollback for only some of the parts wouldn't be feasible.
The best way to deal with this, as is mentioned in the Reference Guide, is to make your event handlers idempotent:
Before configuring a SequencedDeadLetterQueue it is vital to validate whether your event handling functions are idempotent.
As a processing group consists of several Event Handling Components (as explained in the intro of this chapter), some handlers may succeed in event handling while others will not. As a configured dead-letter queue does not stall event handling, a failure in one Event Handling Component does not cause a rollback for other event handlers. Furthermore, as the dead-letter support is on the processing group level, dead-letter processing will invoke all event handlers for that event within the processing group.
Thus, if your event handlers are not idempotent, processing letters may result in undesired side effects.
Hence, we strongly recommend making your event handlers idempotent when using the dead-letter queue.
The principle of exactly-once delivery is no longer guaranteed; at-least-once delivery is the reality to cope with.
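As a rough Java sketch of what idempotency means here (the event and repository types are invented for the example; only the @EventHandler shape follows Axon): re-processing the same dead-lettered event must not apply the projection change twice.

```java
import org.axonframework.eventhandling.EventHandler;

// Hypothetical event and projection types for the example.
record CustomerRegistered(String customerId, String email) {}

interface CustomerProjectionRepository {
    boolean existsById(String customerId);
    void save(String customerId, String email);
}

class CustomerProjector {
    private final CustomerProjectionRepository repository;

    CustomerProjector(CustomerProjectionRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    void on(CustomerRegistered event) {
        // Idempotent: if the dead-lettered event is processed again later,
        // the projection row is not inserted (or mutated) a second time.
        if (repository.existsById(event.customerId())) {
            return;
        }
        repository.save(event.customerId(), event.email());
    }
}
```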
I understand this requires some rework on your end, @davince. Although the feature is powerful, it comes with a certain level of responsibility before you use it.
I hope the above clarifies this for you.
As an addition, I'd like to point out that the version upgrade in itself does not require you to use the dead-letter queue. Hence, this change shouldn't impose any strain on updating to the latest release.
Update 1
Sometimes you need to think about an issue a bit longer. I was just wondering about the following things regarding your setup. Perhaps I can help out on that front:
What storage mechanism do you use to store projections in?
Where are you storing your tokens?
Where are you planning to store your dead-letters?
How are you invoking the storage layer from your event handlers?
Update 2
Thanks for sharing that you're using PostgreSQL for your projections, tokens, and dead letters. And that you're using JPA entities for storage.
It gives more certainty about your setup, as it may impact how your system would react in case of a rollback.
However, as it's rather vanilla/regular, the shared comment from the Reference Guide still applies.
This, sadly enough, means some work on your end, @davince. I hope the road forward to start using the SequencedDeadLetterQueue from Axon Framework is clear.
By the way, if you ever have recommendations on how the framework or the documentation may be improved, be sure to file issues in GitHub here and here, respectively.

Doing explicit replays in Axon 3.x

I have been trying out Axon recently and read lots of stuff. From what I understand, the concept of event sourcing says that system state is rebuilt from the event store, while CQRS updates a view model that can be queried, with the command side not being queryable.
I would like to implement rebuilding of state myself each time the UI requests some information. I've implemented the event processor and seen its replay capabilities. However, I can't seem to find any evidence that Axon allows replays triggered on user demand. Therefore I'm asking here whether it's possible to trigger a replay myself to build a DAO required by the UI, or whether Axon only supports the CQRS way of doing it.
When Axon does a replay (when the token is deleted), does it read from the snapshot table (if implemented) and then from the event table, or does it always start from the beginning of time?
Dropping the token manually is currently the way to go if you want your query side to be rebuilt.
So, if you'd like to trigger a replay, you'd have to add a component which does that for you.
This might be a tricky process as well, since you'd probably not want any other parties trying to access your view(s) while they're rebuilding.
Take note that for this you do need to put your event handling components in a TrackingEventProcessor; otherwise it will have no recollection, in the form of a TrackingToken, of how many of the events it has handled.
The snapshot table is meant for your aggregates, to ease the loading process. When you've set up Axon to use the (Caching)EventSourcingRepository, it will first load an aggregate from a snapshot and then source the remaining events.
So, to answer your second question: when you drop the token, it will read from the domain event entry table from the beginning of time.
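To picture the difference, here is a loose Java sketch of snapshot-assisted aggregate loading (the types are hypothetical, not Axon's internals): the latest snapshot is the starting point, and only events recorded after it are sourced onto the aggregate. A projection replay, by contrast, starts from the first event.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical abstractions illustrating snapshot-assisted loading.
class BankAccount {
    long balance;
    void apply(Object event) { /* mutate state per event type */ }
}

record Snapshot(long lastSequenceNumber, BankAccount state) {}

interface SnapshotStore {
    Optional<Snapshot> latest(String aggregateId);
}

interface DomainEventStore {
    List<Object> eventsAfter(String aggregateId, long sequenceNumber);
}

class BankAccountRepository {
    private final SnapshotStore snapshots;
    private final DomainEventStore events;

    BankAccountRepository(SnapshotStore snapshots, DomainEventStore events) {
        this.snapshots = snapshots;
        this.events = events;
    }

    BankAccount load(String aggregateId) {
        // Start from the snapshot when one exists, otherwise from an empty aggregate.
        Snapshot snapshot = snapshots.latest(aggregateId)
                .orElse(new Snapshot(-1L, new BankAccount()));
        BankAccount account = snapshot.state();

        // Source only the events recorded after the snapshot was taken.
        for (Object event : events.eventsAfter(aggregateId, snapshot.lastSequenceNumber())) {
            account.apply(event);
        }
        return account;
    }
}
```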
Hope this helps!
Update
For those looking for replay support in Axon Framework, it is recommended to look at this page of the Reference Guide. It describes the replay support constructed for Streaming Event Processors (like the TrackingEventProcessor).
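For completeness, later Axon versions (3.2 and up) expose a reset on the processor itself, so you do not have to delete token rows by hand. A hedged sketch, assuming the Axon 4 configuration API and a tracking processor whose handlers support resets:

```java
import org.axonframework.config.Configuration;
import org.axonframework.eventhandling.TrackingEventProcessor;

class ProjectionRebuilder {

    // Triggers a replay for one processing group by resetting its tracking token.
    void rebuild(Configuration configuration, String processorName) {
        configuration.eventProcessingConfiguration()
                .eventProcessor(processorName, TrackingEventProcessor.class)
                .ifPresent(processor -> {
                    processor.shutDown();    // the processor must not be running during a reset
                    processor.resetTokens(); // moves the token back to the start of the stream
                    processor.start();       // events are handled again from the beginning
                });
    }
}
```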

Event Driven Architecture - Service Contract Design

I'm having difficulty conceptualising a requirement I have as something that will fit into our nascent SOA/EDA.
We have a component I'll call the Data Downloader. This is a facade for an external data provider that has both high latency and a cost associated with every request. I want to take this component and create a re-usable service out of it with a clear contract definition. It is up to me to decide how that contract should work, however its responsibilities are two-fold:
Maintain the parameter list (called a Download Definition) for an upcoming scheduled download
Manage the technical details of the communication to the external service
Basically, it manages the 'how' of the communication. The 'what' and the 'when' are the responsibilities of two other components:
The 'what' is managed by 'Clients', who are responsible for determining the parameters for the download.
The 'when' is managed by a dedicated scheduling component. Because of the cost associated with the downloads we'd like to batch the requests intraday.
Hopefully this sequence diagram explains the responsibilities of the services:
Because each of the responsibilities are split out in three different components, we get all sorts of potential race conditions with async messaging. For instance when the Scheduler tells the Downloader to do its work, because the 'Append to Download Definition' command is asynchronous, there is no guarantee that the pending requests from Client A have actually been serviced. But this all screams high-coupling to me; why should the Scheduler necessarily know about any 'prerequisite' client requests that need to have been actioned before it can invoke a download?
Some potential solutions we've toyed with:
Make the 'Append to Download Definition' command a blocking request/response operation. But this then negates the performance and scalability benefits of having an EDA.
Build something in the Downloader to ensure that it only runs when there are no pending commands in its incoming request queue. But that then introduces a dependency on the underlying messaging infrastructure which I don't like either.
Makes me think I'm thinking about this problem in a completely backward way. Or is this just a classic case of someone trying to fit a synchronous RPC requirement into an async event-driven architecture?
The thing I like most about EDA and SOA is that they almost completely eliminate the notion of a race condition. As long as your events are associated with some association key (e.g. downloadId), the problem you describe can be addressed with several solutions of different complexities, depending on your needs. I'm not sure I totally understand the described use case, but I will try my best.
Off the top of my head:
DataDownloader maintains a list of received Download Definitions and a list of triggered downloads. When a definition is received, it is checked against the triggers list to see if the associated download has already been triggered, and if it has, the download is executed. When a TriggerDownloadCommand is received, the definitions list is checked for a definition with the associated downloadId (see the sketch after this list).
For more complex situations, consider using the Saga pattern, which is implemented by some third-party messaging infrastructures. With some simple configuration, a saga will handle both messages and initiate the actual download when the required condition is satisfied. This is more appropriate for distributed systems, where an in-memory collection is out of the question.
You can also configure your scheduler (or the trigger command handler) to retry when an error is signaled (e.g. by an exception), in order to avoid that race condition, and ultimately give up after a specified timeout.
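A minimal in-memory Java sketch of the first option (all names are made up; single-threaded handling per downloadId is assumed): whichever message arrives last for a given downloadId is the one that actually starts the download.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical message types keyed by an association id.
record DownloadDefinitionAppended(String downloadId, String parameters) {}
record TriggerDownloadCommand(String downloadId) {}

class DataDownloader {
    private final Map<String, String> definitions = new ConcurrentHashMap<>();
    private final Set<String> triggered = ConcurrentHashMap.newKeySet();

    void handle(DownloadDefinitionAppended message) {
        definitions.put(message.downloadId(), message.parameters());
        // If the trigger already arrived, the definition was the missing piece: run now.
        if (triggered.contains(message.downloadId())) {
            execute(message.downloadId());
        }
    }

    void handle(TriggerDownloadCommand command) {
        triggered.add(command.downloadId());
        // If the definition already arrived, run immediately; otherwise wait for it.
        if (definitions.containsKey(command.downloadId())) {
            execute(command.downloadId());
        }
    }

    private void execute(String downloadId) {
        System.out.println("Calling the external provider for " + downloadId
                + " with " + definitions.get(downloadId));
    }
}
```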
Does this help?

NServiceBus: when are too many messages used?

When considering a service in NServiceBus, at what point do you start questioning whether the number of messages handled by the service is too many, and start breaking them out into a new service?
Consider the following: I have a sales service which can currently be broken into a few distinct business components, these are sales order validation, sales order processing, purchase order validation and purchase order processing.
There are currently about 20 message handlers and 2 sagas used within this service. My concern is that during high-volume traffic from my website, the number of messages can spike into the hundreds. Considering that the messages need to be processed in the order they are taken off the queue, this can cause a delay for the last in the queue (depending on what processing each message does).
Separating the concerns within a service into smaller business components makes things a little easier, I find. Sure, it's only a logical separation, but it seems to provide a layer of clarity and understanding. To me it seems an easier option than creating new services, where in the end the more services I have, the more maintenance I need to do.
Does anyone have any similar concerns to this?
I think you have actually answered your own question :)
As soon as the message volume reaches a point where the lag becomes an issue, you could look at adding instances of your endpoint. You do not necessarily need to reduce the number of handlers. You could simply install the service a number of times and have specific message types sent to the relevant endpoint by mapping.
It then becomes a matter of a simple instance installation and some config changes. You can split messages either on sending, so that messages from a particular source end up on a particular endpoint (a priority endpoint, perhaps), or on message type.
I happened to do the same thing on a previous project (not using NServiceBus, though) where we needed document conversion messages coming from the UI to be processed ASAP. We simply installed the conversion service again with its own set of queues and changed the UI configuration to send the conversion messages to the new endpoint. The background conversion messages were still going to the previous endpoint. So here the source determined the separation.

Does the concept of shared sessions exist in ASP.NET?

I am working on a web application (ASP.NET) game that would consist of a single page, and on that page, there would be a game board akin to Monopoly. I am trying to determine what the best architectural approach would be. The main requirements I have identified thus far are:
Up to six users share a single game state object.
The users need to keep (relatively) up to date on the current state of the game, i.e. whose turn it is, what did the active user just roll, how much money does each other user have, etc.
I have thought about keeping the game state in a database, but it seems like overkill to keep updating the database when a game state object (say, in a cache) could be kept up to date. For example, the flow might go like this:
Receive request for data from a user.
Look up data in database. Create object from that data.
Verify the user has permission to perform the request based on the game's state (i.e. make sure it's really their turn or that they have enough money to buy that property).
Update the game object.
Write the game object back to the database.
Repeat for every single request.
Consider that a single server would be serving several concurrent games.
I have thought about using AJAX to make requests to an ASP.NET page.
I have thought about making AJAX requests to a web service from Silverlight.
I have thought about using WCF duplex channels in Silverlight.
I can't figure out what the best approach is. All seem to have their drawbacks. Does anyone out there have experience with this sort of thing and care to share those experiences? Feel free to ask your own questions if I am being too ambiguous! Thanks.
Update: Does anyone have any suggestions for how to implement this connection to the server based on the three options I mention above?
You could use the ASP.NET Cache or the Application state to store the game object, since these are shared between users. The cache would probably be the best place, since objects can be removed from it to save memory.
If you store the game object in the cache using a unique key, you can then store that key in each visitor's Session and use it to retrieve the shared game object. If the cache has been cleared, you will recreate the object from the database.
While updating a database seems like overkill, it has advantages when it comes time to scale up, as you can have multiple webheads talking to one backend.
A larger concern is how you communicate the game state to the clients. While a full update of the game state from time to time ensures that any changes are caught and all clients remain in synchronization even if they miss a message, gamestate is often quite large.
Consider as well that usually you want game-state messages to trigger animations or other display updates to portray the action (for example, if a piece moves, it shouldn't just appear at the destination in most cases... it should move across the board).
Because of that, one solution that combines the best of both worlds is to keep a database table that collects all of the actions performed, with sequential IDs. When a client requests an update, it can be given all the actions after the last one it knew about, and the client can "act out" the moves. This means that even if a request fails, the client can simply retry it and none of the actions will be lost.
The server can then maintain an internal view of the gamestate as well, from the same data. It can also reject illegal actions and prevent them from entering the game action table (and thus prevent other clients from being incorrectly updated).
Finally, because the server does have the "one true" gamestate, the clients can periodically check against that (which will allow you to find errors in your client or server code). Because the server database should be considered the primary, you can retransmit the entire gamestate to any client that gets incorrect state, so minor client errors won't (potentially) ruin the experience (except perhaps a pause while the state is downloaded).
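A minimal Java sketch of that action-log idea (types are illustrative only; in a real system the log would be a database table): actions are appended with sequential IDs, and a client polls with the last ID it has already acted out.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative types only; the server validates actions before they reach the log.
record GameAction(long id, String player, String description) {}

class GameActionLog {
    private final List<GameAction> actions = new ArrayList<>();
    private long nextId = 1;

    // Only legal actions are appended; each gets the next sequential id.
    synchronized GameAction append(String player, String description) {
        GameAction action = new GameAction(nextId++, player, description);
        actions.add(action);
        return action;
    }

    // Clients ask: "give me everything after the last action I know about".
    // Retrying a failed request is safe because nothing is ever removed from the log.
    synchronized List<GameAction> actionsAfter(long lastKnownId) {
        return actions.stream()
                .filter(a -> a.id() > lastKnownId)
                .toList();
    }
}
```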
Why don't you just create an application-level object to store your details? See Application State and Global Variables in ASP.NET for details. You can use the session ID as a key for each player's data.
You could also use the Cache to do the same thing with a long timeout. This has the advantage that older data can be flushed from the Cache after a period of time, e.g. 6 hours or whatever.
