Granular domain events (.NET Core)

Initially we were using domain events to handle communication with external systems. For instance, every time a user updated his phone number or his name, we raised a PhoneNumberUpdated or a NameUpdated event respectively. These are then caught by handlers, processed, and sent to other systems.
public void SetName(Name name)
{
    if (Name == name) return;
    (...)
    RaiseEvent(new NameUpdated(Id, name));
}

public void SetPhoneNumber(PhoneNumber number, PhoneNumberType type)
{
    RaiseEvent(new PhoneNumberUpdated());
}
It works great as long as we do not need to "aggregate" events. For example, we got a new requirement asking us to send one single email whenever a user updates his name and/or his phone number. With the current structure, our handlers would be notified multiple times (once for each event raised), and this would result in multiple emails being sent.
Making our events more generic doesn't seem to be a good solution. But then how would we aggregate several events raised within one transaction?
Thx
Seb

I believe your new requirement is a separate concern from your actual domain. Your domain generates events that describe what has happened. User notification, on the other hand, is a projection of that stream of events into email form. Just like you would keep your read model requirements separate from your domain, you should keep this separate as well.
A simple solution would be to capture the events you care about into a table and then, once a day on a schedule, send one email per aggregate.
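A minimal sketch of that scheduled job, assuming a hypothetical PendingNotifications table your handlers append to and an injected email sender (all names here are invented for illustration):

public async Task SendDigestEmailsAsync()
{
    // PendingNotifications is a hypothetical table that the event handlers
    // append a row to for every NameUpdated/PhoneNumberUpdated they capture.
    var pending = await _dbContext.PendingNotifications
        .Where(n => !n.Processed)
        .ToListAsync();

    // Group by aggregate so each user gets exactly one email,
    // no matter how many events were captured for them.
    foreach (var group in pending.GroupBy(n => n.AggregateId))
    {
        await _emailSender.SendAsync(group.Key, BuildDigestBody(group)); // hypothetical sender

        foreach (var notification in group)
            notification.Processed = true;
    }

    await _dbContext.SaveChangesAsync();
}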

Related

Streaming multiple events of different types using Axon

I am working on building streaming APIs for client/server communication using Axon and ServerSentEvents, and I am not sure whether it is possible to stream and identify multiple different events using the Axon query update emitter and a subscription query.
I am using Axon's QueryUpdateEmitter.emit to emit updates from a projection based on different events. The emitter runs in the projection, whereas the subscription query takes place in the REST API that is supposed to stream the server-sent events to the client.
For example,
I want to emit 3 different events for a use case which creates, updates and deletes an entity.
I am wondering if we can emit different types of data from different events but still combine them in one stream, i.e. send the actual object upon entity create and update, but, since I don't have any entity/data to emit in the case of delete, I am wondering whether to send a simple message for delete instead.
I also want a way to specify the type of event while emitting, so that when the ServerSentEvent is built from the subscription query, I can specify the type/action (for example, differentiate between a create or update event) along with the data.
The main idea is to emit different events and add them to one stream, despite knowing that all events may not return exactly the same data (create and update vs. delete), as part of one subscription query, and to be able to accurately identify each event and set the appropriate event type on the stream of ServerSentEvents.
Any ideas on how I can achieve this?
Here's how I am emitting an event upon creation using QueryUpdateEmitter:
@EventHandler
public void on(LibraryCreatedEvent event, @Timestamp Instant timestamp) {
    final LibrarySummaryEntity librarySummary = mapper.createdEventToLibrarySummaryEntity(event, timestamp);
    repository.save(librarySummary);
    log.debug("On {}: Saved the first summary of the library named {}", event.getClass().getSimpleName(), event.getName());
    queryUpdateEmitter.emit(
            AllLibrarySummariesQuery.class,
            query -> true,
            librarySummary
    );
    log.debug("emitted library summary: {}", librarySummary.getId());
}
Since I need to distinguish between create and update, I tried using GenericSubscriptionQueryUpdateMessage.asUpdateMessage upon the update event and added some metadata along with it, but I am not sure whether that is the right direction, as I don't know how to retrieve that information during the subscription query.
Map<String, String> map = new HashMap<>();
map.put("Book Updated", event.getLibraryId());
queryUpdateEmitter.emit(AllLibrarySummariesQuery.class, query -> true,
        GenericSubscriptionQueryUpdateMessage.asUpdateMessage(librarySummary).withMetaData(map));
Here's how I am creating the subscription query:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibrarySummaryEntity> result =
        queryGateway.subscriptionQuery(
                new AllLibrarySummariesQuery(),
                ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
                ResponseTypes.instanceOf(LibrarySummaryEntity.class));
And here is the part where I am building the server-sent event (.event is where I want to specify the type of event - create/update/delete - and send the applicable data accordingly):
Flux<ServerSentEvent<LibrarySummaryResponseDto>> sseStream = result.initialResult()
        .flatMapMany(Flux::fromIterable)
        .map(value -> mapper.libraryEntityToResponseDto(value))
        .concatWith((streamingTimeout == -1)
                ? result.updates().map(value -> mapper.libraryEntityToResponseDto(value))
                : result.updates().take(Duration.ofMinutes(streamingTimeout)).map(value -> mapper.libraryEntityToResponseDto(value)))
        .log()
        .map(created -> ServerSentEvent.<LibrarySummaryResponseDto>builder()
                .id(created.getId())
                .event("library creation")
                .data(created)
                .build())
        .doOnComplete(() -> log.info("streaming completed"))
        .doFinally(signal -> result.close());
As long as the object you return matches the expected type when making the subscription query, you should be good!
Note that this means you will have to make a response object that can fit all your scenarios. Whether that response is something you'd emit as the update (through the QueryUpdateEmitter) or the result of a map operation applied where you return the subscription query is a different question, though.
Ideally, you'd decouple your internal messages from what you send outward, like with SSE. To move to a more specific solution, you could benefit from having a Flux response type. You can simply attach any mapping operations to adjust the responses emitted by the QueryUpdateEmitter to your desired SSE format.
Concluding, the short answer is "yes you can," as long as the emitted response object matches the expected update type when dispatching the subscription query on the QueryGateway.
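As an illustration, here is a minimal sketch of such a shared response object (the LibraryUpdateResponse type and its action values are invented for this example):

// Hypothetical update type shared by all three emits; "action" tells the
// SSE layer which event name to set, and "data" may be null for deletes.
public class LibraryUpdateResponse {
    private final String action;             // "created", "updated" or "deleted"
    private final LibrarySummaryEntity data; // null when action is "deleted"

    public LibraryUpdateResponse(String action, LibrarySummaryEntity data) {
        this.action = action;
        this.data = data;
    }

    public String getAction() { return action; }
    public LibrarySummaryEntity getData() { return data; }
}

The projection would then emit this wrapper for every event, e.g. queryUpdateEmitter.emit(AllLibrarySummariesQuery.class, query -> true, new LibraryUpdateResponse("deleted", null)) on delete; the subscription query declares ResponseTypes.instanceOf(LibraryUpdateResponse.class) as its update type, and the SSE builder can call .event(update.getAction()) instead of hard-coding "library creation".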

Axon partial replay: how do I get a TrackingToken for the startPosition of the replay?

I want Axon to replay events, not all of them but only part of the stream.
A full replay is up and running, but when I want a partial replay I need a TrackingToken startPosition for the method resetTokens(). My problem is how to get this token for the partial replay.
I tried with GapAwareTrackingToken, but this does not work.
public void resetTokensWithRestartIndexFor(String trackingEventProcessorName, Long restartIndex) {
    eventProcessingConfiguration
            .eventProcessorByProcessingGroup(trackingEventProcessorName, TrackingEventProcessor.class)
            .filter(trackingEventProcessor -> !trackingEventProcessor.isReplaying())
            .ifPresent(trackingEventProcessor -> {
                // Shut down this streaming processor.
                trackingEventProcessor.shutDown();
                // Reset the tokens to prepare the processor with a start index for the replay.
                trackingEventProcessor.resetTokens(GapAwareTrackingToken.newInstance(restartIndex - 1, Collections.emptySortedSet()));
                // Start the processor to initiate the replay.
                trackingEventProcessor.start();
            });
}
When I use the GapAwareTrackingToken, I get this exception:
[] - Resolved [java.lang.IllegalArgumentException: Incompatible token type provided.]
I see that there is also a GlobalSequenceTrackingToken I could use, but I don't see any documentation about when these can/should be used.
The main "challenge" when doing a partial reset, is that you need to be able to tell where to reset to. In Axon, the position in a stream is defined with a TrackingToken.
The source that you read from will provide you with such a token with each event that it provides. However, when you're doing a reset, you probably didn't store the relevant token while you were consuming those events.
You can also create tokens using any StreamableMessageSource. Generally, this is your Event Store, but if you read from other sources, it could be something else, too.
The StreamableMessageSource provides 4 methods to create a token:
createHeadToken - the position at the most recent edge of the stream, where only new events will be read
createTailToken - the position at the very beginning of the stream, allowing you to replay all events.
createTokenAt(Instant) - the most recent position in the stream that will return all events created on or after the given Instant. Note that some events may still carry a timestamp earlier than the given one, as event creation time and event storage order aren't guaranteed to match.
createTokenSince(Duration) - similar to createTokenAt, but accepting an amount of time to go back.
So in your case, createTokenAt should do the trick.
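A minimal sketch of what that could look like, adapting the method from the question (it assumes the processor streams from the Event Store, injected here as eventStore, which implements StreamableMessageSource):

public void resetTokensAt(String trackingEventProcessorName, Instant point) {
    eventProcessingConfiguration
            .eventProcessorByProcessingGroup(trackingEventProcessorName, TrackingEventProcessor.class)
            .filter(trackingEventProcessor -> !trackingEventProcessor.isReplaying())
            .ifPresent(trackingEventProcessor -> {
                trackingEventProcessor.shutDown();
                // createTokenAt returns a TrackingToken of whatever concrete type
                // the store uses, avoiding the "Incompatible token type" exception.
                trackingEventProcessor.resetTokens(eventStore.createTokenAt(point));
                trackingEventProcessor.start();
            });
}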

How to get a specified message from an Azure Service Bus Topic and then delete it from the Topic?

I'm writing functionality for receiving messages from an Azure Service Bus Topic and deleting a specified message from the Topic. Before deleting that message, I need to send it to another Topic.
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: WorkOrderNumber:{message.MessageId} SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
    Console.WriteLine("Enter the WorkOrder Number you want to delete:");
    string workOrderNumber = Console.ReadLine();
    if (message.MessageId == workOrderNumber)
    {
        // TODO: Post the message to the other (priority) topic, then delete it from the current topic.
        var status = await SendMessageToBus(message);
        if (status == true)
        {
            await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
            Console.WriteLine($"Successfully deleted your message from Topic:{NormalTopicName}-WorkOrderNumber:" + message.MessageId);
        }
        else
        {
            Console.WriteLine($"Failed to send message to PriorityTopic:{PriorityTopicName}-WorkOrderNumber:" + message.MessageId);
        }
    }
    else
    {
        Console.WriteLine($"Failed to delete your message from Topic:{NormalTopicName}-WorkOrderNumber:" + workOrderNumber);
        // Complete the message so that it is not received again.
        // This can be done only if the subscriptionClient is created in ReceiveMode.PeekLock mode (which is the default).
        await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        // Note: Use the cancellationToken passed as necessary to determine if the subscriptionClient has already been closed.
        // If subscriptionClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
        // to avoid unnecessary exceptions.
    }
}
My issues with this approach are:
It's not scalable: what if the message is the 50th in the collection? We'd have to iterate through the 49 before it and mark each one as handled.
It's a long-running process.
To avoid these problems, I want to get the specified message from the queue based on an index or sequence number, so that I can then delete it from the topic.
So, can anyone suggest how to resolve this problem?
So if I understand your question and comments correctly, you are trying to do something like this:
Incoming messages come into either a standard topic or a priority topic.
Some process checks messages in the standard topic and "moves" them to the priority topic based on some criteria, by deleting them from the standard topic and adding them to the priority topic.
Messages are processed as normal.
As Sean noted, step 2 simply won't work. Service Bus is a first-in-first-out-ish system where a consumer simply picks up the next available message. You can sort through a queue by pulling out all the messages and abandoning/completing them based on specific criteria, but scaling is a problem. In addition, you can think of each topic subscription as its own separate queue: removing a message from one subscription does not remove it from any of the other subscriptions.
Instead of trying to pull everything out of the topics and then putting back the ones you want to keep, I would suggest adding a sorting queue in front of the two topics. If you don't need to sort the high-priority messages, you could put this sorting process in front of the standard-priority topic only.
This is how the process would work:
Incoming messages are added to a sorting queue. Note that this is a single queue, not a topic. At this point in the process we want to ensure there is only one copy of each message.
A sorting process moves messages from the sorting queue into either the standard or the priority topic, as appropriate. Using something like Azure Functions, you can scale this process fairly easily.
Messages are processed from the topics as normal.
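A minimal sketch of that sorting step as an Azure Function, assuming hypothetical queue/topic names and the same Microsoft.Azure.ServiceBus Message type used in the question:

public static class SortingFunction
{
    // Placeholder connection string and entity names; adjust to your setup.
    private const string ServiceBusConnectionString = "<connection-string>";
    private static readonly TopicClient standardTopic = new TopicClient(ServiceBusConnectionString, "standardtopic");
    private static readonly TopicClient priorityTopic = new TopicClient(ServiceBusConnectionString, "prioritytopic");

    [FunctionName("SortIncomingMessages")]
    public static async Task Run(
        [ServiceBusTrigger("sorting-queue", Connection = "ServiceBusConnection")] Message message)
    {
        // IsHighPriority stands in for whatever criteria you sort on.
        var target = IsHighPriority(message) ? priorityTopic : standardTopic;

        // A received message cannot be re-sent as-is, so clone the payload.
        await target.SendAsync(new Message(message.Body)
        {
            MessageId = message.MessageId,
            CorrelationId = message.CorrelationId
        });

        // The trigger completes (removes) the queue message automatically
        // when this function returns without throwing.
    }

    private static bool IsHighPriority(Message message) =>
        message.UserProperties.ContainsKey("Priority"); // illustrative criterion
}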

Entity Framework - Should I edit an object in a function, or after a function completes

I am coding an MVC 5 internet application where I retrieve many Account objects that need emails sent to them, and then I send the emails. After the emails have been sent, I need to update a DateTime field in each Account object to store a value showing that the email has been sent.
Here is my code:
public async Task SendDailyExpirationEmails(int dayInterval)
{
    IEnumerable<Account> freeTrialAccounts = GetFreeTrialAccountsForSendDailyExpirationEmails(dayInterval).ToList();
    IEnumerable<Account> paidServiceAccounts = GetPaidServiceAccountsForSendDailyExpirationEmails(dayInterval).ToList();
    await SendFreeTrialSubscriptionExpirationEmails(freeTrialAccounts);
    await SendPaidSubscriptionExpirationEmails(paidServiceAccounts);
}
The send-email functions, for both the freeTrialAccounts and the paidServiceAccounts, use a foreach loop to iterate over each Account in the IEnumerable.
My question is this:
Should I update the DateTime field after both the SendEmail functions have been completed or within the SendEmail functions?
Is there a common coding practice for this situation?
Thanks in advance.
To keep the DateTime value as accurate as possible while reducing database calls, you will want to make a record of it as soon as each email has been sent, but wait to persist that information until your email process has completed.
I would say have a class property keep track of when each email was sent and then, once all emails have been sent, make your call(s) to the database to update the sent date/time(s).
That said, if you have some other job/task/application that relies on that information as soon as possible, then you will need to persist the data as soon as each email is sent. Otherwise, I don't see a problem with delaying it.
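A minimal sketch of that pattern, assuming the Account entities are tracked by an injected database context (the sender and field names here are illustrative):

private async Task SendExpirationEmails(IEnumerable<Account> accounts)
{
    // Record each send time in memory as soon as the email goes out...
    var sentAt = new Dictionary<Account, DateTime>();

    foreach (var account in accounts)
    {
        await _emailService.SendExpirationEmailAsync(account); // hypothetical sender
        sentAt[account] = DateTime.UtcNow;
    }

    // ...then persist the whole batch in a single database round trip.
    foreach (var pair in sentAt)
        pair.Key.ExpirationEmailSentDateTime = pair.Value; // assumed name of the DateTime field

    await _databaseContext.SaveChangesAsync();
}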

ViewState in an .ashx handler?

I've got a handler (list.ashx, for example) that has a method which retrieves a large dataset, then grabs only the records that will be shown on any given "page" of data. We are allowing the users to sort these results. So, on any given page run, I will be retrieving a dataset that I just got a few seconds or minutes ago, but reordering it, or showing the next page of data, etc.
My point is that my dataset really hasn't changed. Normally, the dataset would be stuck into the viewstate of a page, but since I'm using a handler, I don't have that convenience. At least I don't think so.
So, what is a common way to store the viewstate associated with a current user's given page when using a handler? Is there a way to take the dataset, encode it somehow and send that back to the user, and then on the next call, pass it back and then rehydrate a dataset from those bits?
I don't think Session would be a good place to store it, since we might have 1000 users all viewing different datasets, and that could bring the server to its knees. At least I think so.
Does anyone have any experience with this kind of situation, and can you give me any advice?
In this situation I would use a cache, with some type of user and query info as the key. The reason being: you say it is a large dataset, and that is exactly what you don't want to be pushing up and down the pipe constantly. Remember, your server still has to receive the data if it is in ViewState, and handle it. I would do something like this, which caches the data for a specific user and has a short expiry:
public DataSet GetSomeData(string user, string query, string sort)
{
    // You could make the key based on just the query params, but figured
    // you would want the user in there as well.
    // You could use just the user if you want to limit it to one cached item
    // per user too.
    string key = string.Format("{0}:{1}", user, query);
    DataSet ds = HttpContext.Current.Cache[key] as DataSet;
    if (ds == null)
    {
        // Need to reload or get the data.
        ds = LoadMyData(query);
        // Now store it and make the expiry short so it doesn't bog up your server
        // needlessly... worst case you have to retrieve it again because the data
        // has expired.
        HttpContext.Current.Cache.Insert(key, ds, null,
            DateTime.UtcNow.AddMinutes(yourTimeout),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    // Perform the sort or leave the default ordering, and return.
    return string.IsNullOrEmpty(sort) ? ds : SortMyDataSet(ds, sort);
}
When you say 1000s of users, do you mean concurrent users? If your expiration time was one minute, how many concurrent users would make that call in a minute and require sorting? I think offloading the data to something similar to ViewState just trades some cache memory for the bandwidth and processing load of large requests going back and forth. The less you have to transmit back and forth, the better, in my opinion.
Why don't you implement server-side caching?
As I understand it, you're retrieving a large amount of data and then returning only the necessary records from it to different clients. So you could use the HttpContext.Current.Cache property for this.
For example, a property which encapsulates the data-retrieval logic (get from the original data store on the first request, then put into the cache and get from the cache on every subsequent request) could be used. In this case all the necessary data manipulations (paging, etc.) can be done much more quickly than retrieving the large amount of data with each request.
In the case where clients have different data sources (meaning each client has its own data source), the solution above may also be implemented. I suppose each client at least has an identifier, so you could use different cache entries for different clients (with the client identifier as part of the cache key).
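A minimal sketch of such a property, assuming a hypothetical LoadAllData() call that hits the original data store:

private static DataSet CachedData
{
    get
    {
        var cache = HttpContext.Current.Cache;
        var ds = cache["MyDataKey"] as DataSet; // "MyDataKey" is illustrative

        if (ds == null)
        {
            ds = LoadAllData(); // hypothetical call to the original data store
            cache.Insert("MyDataKey", ds, null,
                DateTime.UtcNow.AddMinutes(5), // short absolute expiry
                System.Web.Caching.Cache.NoSlidingExpiration);
        }

        return ds;
    }
}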
The best you could do is "grow your own" by including the serialized data set in the body of the request to the ASHX handler. Your handler would then check whether the request does indeed have a body by checking Request.ContentLength and then reading from Request.InputStream; if it does, it deserializes that body back into the data set instead of reading from your database.
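A minimal sketch of that round trip inside the handler, using DataSet's built-in XML serialization (LoadMyData is hypothetical, and note the payload can get large):

public void ProcessRequest(HttpContext context)
{
    DataSet ds;

    if (context.Request.ContentLength > 0)
    {
        // Rehydrate the data set the client sent back from the previous call.
        ds = new DataSet();
        ds.ReadXml(context.Request.InputStream);
    }
    else
    {
        ds = LoadMyData(context.Request.QueryString["query"]); // hypothetical loader
    }

    // ... sort/page ds here and render the current page ...

    // Echo the data set back so the client can return it on the next call.
    ds.WriteXml(context.Response.OutputStream);
}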
