When does Rebus unsubscribe? Is there anything that unsubscribes a subscriber automatically after it hasn't been running for a period of time?
I have had some situations where it seems that when the subscriber service has been stopped for a period of time, the publisher stops sending messages its way. What might I be doing wrong?
Rebus unsubscribes when you call bus.Unsubscribe<SomeMessage>().
If you experience that a subscriber suddenly no longer receives published messages, it is most likely because your publisher is "forgetting" the subscriptions somehow.
Did you start the publisher with the default in-mem subscription storage? Because that will not survive a restart of the publisher.
You will almost always be interested in having some way of actually persisting the subscriptions, using e.g. SQL Server to do it:
Configure.With(...)
    .(...)
    .Subscriptions(s => s.StoreInSqlServer(connectionString, "RebusSubscriptions")
                         .EnsureTableIsCreated())
    .(...)
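For reference, a full publisher configuration along those lines could look roughly like this (a sketch against the classic Rebus 1.x fluent API; the queue names and the connection string are placeholders):

    using Rebus.Configuration;
    using Rebus.Transports.Msmq;

    // Sketch: a publisher whose subscriptions survive restarts because they
    // are stored in SQL Server instead of the default in-mem storage.
    var adapter = new BuiltinContainerAdapter();

    Configure.With(adapter)
        .Transport(t => t.UseMsmq("publisher.input", "publisher.error"))
        .Subscriptions(s => s.StoreInSqlServer(connectionString, "RebusSubscriptions")
                             .EnsureTableIsCreated())
        .CreateBus()
        .Start();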
If you still experience what seems to be a randomly forgotten subscriber, would it by any chance be because you've renamed or moved the event class?
The subscription is stored in the publisher's subscription storage as an (eventType, subscriberInputQueue) tuple, but how the eventType is actually stored may vary depending on the chosen subscription storage. I can see that the XML subscription storage uses the type's assembly-qualified name as its key, whereas the SQL Server subscription storage uses the type's FullName as the key - in other words, the chosen subscription storage may behave slightly differently (which I admit is not optimal, but it's a consequence of the way the abstraction is designed).
When the publisher publishes, it will ask the subscription storage for the subscribers for a given event type, so it is this lookup that for some reason does not always return your subscriber's input queue.
Since the XML subscription storage uses the assembly-qualified name, it will also be sensitive to version changes of your messages assembly.
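To see the difference between the two keys, compare the values for the same event type (a tiny stand-alone illustration; SomeEvent is a made-up type):

    using System;

    namespace MyMessages
    {
        public class SomeEvent { }
    }

    class KeyDemo
    {
        static void Main()
        {
            // FullName stays the same when the assembly is re-versioned...
            Console.WriteLine(typeof(MyMessages.SomeEvent).FullName);
            // -> MyMessages.SomeEvent

            // ...whereas AssemblyQualifiedName embeds Version, Culture and
            // PublicKeyToken, so it changes with every version bump:
            Console.WriteLine(typeof(MyMessages.SomeEvent).AssemblyQualifiedName);
            // -> something like: MyMessages.SomeEvent, <assembly>, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
        }
    }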
I hope this gives you an indication of why you're experiencing dropped subscriptions :)
We have a simple application which, upon every update of an entity, sends out a notification to SNS (it could just as well have been any other queueing system). Clients listen to these notifications and do a GET of the updated entity based on them.
The problem we are facing is that when clients do a GET, sometimes the data has not yet replicated, and we return a 404 or, even worse, stale data.
How can we mitigate this while sending notifications?
Here are a few strategies to mitigate this, with pros and cons:
Instead of sending notifications from the application, send them using database streams
For example, DynamoDB Streams and AWS Lambda. This pattern can also be useful in multi-region deployments, where all the publishers and subscribers subscribe to their regional database streams. The atomicity of writing to the database and sending the message is preserved, and we won't lose events in the case of a regional failure.
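As a rough sketch of that pattern (assuming a C# Lambda function wired to the table's stream; the topic ARN and the "Id" key name are made up):

    using System.Threading.Tasks;
    using Amazon.Lambda.Core;
    using Amazon.Lambda.DynamoDBEvents;
    using Amazon.SimpleNotificationService;
    using Amazon.SimpleNotificationService.Model;

    public class StreamFanOut
    {
        private static readonly IAmazonSimpleNotificationService Sns =
            new AmazonSimpleNotificationServiceClient();

        // The stream invokes this only after the write has committed,
        // so the notification can never outrun the database.
        public async Task Handler(DynamoDBEvent streamEvent, ILambdaContext context)
        {
            foreach (var record in streamEvent.Records)
            {
                var entityId = record.Dynamodb.Keys["Id"].S; // assumes "Id" is the table key
                await Sns.PublishAsync(new PublishRequest
                {
                    TopicArn = "arn:aws:sns:us-east-1:123456789012:entity-updates", // placeholder
                    Message = $"{{\"entityId\":\"{entityId}\",\"event\":\"{record.EventName}\"}}"
                });
            }
        }
    }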
Send delayed messages to your broker
Some brokers, like ActiveMQ and SQS, support this functionality, but SNS does not. A workaround could be writing to an SQS queue which then writes to SNS. This might be a good option when your database does not support streams.
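A sketch of that workaround (the queue URL and delay value are illustrative; a consumer of this queue would then publish to SNS):

    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();

    // Delay delivery long enough for replication to have (very likely) completed.
    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/entity-updates", // placeholder
        MessageBody = "{\"entityId\":\"42\"}",
        DelaySeconds = 30 // SQS supports per-message delays of 0-900 seconds
    });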
Send a special error code for retry-able GETs
Since we know that eventual consistency is in play, we can return a special error code to clients so that they can retry based on it. The retry strategy should use exponential backoff, but this may mean handing your problem off to the clients. You should also have some sort of versioning in place.
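On the client side, such a retry could look roughly like this (the retry-able status code, attempt count, and delays are all assumptions):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    static async Task<string> GetWithBackoffAsync(HttpClient http, string url)
    {
        for (var attempt = 0; attempt < 5; attempt++)
        {
            var response = await http.GetAsync(url);

            // Treat 404 (or a dedicated "not replicated yet" code) as retry-able.
            if (response.StatusCode != HttpStatusCode.NotFound)
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }

            // Exponential backoff: 100 ms, 200 ms, 400 ms, 800 ms, ...
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
        }

        throw new TimeoutException($"Entity at {url} was not visible after retries.");
    }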
Fetch from another region
If the entity is not found in the same region, the application can go to another region or to the master database to fetch it. Note: don't do this, as it is an anti-pattern; I am mentioning it here only for the sake of completeness.
Send the full entity in message
If the entities to be fetched by the REST service are small and there are no security constraints around who can access what, we can send the full entity in the message. This ensures that clients don't have to do an explicit fetch every time a new message arrives.
Before I get to my question, let me sketch out a sample set of microservices to illustrate my dilemma.
Scenario outline
Suppose I have 4 microservices:
An activation service where features supplied to our customers are (de)activated. A registration service where members can be added and changed. A secured key service that can generate secure keys (in a multi-step process) for members, to be used when communicating about them with the outside world. And a communication service that is used to communicate about our members with external vendors.
The secured key service may, however, only request secured keys if that feature is activated. Additionally, the communication service may only communicate about members that have a secured key AND if the communication feature itself is activated.
Because they are microservices, each of the services has its own datastore and is completely self-sufficient. That is, any data that is required from the other microservices is duplicated locally and kept in sync by means of asynchronous messages from the other microservices.
The dilemma
I'm actually facing two main dilemmas. The first is (pretty obviously) data synchronization. When there are multiple data stores that need to be kept in sync, you have to account for messages getting lost or processed out of order. But there are plenty of out-of-the-box solutions for this, and when all fails you could even fall back to some kind of ETL process to keep things in sync.
The main issue I'm facing however is the actions that need to be performed. In the above example the secured key service must perform an action when it either
Receives a message from the registration service for a new member when it already knows that the secured keys feature is active in the activation service
Receives a message from the activation service that the secured keys feature is now active when it already knows about members from the registration service
In both cases this means that a message from the external system must lead to both an update in the local copy of the data as well as some logic that needs to be processed.
The question
Now to the actual question :)
What is the recommended way to cope with either bugs or new insights when it comes to handling those messages? Suppose there is a bug in the message handler from the activation service. The handler does update the internal data structure, but it fails to detect that there are already registered members and thus never starts the secure key generation process. Alternatively it could be that there's no bug, but we decide that there is something else we want the handler to do.
The system will have no reason to resubmit or reprocess messages (as the message didn't fail), but there's no real way for us to re-trigger the behavior that's behind the message.
I hope it's clear what I'm asking (and I do apologize if it should be posted on any of the other 170 Stack... sites, I only really know of StackOverflow)
I don't know what the recommended way is, but I know how this is done in DDD, and maybe that can help you, as DDD and microservices are friends.
What you have is a long-running/multi-step process that involves information from multiple microservices. In DDD this can be implemented using a Saga/Process manager. The Saga maintains a local state by subscribing to events from both the registration service and the activation service. As the events come in, the Saga checks to see if it has all the information it needs to generate secure keys by submitting a CreateSecureKey command. The events may come in any order and can even be duplicated, but this is not a problem, as the Saga can compensate for this.
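To illustrate, the Saga's state and decision logic could look something like this (a framework-agnostic sketch; all type and member names are made up):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative event and command contracts.
    public record SecureKeysFeatureActivated();
    public record MemberRegistered(Guid MemberId);
    public record CreateSecureKey(Guid MemberId);

    public class SecureKeySaga
    {
        private bool featureActive;
        private readonly HashSet<Guid> knownMembers = new();
        private readonly HashSet<Guid> keyedMembers = new();

        public void Handle(SecureKeysFeatureActivated evt)
        {
            featureActive = true; // idempotent: a duplicate event changes nothing
            TryGenerateKeys();
        }

        public void Handle(MemberRegistered evt)
        {
            knownMembers.Add(evt.MemberId); // the HashSet absorbs duplicates
            TryGenerateKeys();
        }

        // Fires whenever the saga has everything it needs, regardless of
        // the order in which the two kinds of events arrived.
        private void TryGenerateKeys()
        {
            if (!featureActive) return;

            foreach (var memberId in knownMembers.Except(keyedMembers).ToList())
            {
                Dispatch(new CreateSecureKey(memberId));
                keyedMembers.Add(memberId);
            }
        }

        private void Dispatch(CreateSecureKey command)
        {
            // In a real system this would be sent on the bus; here it is a stub.
        }
    }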
In case of bugs or new features, you could create special scripts or other processes that search for a particular situation and handle it by submitting specific compensating commands, without reprocessing all the past events.
In the case of new features, you may even have to process old events that have now become interesting for your business process. You do this in the same way: by querying the event source for the newly interesting old events and sending them to the newly updated Saga. After that import process, you subscribe the Saga to these newly interesting events, and the Saga continues to function as usual.
We're developing an agenda on our platform. We implemented a feature to sync with Google Agenda, which works correctly except that it only works when the calendar is public, not when it's private.
We implemented everything as Google describes and use the OAuth2 protocol.
We are migrating to https and hope that this will solve our issue.
Do you have any idea why it's blocked when the agenda is private?
You can implement synchronization by sending an HTTP request:
GET https://www.googleapis.com/calendar/v3/calendars/calendarId/events
and adding path parameters and optional query parameters as shown in Events: list.
In addition to that, referring to Synchronize Resources Efficiently, you can keep data for all calendar collections in sync while saving bandwidth by using the "incremental synchronization".
As highlighted in the documentation:
A sync token is a piece of data exchanged between the server and the client, and has a critical role in the synchronization process.
As you may have noticed, the sync token plays a major part in both stages of incremental synchronization. Make sure to store this syncToken for the next sync request. As discussed:
Initial full sync is performed once at the very beginning in order to fully synchronize the client’s state with the server’s state. The client will obtain a sync token that it needs to persist.
Incremental sync is performed repeatedly and updates the client with all the changes that happened ever since the previous sync. Each time, the client provides the previous sync token it obtained from the server and stores the new sync token from the response.
More information and examples on how to synchronize efficiently can be found in the linked documentation.
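To make the two stages concrete, here is a rough sketch over the raw Events: list endpoint (authentication and token persistence are left out; note that an expired token is signalled with a 410 GONE, which forces a new full sync):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    static async Task<string> SyncAsync(HttpClient http, string calendarId, string syncToken)
    {
        // Without a syncToken this is the initial full sync; with one it is incremental.
        var url = $"https://www.googleapis.com/calendar/v3/calendars/{calendarId}/events";
        if (syncToken != null)
            url += "?syncToken=" + Uri.EscapeDataString(syncToken);

        var response = await http.GetAsync(url);

        // An expired token comes back as 410 GONE: drop it and do a full sync again.
        if (response.StatusCode == HttpStatusCode.Gone)
            return await SyncAsync(http, calendarId, null);

        response.EnsureSuccessStatusCode();

        // Parse the events from the body, follow nextPageToken pages, and
        // persist the nextSyncToken from the last page for the next call.
        return await response.Content.ReadAsStringAsync();
    }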
Could you validate my approach for using Firebase to manage a user notification system?
Basically I want to have user specific channels as well as more general channels which hold notifications. These notifications would appear on an intranet if the user has not viewed them before.
The idea being that a server-side action will update Firebase endpoints using the REST API, either for a specific user or as a broadcast to everyone. The specific user messages I can easily mark as read and therefore not show again; it's the general broadcasts I am struggling with slightly.
I could add a flag (user ID) to the general broadcast to indicate it has been read, but I am concerned about performance, as the client would have to check historic broadcast messages for the existence of this flag. I could instead use the user ID to create a new endpoint, which should be quicker.
e.g. /notification/general/ contains the message; this triggers the client, which then checks whether /users/USERID/MessageID exists; if it doesn't, it displays the message and creates this endpoint.
Is there something I am missing or is that the best approach?
Are the messages always consumed in order? If so, you could have each client remember the ID of the last message it read in each public channel. You could then use "startAt" on the queue to limit it to only new messages.
If they're not consumed in order, then you'll need some way of storing data about which ones were read and which ones weren't. Perhaps you can have each message get sent out to everyone's personal queue, and then have each user remove read messages.
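A sketch of the in-order variant over the Firebase REST API (the database URL and channel path are made up; note that startAt is inclusive, so the client should skip the first entry):

    using System.Net.Http;
    using System.Threading.Tasks;

    // Fetch only messages at or after the last one this client read,
    // using an ordered key-range query.
    static Task<string> FetchNewMessagesAsync(HttpClient http, string lastReadId)
    {
        var url = "https://example-project.firebaseio.com/notification/general.json"
                + "?orderBy=\"$key\"&startAt=\"" + lastReadId + "\"";
        return http.GetStringAsync(url);
    }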
Since there are already individual user messages, why not just deliver the broadcasts to everyone individually (think email), rather than trying to store a single copy and figure out who read it?
To reduce bulk, you could store the message content separately, and simply store the ids in a user's queue. Then when they are viewed, you flag them per-user without any additional complexity.
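A rough sketch of that scheme over the Firebase REST API (the database URL and paths are made up):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    // Store the message body once, fan only the id out to each user's queue.
    static async Task BroadcastAsync(HttpClient http, string messageId, string json, string[] userIds)
    {
        var db = "https://example-project.firebaseio.com";

        // 1. Store the message content a single time.
        await http.PutAsync($"{db}/messages/{messageId}.json",
            new StringContent(json, Encoding.UTF8, "application/json"));

        // 2. Push only the id into every user's personal queue.
        foreach (var userId in userIds)
            await http.PutAsync($"{db}/users/{userId}/queue/{messageId}.json",
                new StringContent("true", Encoding.UTF8, "application/json"));
    }

    // When a user views a message, the client flags or deletes the id in its
    // own queue, e.g. DELETE {db}/users/{userId}/queue/{messageId}.json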
At 100k users receiving 100 messages a day including broadcasts, with a standard Firebase ID (around 20 characters), that comes out to roughly 200,000,000 characters a day (i.e. nothing for a database, and probably still far less than the actual bulk of storing the message bodies), assuming the entries never expire and get deleted.
I have been recently looking into NServiceBus, as I thought messaging would be a good way to reduce dependencies between systems. However, one of the first things that struck me is that the message publisher and all subscribers have to share the message definition DLL. What would happen in this scenario?:
Say there is one central system that handles client data. Whenever a client record is changed, it publishes a message containing the name and address. This has ten subscribers, which update their local copy of the data on receipt of the message.
One day, requirements change and one of the subscribers needs the client's phone number as well. The message, the publisher, and the affected subscriber are all updated to handle the phone number, and they are all recompiled and released.
Will all nine other subscribers continue unaffected? Will they carry on as normal with the old Message DLL, or will they all need to be updated with the new DLL, recompiled and released as well?
The NServiceBus architecture is designed to be resilient to message structure changes (especially where the changes involve adding information like in your scenario). See the Versioning Sample page on the NServiceBus site.
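For illustration, the approach the Versioning Sample describes boils down to modelling the new message version as an interface that inherits the old one, so subscriptions against the old contract are still satisfied (a sketch; the names are made up):

    // V1 contract: what the nine untouched subscribers know about.
    public interface IClientChanged
    {
        string Name { get; set; }
        string Address { get; set; }
    }

    // V2 contract: inherits V1, so an event published as V2 also satisfies
    // every subscription made against the V1 interface.
    public interface IClientChangedV2 : IClientChanged
    {
        string PhoneNumber { get; set; }
    }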
It is not the case that you can handle versioning in NSB as outlined in the Versioning Sample.
You can do this if you are implementing NSB in a send/receive scenario. In this instance, even though the contract is a messages DLL, the same DLL version does not need to be shared between senders and receivers: provided the XML on the wire de-serializes cleanly on the receiving end, all will be well.
However, this completely breaks down in a pub-sub scenario. In this case, there is a dependency on the exact same version of the messaging assembly being shared between publisher and subscribers. This means the version, public key token, etc. all need to be identical. The reason for this is the subscription mechanism.
When your subscriber starts up it will send a subscription message to the publisher, who will then enter the subscription in the subscription data store. This subscription is for messages originating in a specific assembly version.
If the publisher then updates its version of the messages DLL and receives a message which it needs to publish, it will do a lookup against the subscriptions it holds and evaluate each one in turn. Because the subscription exists for a previous version of the messages assembly, the evaluation process will ignore that subscription entry, and therefore no message will be sent to the subscriber.
You need to be aware of this hard-dependency in the pub-sub scenario.
Hope this helps.
Edit
As of version 3.x of NServiceBus, as long as the major version of your messages assembly is shared between publisher and subscriber, pub-sub will work as normal.