NServiceBus: Sharing Message DLLs

I have recently been looking into NServiceBus, as I thought messaging would be a good way to reduce dependencies between systems. However, one of the first things that struck me is that the message publisher and all subscribers have to share the message definition DLL. What would happen in this scenario?
Say there is one central system that handles client data. Whenever a client record is changed, it publishes a message, containing name and address. This has ten subscribers, which update their local copy of the data on receipt of the message.
One day, requirements change and one of the subscribers needs the client's phone number as well. The message, the publisher, and the affected subscriber are all updated to handle the phone number, and they are all recompiled and released.
Will all nine other subscribers continue unaffected? Will they carry on as normal with the old Message DLL, or will they all need to be updated with the new DLL, recompiled and released as well?

The NServiceBus architecture is designed to be resilient to message structure changes (especially where the changes involve adding information like in your scenario). See the Versioning Sample page on the NServiceBus site.

It is not always the case that you can handle versioning in NSB the way the Versioning Sample outlines.
You can do this if you are implementing NSB in a send/receive scenario. In that case, even though the contract is a messages DLL, the same DLL version does not need to be shared between senders and receivers: provided the XML on the wire deserializes cleanly on the receiving end, all will be well.
However, this completely breaks down in a pub-sub scenario. In this case, there is a dependency on the exact same version of the messaging assembly being shared between publisher and subscribers. This means the version, public key token, etc. all need to be identical. The reason for this is the subscription mechanism.
When your subscriber starts up, it will send a subscription message to the publisher, which will then enter the subscription in the subscription data store. This subscription is for messages originating in a specific assembly version.
If the publisher then updates its version of the messages DLL and receives a message which it needs to publish, it will do a lookup against the subscriptions it holds and evaluate each one in turn. Because the subscription exists for a previous version of the messages assembly, the evaluation process will ignore that subscription entry, and therefore no message will be sent to the subscriber.
You need to be aware of this hard-dependency in the pub-sub scenario.
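To make the failure mode concrete, here is a minimal sketch of a subscription store that keys subscriptions by a version-qualified type name, the way the mechanism above behaves. This is illustrative Python, not NServiceBus's actual implementation; all names are hypothetical.

```python
def type_key(type_name, assembly, version):
    """Build a version-qualified key, analogous to an assembly-qualified name."""
    return f"{type_name}, {assembly}, Version={version}"

class SubscriptionStore:
    def __init__(self):
        self._subs = {}  # key -> set of subscriber input queues

    def subscribe(self, key, queue):
        self._subs.setdefault(key, set()).add(queue)

    def subscribers_for(self, key):
        # Exact-match lookup: a key built from a newer assembly
        # version silently matches nothing.
        return self._subs.get(key, set())

store = SubscriptionStore()
# Subscriber registered against v1.0.0.0 of the messages assembly.
store.subscribe(type_key("Messages.ClientChanged", "Messages", "1.0.0.0"), "subscriber1")

# Publisher still on v1.0.0.0 finds the subscriber...
old = store.subscribers_for(type_key("Messages.ClientChanged", "Messages", "1.0.0.0"))
# ...but after upgrading its messages DLL to v2.0.0.0, the lookup finds nobody.
new = store.subscribers_for(type_key("Messages.ClientChanged", "Messages", "2.0.0.0"))
```

The subscriber is still running and still subscribed; it simply stops receiving messages because the publisher's lookup key no longer matches the stored one.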
Hope this helps.
Edit
As of version 3.x of NServiceBus, as long as the major version of your messages assembly is shared between publisher and subscriber, pub-sub will work as normal.

Related

How to handle data replication lag and event notification

We have a simple application which, upon every update of an entity, sends out a notification to SNS (it could just as well have been any other queuing system). Clients listen for these notifications and, on receipt, do a GET of the updated entity.
The problem we are facing is that when clients do the GET, sometimes the data has not yet replicated and we return a 404, or sometimes stale data (even worse).
How can we mitigate this while sending notifications?
Here are a few strategies to mitigate this, with pros and cons:
Instead of sending the notification from the application, send it using database streams
For example, DynamoDB Streams and AWS Lambda. This pattern can also be useful in a multi-region deployment, where each subscriber and publisher subscribes to its regional database stream. The atomicity of sending the message and writing to the database is preserved, and we won't lose events in the case of a regional failure.
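As a minimal sketch of the streams approach: a Lambda-style handler that turns each stream record into a notification. The record field names follow the DynamoDB Streams event shape, but this is simplified, and `publish` stands in for the real SNS client call.

```python
import json

def record_to_notification(record):
    """Turn a DynamoDB-Streams-style record into a notification payload."""
    new_image = record["dynamodb"]["NewImage"]
    return json.dumps({
        "event": record["eventName"],        # INSERT / MODIFY / REMOVE
        "entity_id": new_image["id"]["S"],
    })

def handler(event, publish):
    """Lambda-style handler: emit one notification per stream record.
    `publish` stands in for an SNS client call such as sns.publish."""
    for record in event["Records"]:
        publish(record_to_notification(record))
```

Because the stream record is only created after the write has committed, the notification can never race ahead of the database write the way an application-side publish can.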
Send delayed messages to your broker
Some brokers like ActiveMQ and SQS support this functionality, but SNS does not. A workaround could be writing to an SQS queue which then forwards to SNS. This might be a good option when your database does not support streams.
Send a special error code for retryable GETs
Since we know eventual consistency is at play, we can return a special error code to clients so that they can retry based on it. The retry strategy should use exponential backoff, but this may mean pushing your problems onto the clients. We should also have some sort of versioning in place.
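A minimal sketch of the client side of this strategy, assuming the service reserves a status code for "not replicated yet" (the code below uses 409 purely as a placeholder; the actual code is whatever your API defines):

```python
import time

RETRY_STATUS = 409  # hypothetical "not replicated yet" code your API would reserve

def get_with_backoff(fetch, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry `fetch` with exponential backoff while the service signals
    that replication has not caught up yet."""
    for attempt in range(max_attempts):
        status, body = fetch()
        if status != RETRY_STATUS:
            return status, body
        sleep(base_delay * 2 ** attempt)  # 0.1s, 0.2s, 0.4s, ...
    return status, body
```

`fetch` is any zero-argument callable returning a `(status, body)` pair; injecting `sleep` keeps the retry logic testable without real delays.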
Fetch from another region
If the entity is not found in the same region, the application can go to another region or the master database to fetch it. NOTE: Don't do this, as it is an anti-pattern. I mention it here only for the sake of completeness.
Send the full entity in the message
If the entities to be fetched by the REST service are small and there are no security constraints around who can access what, we can send the full entity in the message. This ensures that clients don't have to do an explicit fetch every time a new message arrives.
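A sketch of what such a message could look like: the full entity plus a monotonically increasing version, so a subscriber can discard stale or out-of-order messages instead of ever needing to fetch. The envelope shape is an assumption for illustration.

```python
import json

def make_message(entity, version):
    """Envelope carrying the full entity plus a monotonically increasing
    version number."""
    return json.dumps({"version": version, "entity": entity})

def apply_message(local, message):
    """Apply the message to the local copy only if it is newer."""
    payload = json.loads(message)
    if payload["version"] > local.get("version", -1):
        return {"version": payload["version"], "entity": payload["entity"]}
    return local  # stale or duplicate: keep what we have
```

Carrying the version also covers the out-of-order delivery case: a late-arriving older update is simply ignored.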

Understanding the Pact Broker Workflow

Knowing full well there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert the Consumers if the Provider fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and did not completely miss some documentation, I'd gladly write up a suggested workflow document for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker (if configured to do so)
Now that there's a new pact, the broker (if configured) can trigger a provider build
The provider's CI infrastructure builds the provider, and runs the pact verification
The provider's CI infrastructure (if configured) tells the broker about the verification result.
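The steps above can be sketched as a small simulation. All names here are hypothetical; in a real setup the webhook and build steps are CI jobs and Pact Broker webhooks, not in-process callbacks.

```python
PROVIDER_ROUTES = {"GET /clients/1": 200}  # what the provider actually serves (stub)

def verify(contract):
    """Stub verification: every interaction in the pact must be honoured."""
    return all(PROVIDER_ROUTES.get(request) == status
               for request, status in contract.items())

class Broker:
    def __init__(self):
        self.pacts = {}          # (consumer, provider) -> contract
        self.verifications = {}  # (consumer, provider) -> bool
        self.on_new_pact = None  # stands in for a webhook to provider CI

    def publish_pact(self, consumer, provider, contract):
        self.pacts[(consumer, provider)] = contract
        if self.on_new_pact:                  # new pact triggers a provider build
            self.on_new_pact(consumer, provider)

def provider_build(broker, consumer, provider):
    """Provider CI: fetch the pact, verify it, report the result back."""
    contract = broker.pacts[(consumer, provider)]
    broker.verifications[(consumer, provider)] = verify(contract)

broker = Broker()
broker.on_new_pact = lambda c, p: provider_build(broker, c, p)
broker.publish_pact("web-ui", "client-api", {"GET /clients/1": 200})
```

Note that, matching the description above, the result flows only from provider CI back to the broker; nothing in the loop pushes it to the consumer.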
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
If the consumer's tests pass, the consumer can say "I've written this communication contract and confirmed that I can hold up my side of it". Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might like to know about - especially if you're unable to see the provider's CI infrastructure - you might like to check out the pact build badges, which are a lighter way of checking the broker.

What's the recommended way to handle microservice processing bugs new insights?

Before I get to my question, let me sketch out a sample set of microservices to illustrate my dilemma.
Scenario outline
Suppose I have 4 microservices:
An activation service where features supplied to our customers are (de)activated. A registration service where members can be added and changed. A secured key service that is able to generate secure keys (in a multi-step process) for members, to be used when communicating about them with the outside world. And a communication service that is used to communicate about our members with external vendors.
The secured key service may however only request secured keys if this is a feature that is activated. Additionally, the communication service may only communicate about members that have a secured key AND if the communication feature itself is activated.
Because they are microservices, each of the services has its own datastore and is completely self-sufficient. That is, any data that is required from the other microservices is duplicated locally and kept in sync by means of asynchronous messages from the other microservices.
The dilemma
I'm actually facing two main dilemmas. The first is (pretty obviously) data synchronization. When there are multiple data stores that need to be kept in sync, you have to account for messages getting lost or processed out of order. But there are plenty of out-of-the-box solutions for this, and when all else fails you could even fall back to some kind of ETL process to keep things in sync.
The main issue I'm facing however is the actions that need to be performed. In the above example the secured key service must perform an action when it either
Receives a message from the registration service for a new member when it already knows that the secured keys feature is active in the activation service
Receives a message from the activation service that the secured keys feature is now active when it already knows about members from the registration service
In both cases this means that a message from the external system must lead to both an update in the local copy of the data as well as some logic that needs to be processed.
The question
Now to the actual question :)
What is the recommended way to cope with either bugs or new insights when it comes to handling those messages? Suppose there is a bug in the message handler for the activation service. The handler does update the internal data structure, but it fails to detect that there are already registered members and thus never starts the secure key generation process. Alternatively, it could be that there's no bug, but we decide that there is something else we want the handler to do.
The system will have no reason to resubmit or reprocess messages (as the message didn't fail), but there's no real way for us to re-trigger the behavior that's behind the message.
I hope it's clear what I'm asking (and I do apologize if it should be posted on any of the other 170 Stack... sites, I only really know of StackOverflow)
I don't know what is the recommended way, I know how this is done in DDD and maybe this can help you as DDD and microservices are friends.
What you have is a long-running/multi-step process that involves information from multiple microservices. In DDD this can be implemented using a Saga/Process manager. The Saga maintains local state by subscribing to events from both the registration service and the activation service. As the events come in, the Saga checks whether it has all the information it needs to generate secure keys; if so, it submits a CreateSecureKey command. The events may come in any order and can even be duplicated, but this is not a problem, as the Saga can compensate for this.
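A minimal sketch of such a Saga (hypothetical names; a real process manager would also persist its state): it accumulates events from both services and issues the command only once both preconditions hold, regardless of event order or duplication.

```python
class SecureKeySaga:
    """Process-manager sketch: collects events from the registration and
    activation services and issues a CreateSecureKey command per member
    once both preconditions hold."""
    def __init__(self, send_command):
        self.members = set()
        self.feature_active = False
        self.keys_requested = set()
        self.send_command = send_command

    def handle_member_registered(self, member_id):
        self.members.add(member_id)        # duplicates are absorbed by the set
        self._maybe_create_keys()

    def handle_feature_activated(self):
        self.feature_active = True
        self._maybe_create_keys()

    def _maybe_create_keys(self):
        if not self.feature_active:
            return
        for member in sorted(self.members - self.keys_requested):
            self.keys_requested.add(member)  # idempotent: never request twice
            self.send_command({"type": "CreateSecureKey", "member": member})
```

Whichever event arrives first, the command is sent exactly once per member, which is what makes replaying or re-importing old events (as described below for new features) safe.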
In case of bugs or new features, you could create special scripts or other processes that search for a particular situation and handle it by submitting specific compensating commands, without reprocessing all the past events.
In the case of new features, you may even have to process old events that are now interesting for your business process. You do this in the same way: by querying the event source for the newly interesting old events and sending them to the newly updated Saga. After that import process, you subscribe the Saga to these newly interesting events and the Saga continues to function as usual.

Use Service Bus as a push notification workaround (excluding Notification Hub)

I'm developing an on-site WinRT application and would like to send push notifications when a new update is available (since it's an on-site app, the installation process requires a custom loader rather than the Windows Store).
However, I'm trying to wrap my head around how such a system should ideally function.
I could obviously create a service which returns the latest version number, and the app would periodically poll the service for that info. It would be easy to implement, but it seems like a very ugly approach (constantly sending requests for the latest version doesn't seem elegant).
I have the power of Azure at my disposal. Obviously a Notification Hub would be the preferred way. HOWEVER, I have no intention of getting a Windows Store account purely to develop a "private" on-site application. So using the Notification Hub is a no-go.
I've thought about using topics. This WOULD work, but every client would probably need to be added as a subscriber. While technically possible, the administrative overhead could be a major issue. I also thought about having the WinRT application create the subscribers dynamically itself. However, the SDK for handling this (Azure Messaging Managed) is too old and always throws an error when creating subscribers (invalid date format, which apparently was changed at some point). The only "current" package from MS seems to be for the full .NET Framework, not the limited WinRT counterpart.
I'm looking for ways around this problem. Either a newer, proper SDK (which can handle listing and creating subscriptions as well as receiving topic), or perhaps a completely different approach to the problem.
Note - I don't need any code just yet. Pseudocode or simply a description on how the communication would work should suffice.
For a limited scale deployment, Service Bus Topics may quite well be a feasible choice. Or you may want to take a look at IoT Hub as an alternative push notification channel.
The new Service Bus client SDK for .NET Standard lives in this repo, but we do not build binaries as of yet: https://github.com/Azure/azure-service-bus-dotnet

Rebus unsubscribe automatically?

At what time does Rebus unsubscribe? Is there anything that unsubscribes automatically after a period of time if a subscriber hasn't been running?
I have had some situations where it seems that when the subscriber service has been stopped for a period of time, the publisher doesn't keep sending messages their way. What might I be doing wrong?
Rebus unsubscribes when you call bus.Unsubscribe<SomeMessage>().
If you experience that a subscriber suddenly no longer receives published messages, it is most likely because your publisher is "forgetting" the subscriptions somehow.
Did you start the publisher with the default in-mem subscription storage? Because that will not survive a restart of the publisher.
You will almost always be interested in having some way of actually persisting the subscriptions, using e.g. SQL Server to do it:
Configure.With(...)
    .(...)
    .Subscriptions(s => s.StoreInSqlServer(connectionString, "RebusSubscriptions")
        .EnsureTableIsCreated())
    .(...)
If you still experience what seems to be a randomly forgotten subscriber, would it by any chance be because you've renamed or moved the event class?
The subscription is stored in the publisher's subscription storage as an (eventType, subscriberInputQueue) tuple, but how the eventType is actually stored may vary depending on the chosen subscription storage. I can see that the XML subscription storage uses the type's assembly-qualified name as its key, whereas the SQL Server subscription storage uses the type's FullName as the key - in other words, the chosen subscription storage might have slightly different behavior (which I can see is not optimal, but it's a consequence of the way the abstraction is designed).
When the publisher publishes, it will ask the subscription storage for the subscribers for a given event type, so it is this lookup that for some reason does not always return your subscriber's input queue.
Since the XML subscription storage uses assembly-qualified names, it will also be sensitive to version changes of your messages assembly.
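To illustrate the difference between the two key styles (illustrative sketch, not Rebus's actual storage code): a key that embeds the assembly version stops matching after a version bump, while a FullName key survives it.

```python
def assembly_qualified_key(full_name, assembly, version):
    """Key style of an XML-file subscription storage (illustrative)."""
    return f"{full_name}, {assembly}, Version={version}"

def full_name_key(full_name):
    """Key style of a SQL-style subscription storage (illustrative)."""
    return full_name

# Subscriptions stored while the messages assembly was at version 1.0.0.0:
xml_store = {assembly_qualified_key("Events.OrderPlaced", "Events", "1.0.0.0"): {"orders"}}
sql_store = {full_name_key("Events.OrderPlaced"): {"orders"}}

# After bumping the assembly to 2.0.0.0, only the FullName key still matches:
xml_hit = xml_store.get(assembly_qualified_key("Events.OrderPlaced", "Events", "2.0.0.0"), set())
sql_hit = sql_store.get(full_name_key("Events.OrderPlaced"), set())
```

So the same renamed, moved, or re-versioned event class can look like a "randomly forgotten" subscriber under one storage and keep working under another.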
I hope this can give you an indication on why you're experiencing dropped subscriptions :)