How to get a specific message from an Azure Service Bus Topic and then delete it from the Topic? - .net-core

I'm writing functionality for receiving messages from an Azure Service Bus Topic and deleting a specified message from the Topic. Before deleting that message, I need to send it to another Topic.
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: WorkOrderNumber:{message.MessageId} SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
    Console.WriteLine("Enter the WorkOrder Number you want to delete:");
    string workOrderNumber = Console.ReadLine();
    if (message.MessageId == workOrderNumber)
    {
        // TODO: Post the message to the other (priority) topic, then delete it from the current topic.
        var status = await SendMessageToBus(message);
        if (status)
        {
            await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
            Console.WriteLine($"Successfully deleted your message from Topic:{NormalTopicName} - WorkOrderNumber:{message.MessageId}");
        }
        else
        {
            Console.WriteLine($"Failed to send message to PriorityTopic:{PriorityTopicName} - WorkOrderNumber:{message.MessageId}");
        }
    }
    else
    {
        Console.WriteLine($"Failed to delete your message from Topic:{NormalTopicName} - WorkOrderNumber:{workOrderNumber}");
        // Complete the message so that it is not received again.
        // This can be done only if the subscription client is created in ReceiveMode.PeekLock mode (which is the default).
        await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        // Note: Use the cancellation token passed in to determine whether the subscription client has already been closed.
        // If it has, you can choose not to call CompleteAsync() or AbandonAsync() to avoid unnecessary exceptions.
    }
}
My issues with this approach are:
It's not scalable; what if the message is the 50th in the collection? We'd have to receive and complete the preceding 49 messages first.
It's a long-running process.
To avoid these problems, I want to get the specified message from the queue based on an index or sequence number, and then delete it from the topic.
So, can anyone suggest how to resolve this problem?

So if I understand your questions and comments correctly, you are trying to do something like this:
Incoming messages come into either a standard topic or a priority topic.
Some process checks messages in the standard topic and "moves" them to the priority topic based on some criteria, by deleting them from the standard topic and adding them to the priority topic.
Messages are processed as normal.
As Sean noted, step 2 simply won't work. Service Bus is a first-in-first-out-ish system where a consumer simply picks up the next available message. You can sort through a queue by pulling out all the messages and abandoning/completing them based on specific criteria, but scaling is a problem. In addition, you can think of each topic subscription as its own separate queue: removing a message from one subscription does not remove it from any of the other subscriptions.
Instead of trying to pull everything out of the topics and then putting back the ones you want to keep, I would suggest adding a sorting queue in front of the two topics. If you don't need to sort the high-priority messages, you could put this sorting process in front of the standard-priority topic only.
This is how the process would work:
Incoming messages are added to a sorting queue. Note that this is a single queue, not a topic. At this point in the process we want to ensure there is only one copy of each message.
A sorting process moves messages from the sorting queue into either the standard or the priority topic, as appropriate. Using something like Azure Functions, you can scale this process fairly easily.
Messages are processed from the topics as normal.
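For illustration, here is a minimal sketch of step 2 as an Azure Function with Service Bus bindings, using the same Microsoft.Azure.ServiceBus Message type as the question. The queue/topic names, the connection setting name, and the "Priority" user property are assumptions for the example, not something from the original question:
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;

public static class SortingFunction
{
    // Fires once per message in the sorting queue and routes a copy of it
    // to either the standard topic or the priority topic.
    [FunctionName("SortIncomingMessages")]
    public static void Run(
        [ServiceBusTrigger("sorting-queue", Connection = "ServiceBusConnection")] Message message,
        [ServiceBus("normal-topic", Connection = "ServiceBusConnection")] out Message normalOutput,
        [ServiceBus("priority-topic", Connection = "ServiceBusConnection")] out Message priorityOutput)
    {
        normalOutput = null;
        priorityOutput = null;

        // "Priority" is a hypothetical user property; substitute whatever
        // criteria decide which topic a message belongs to.
        var isHighPriority = message.UserProperties.TryGetValue("Priority", out var value)
                             && (string)value == "High";

        // Clone so we forward a fresh message rather than the received one.
        if (isHighPriority)
            priorityOutput = message.Clone();
        else
            normalOutput = message.Clone();
    }
}
Because each message is handled exactly once from a single queue, there is no need to scan a topic or remove messages from other subscriptions after the fact.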

Related

Axon partial replay: how do I get a TrackingToken for the startPosition for the replay?

I want to replay my Axon events, not all of them but only part.
A full replay is up and running, but for a partial replay I need a TrackingToken startPosition for the method resetTokens(). My problem is how to get this token for the partial replay.
I tried GapAwareTrackingToken, but this does not work.
public void resetTokensWithRestartIndexFor(String trackingEventProcessorName, Long restartIndex) {
    eventProcessingConfiguration
            .eventProcessorByProcessingGroup(trackingEventProcessorName, TrackingEventProcessor.class)
            .filter(trackingEventProcessor -> !trackingEventProcessor.isReplaying())
            .ifPresent(trackingEventProcessor -> {
                // Shut down this streaming processor.
                trackingEventProcessor.shutDown();
                // Reset the tokens to prepare the processor with the start index for the replay.
                trackingEventProcessor.resetTokens(
                        GapAwareTrackingToken.newInstance(restartIndex - 1, Collections.emptySortedSet()));
                // Start the processor to initiate the replay.
                trackingEventProcessor.start();
            });
}
When I use the GapAwareTrackingToken, I get this exception:
[] - Resolved [java.lang.IllegalArgumentException: Incompatible token type provided.]
I see that there is also a GlobalSequenceTrackingToken I could use, but I don't see any documentation about when each can/should be used.
The main "challenge" when doing a partial reset, is that you need to be able to tell where to reset to. In Axon, the position in a stream is defined with a TrackingToken.
The source that you read from will provide you with such a token with each event that it provides. However, when you're doing a reset, you probably didn't store the relevant token while you were consuming those events.
You can also create tokens using any StreamableMessageSource. Generally, this is your Event Store, but if you read from other sources, it could be something else, too.
The StreamableMessageSource provides 4 methods to create a token:
createHeadToken - the position at the most recent edge of the stream, where only new events will be read
createTailToken - the position at the very beginning of the stream, allowing you to replay all events.
createTokenAt(Instant) - the most recent position in the stream that will return all events created on or after the given Instant. Note that some events may still have a timestamp earlier than the given one, as event creation and event storage aren't guaranteed to happen in the same order.
createTokenSince(Duration) - similar to createTokenAt, but accepting an amount of time to go back.
So in your case, createTokenAt should do the trick.
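For example, here is a minimal sketch of a time-based partial replay. The injected eventStore field (any StreamableMessageSource will do) and the processor name are assumptions for the example; resetTokens(TrackingToken) is the same method used in the question:
public void replayEventsSince(String processorName, Instant startTime) {
    eventProcessingConfiguration
            .eventProcessorByProcessingGroup(processorName, TrackingEventProcessor.class)
            .filter(processor -> !processor.isReplaying())
            .ifPresent(processor -> {
                processor.shutDown();
                // Let the event store build the appropriate token type,
                // instead of constructing an implementation-specific token by hand.
                TrackingToken startPosition = eventStore.createTokenAt(startTime);
                processor.resetTokens(startPosition);
                processor.start();
            });
}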

Corda - Does the Finality flow end before states are recorded in all participants' vaults?

Recently I came across a problem: when the flow ends on the initiator's node and I immediately query the output state of the transaction in the vaults of all transaction participant nodes, the state is present only on the initiator's node; only after a while does it appear in the vaults of the other participants' nodes.
The documentation here says "Send the transaction to the counterparty for recording", but it does not say that it will wait until the counterparty has successfully recorded the transaction and its states in their vault, which sort of confirms that what I am facing is actually the way Corda is implemented and not a bug.
On the other hand, it seems illogical to end a flow without being sure that everything finished successfully on the counterparty node and all the states were written to their vaults.
I also looked into the code and reached this method in ServiceHubInternal, which seems to be responsible for recording states in the vault.
fun recordTransactions(statesToRecord: StatesToRecord,
                       txs: Collection<SignedTransaction>,
                       validatedTransactions: WritableTransactionStorage,
                       stateMachineRecordedTransactionMapping: StateMachineRecordedTransactionMappingStorage,
                       vaultService: VaultServiceInternal,
                       database: CordaPersistence) {
    database.transaction {
        require(txs.isNotEmpty()) { "No transactions passed in for recording" }
        val orderedTxs = topologicalSort(txs)
        val (recordedTransactions, previouslySeenTxs) = if (statesToRecord != StatesToRecord.ALL_VISIBLE) {
            orderedTxs.filter(validatedTransactions::addTransaction) to emptyList()
        } else {
            orderedTxs.partition(validatedTransactions::addTransaction)
        }
        val stateMachineRunId = FlowStateMachineImpl.currentStateMachine()?.id
        if (stateMachineRunId != null) {
            recordedTransactions.forEach {
                stateMachineRecordedTransactionMapping.addMapping(stateMachineRunId, it.id)
            }
        } else {
            log.warn("Transactions recorded from outside of a state machine")
        }
        vaultService.notifyAll(statesToRecord, recordedTransactions.map { it.coreTransaction }, previouslySeenTxs.map { it.coreTransaction })
    }
}
It does not seem to me that this method is doing anything asynchronous, so I am really confused.
So the actual question is:
Does the initiator flow in Corda actually wait until all the relevant states are recorded in the vaults of all participant nodes before it finishes, or does it finish right after it sends the states to the participant nodes for recording, without waiting for confirmation from their side that the states were recorded?
Edited
So in case Corda by default does not wait for counterparty flows to store states in their vaults, but my implementation needs this behaviour anyway, would the following be a good solution?
At the very end of the initiator flow, before returning from the method, call receiveAll in order to suspend and wait. Then, at the very end of the receiver flow, before returning from the method, do a vault query with trackBy to wait until the state of interest is recorded in the vault and, whenever it is, call sendAll to notify the initiator's receiveAll. Finish the initiator's flow only when it has received confirmation from all receivers.
Would this be a normal approach to solve this problem? Could it have any drawbacks or side effects that you can think of?
Corda is able to handle your scenario; it is actually explained here under the Error handling behaviour section. Below is an excerpt, but I recommend reading the full section:
To recover from this scenario, the receiver’s finality handler will automatically be sent to the Flow Hospital where it’s suspended and retried from its last checkpoint upon node restart
The initiator flow is not responsible for storing the states in the responder's vault, so there is no storage confirmation from the responder; the responder has already checked the transaction and provided its signatures. From the initiator's point of view, it's all good once the transaction has been notarised and stored on its side; it is up to the responder to manage errors in its storage phase, as mentioned in the previous comment.
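That said, if you do want an explicit confirmation, a simpler variant of the handshake proposed in the edit is possible, because ReceiveFinalityFlow only returns after the transaction has been recorded on the responder's side, so no trackBy is needed. The sketch below is illustrative only; the flow names and the Boolean acknowledgement payload are hypothetical:
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.identity.Party
import net.corda.core.transactions.SignedTransaction
import net.corda.core.utilities.unwrap

@InitiatingFlow
class RecordWithAckFlow(private val stx: SignedTransaction,
                        private val counterparties: List<Party>) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val sessions = counterparties.map { initiateFlow(it) }
        val notarised = subFlow(FinalityFlow(stx, sessions))
        // Suspend until every counterparty confirms it has recorded the transaction.
        sessions.forEach { session -> session.receive<Boolean>().unwrap { it } }
        return notarised
    }
}

@InitiatedBy(RecordWithAckFlow::class)
class RecordWithAckResponder(private val otherSide: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // ReceiveFinalityFlow verifies and records the transaction (and its
        // states) in this node's vault before returning.
        subFlow(ReceiveFinalityFlow(otherSide))
        // Explicit acknowledgement back to the initiator.
        otherSide.send(true)
    }
}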

Why does the flutter firestore plugin not worry about closing its sinks (snapshot streams)?

In the flutter firestore codebase you can find a comment about the stream it creates when you run snapshots() on a query.
// It's fine to let the StreamController be garbage collected once all the
// subscribers have cancelled; this analyzer warning is safe to ignore.
StreamController<QuerySnapshotPlatform> controller; // ignore: close_sinks
I want to wrap my resulting snapshot streams with a BehaviorSubject so I can keep track of the latest entry. This is useful when I have one stream at the top of a page that I want to consume in different widgets farther down the tree without reloading the stream each time. Without keeping track in a BehaviorSubject (or elsewhere), a widget that starts listening to the stream later does not get the most recent information from Firestore, because it missed that event.
Can I also not worry about closing the behavior subject I am going to create as it will be garbage collected when there are no more listeners? Or is there another way to achieve what I am wanting?
I'm picturing code like this:
final snapshotStream = _firestore.collection('users').snapshots();
final behaviorSubjectStream = BehaviorSubject();
behaviorSubjectStream.addStream(snapshotStream);
return behaviorSubjectStream;
This will get a complaint that I don't close the behaviorSubjectStream. Is it ok to ignore?
That depends on how you listen to the subject.
From what you describe, it sounds safe to ignore the hint. When the subscriptions that listen to the subject are cancelled, the subject will be cancelled as well (when the garbage collector finds it).
There are situations where you have a subscription that is still listening, but you want the subject to stop emitting. In that case you will need to close() the subject.
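For instance, here is a minimal sketch of such a wrapper; the class and member names are hypothetical, and it assumes the cloud_firestore and rxdart packages:
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:rxdart/rxdart.dart';

class LatestUsersSnapshot {
  final BehaviorSubject<QuerySnapshot> _subject = BehaviorSubject<QuerySnapshot>();

  LatestUsersSnapshot(Firestore firestore) {
    _subject.addStream(firestore.collection('users').snapshots());
  }

  // New listeners immediately receive the most recent snapshot.
  Stream<QuerySnapshot> get stream => _subject.stream;

  // Only needed if you want the subject to stop emitting while
  // subscriptions are still listening; otherwise garbage collection
  // takes care of it, as described above.
  void dispose() => _subject.close();
}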
You can test that the subject is correctly cancelled by adding:
behaviorSubjectStream.onCancel = () {
  print("onCancel");
};
Then you can test it by playing around with your app.

In Disassembler pipeline component - Send only last message out from GetNext() method

I have a requirement where I will be receiving a batch of records. I have to disassemble the batch and insert the data into the DB, which I have completed. But I don't want any message to come out of the pipeline except the last, custom-made message.
I have extended FFDasm and called Disassemble(); then GetNext() returns every debatched message, and they fail as there are no subscribers. I want to send nothing out from GetNext() until the last message.
Please help if anyone has already implemented this requirement. Thanks!
If you want to send only one message out of GetNext, you have to call the base Disassemble in your Disassemble method and get all the messages (you can enqueue the messages so you can manage them in GetNext), like this:
public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);

        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message.
            if (this.messagesCount == 0)
            {
                // _messages is a Queue<IBaseMessage>.
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors.
    }
}
Then, in the GetNext method, you have the queue and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // Return null once the queue is empty so the pipeline knows there are no more messages.
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}
The recommended approach is to publish the messages after the disassemble stage to the BizTalk message box DB and use a DB adapter to insert them into the database. Publishing messages to the message box and using an adapter will give you more options for design/performance and will decouple your DB insert from the receive logic. Also, if in future you want to reuse the same message for something else, you would be able to do so.
Even then, if for any reason you have to insert from the pipeline component, do the following.
Please note, the GetNext() method of the IDisassemblerComponent interface is not invoked until the Disassemble() method is complete. Based on this, you can use the following approach, assuming you have encapsulated FFDASM within your own custom component:
Insert all the disassembled messages in the Disassemble method itself and enqueue only the last message in a Queue class variable. In GetNext(), return the dequeued message; when the queue is empty, return null. You can optimize the DB insert by inserting multiple rows at a time and saving them in batches, depending on volume. Please note this approach may encounter performance issues depending on the size of the file and the number of rows being inserted into the DB.
I am calling DBInsert SP from GetNext()
Oh...so...sorry to say, but you're doing it wrong and actually creating a bunch of problems doing this. :(
This is a very basic scenario to cover with BizTalk Server. All you need is:
A Pipeline Component to promote BTS.InterchangeID (see the sketch after this list).
A Sequential Convoy Orchestration correlating on BTS.InterchangeID and using Ordered Delivery.
In the Orchestration, call the SP, transform to SOAP, call the SOAP endpoint, whatever you need.
As you process the messages, check for BTS.LastInterchangeMessage, then perform your close-out logic.
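As a rough sketch of the promotion step only (the surrounding IComponent plumbing is omitted), promoting the system InterchangeID property could look something like this:
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    const string btsSystemPropertiesNs =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    // InterchangeID is written to the context by the messaging engine,
    // but it is not promoted, so an orchestration cannot correlate on it yet.
    object interchangeId = pInMsg.Context.Read("InterchangeID", btsSystemPropertiesNs);
    if (interchangeId != null)
    {
        // Promote it so the sequential convoy can correlate on BTS.InterchangeID.
        pInMsg.Context.Promote("InterchangeID", btsSystemPropertiesNs, interchangeId);
    }
    return pInMsg;
}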
To be 100% clear, there are no practical 'performance' issues here. By guessing about 'performance' you've actually created the problem you were trying to solve, and created a bunch of support issues for later on, sorry again. :( There is no reason not to use an Orchestration.
As noted, 25K records isn't a lot. Be sure to have the Receive Location and Orchestration in different Hosts.

Does C++ Actor Framework guarantee message order?

Can C++ Actor Framework be used in such a way that it guarantees message ordering between two actors? I couldn't find anything about this in the manual.
If you have only two actors communicating directly, CAF guarantees that messages arrive in the order they have been sent. Only multi-hop scenarios can cause non-determinism and message reordering.
auto a = spawn(A);
self->send(a, "foo");
self->send(a, 42); // always arrives after "foo"
At the receiving end, it is possible to change the message processing order by changing the actor behavior with become:
[=](int) {
  self->become(
    keep_behavior,
    [=](const std::string&) {
      self->unbecome();
    }
  );
}
In the above example, the actor will process the int before the string message, even though they arrived in the opposite order in the actor's mailbox.
