Is there a way to send multiple transactions to a counterparty without looping? - Corda

Is there a way to send multiple transactions to a counterparty without using a loop in the flow? Sending one tx at a time in a loop impacts performance significantly, since suspendable behaviour doesn't work well with a large volume of txes.
At some point in time, T, an initiator may want to send N transactions to a regulator/counterparty. But the current SendTransactionFlow only sends one tx at a time, and on the other side ReceiveTransactionFlow records them one by one.
My current code:
relevantTxes.forEach {
    subFlow(SendTransactionFlow(session, it))
}
Is there a way to do something along the lines of:
subFlow(SendTransactionFlow(session, relevantTxes))

You can send the list of transactions without invoking a subflow by using send and receive.
On the sender's side:
val session = initiateFlow(otherParty)
session.send(relevantTxes)
On the receiver's side:
val relevantTxes = session.receive<List<SignedTransaction>>().unwrap { it }
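For context, here is a rough sketch of what the complete flow pair might look like; the flow names are illustrative, and note that, unlike SendTransactionFlow/ReceiveTransactionFlow, a plain send/receive does not resolve or verify the transactions' dependencies, so it only fits when the receiver already trusts the sender or has the backchain:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatedBy
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.Party
import net.corda.core.transactions.SignedTransaction
import net.corda.core.utilities.unwrap

@InitiatingFlow
class SendAllTransactionsFlow(
    private val counterparty: Party,
    private val relevantTxes: List<SignedTransaction>
) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val session = initiateFlow(counterparty)
        // Send the whole list in a single suspension instead of N subflow calls.
        session.send(relevantTxes)
    }
}

@InitiatedBy(SendAllTransactionsFlow::class)
class ReceiveAllTransactionsFlow(private val session: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val relevantTxes = session.receive<List<SignedTransaction>>().unwrap { it }
        // Record everything in one go; no dependency resolution happens here.
        serviceHub.recordTransactions(relevantTxes)
    }
}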

Related

How can I stream arrays with gRPC?

I want to transport an array of int64.
I looked up how to do this. In my proto file I either need a stream:
service myService {
  rpc GetValues(myRequest) returns (stream myResponse);
}

message myRequest {
}

message myResponse {
  int64 values = 1;
}
or a repeated response:
message myRepeatedResponse {
  repeated int64 value = 1;
}
Is one option better than the other?
My use case is that I want to read the latest x entries from my Database and send these values as an array to my client.
But I didn't understand how I am supposed to do it, because when assigning the values in the overridden method of MyService.MyServiceBase I can only pass values of type 'long' and not 'long[]'.
For the stream vs repeated question, the answer is: it depends.
The distinction between the two is that:
streaming sends one or more messages (each message possibly containing repeated fields)
unary sends a single message containing a repeated field
So, I think your decision is based upon:
how the server obtains the repeated field.
the size of the message (including the repeated field that's being sent)
the 'integrity' of the message
If the server is unable to obtain the entirety of the repeated field in one go, then your answer is simpler; the server will need to stream the messages (including the repeated field) as it obtains them.
By 'integrity' of the message I mean: is there some reason why decomposing the message into many smaller messages (to stream) would be problematic? If the repeated field must be transmitted as a single chunk, almost as a transactional unit, then you may prefer not to stream the message in chunks.
You should also consider the consequence on your client(s). Are your clients able to receive one larger message or, would many smaller messages be preferred, e.g. an IoT SoC device that's resource constrained.
Otherwise, if individual messages are large [1], then you'd want to decompose them into smaller 'bites' and stream them.
[1]: Large Data Set; note that there is a hard limit of 2 GB per message.
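To address the 'long' vs 'long[]' confusion in the question: a repeated field is never assigned an array directly; you add the individual values to the generated message builder. Below is a minimal sketch in Kotlin rather than C# (the generated builder API is analogous); the class names are assumed to be whatever protoc generates from the .proto above, and the Flow-based variant assumes the grpc-kotlin coroutine stubs for the streaming option:

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.asFlow
import kotlinx.coroutines.flow.map

// Unary/repeated option: pack all values into a single response message.
fun buildRepeatedResponse(values: List<Long>): MyRepeatedResponse =
    MyRepeatedResponse.newBuilder()
        .addAllValue(values)                 // repeated int64 value = 1;
        .build()

// Streaming option: emit one myResponse per value (grpc-kotlin models a
// server-streaming RPC as a Flow of response messages).
fun streamValues(values: List<Long>): Flow<MyResponse> =
    values.asFlow().map { v ->
        MyResponse.newBuilder().setValues(v).build()   // int64 values = 1;
    }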

Corda - Does FinalityFlow end before states are recorded in all participants' vaults?

Recently I came across a problem where it seems that, when the flow ends on the initiator's node and I immediately query the output state of the transaction in the vaults of all transaction participant nodes, the state is present only on the initiator's node, and only after a while does it appear in the vaults of the other participants' nodes.
The documentation here says "Send the transaction to the counterparty for recording", and it does not say that it will wait until the counterparty has successfully recorded the transaction and its states in their vault, which sort of confirms that what I am facing is actually the way Corda is implemented and not a bug.
On the other hand, it seems not that logical to end a flow without being sure that everything finished successfully on the counterparty node and all the states were written to their vaults.
I also looked into the code and reached this method in ServiceHubInternal, which seems to be responsible for recording states in the vault:
fun recordTransactions(statesToRecord: StatesToRecord,
                       txs: Collection<SignedTransaction>,
                       validatedTransactions: WritableTransactionStorage,
                       stateMachineRecordedTransactionMapping: StateMachineRecordedTransactionMappingStorage,
                       vaultService: VaultServiceInternal,
                       database: CordaPersistence) {
    database.transaction {
        require(txs.isNotEmpty()) { "No transactions passed in for recording" }
        val orderedTxs = topologicalSort(txs)
        val (recordedTransactions, previouslySeenTxs) = if (statesToRecord != StatesToRecord.ALL_VISIBLE) {
            orderedTxs.filter(validatedTransactions::addTransaction) to emptyList()
        } else {
            orderedTxs.partition(validatedTransactions::addTransaction)
        }
        val stateMachineRunId = FlowStateMachineImpl.currentStateMachine()?.id
        if (stateMachineRunId != null) {
            recordedTransactions.forEach {
                stateMachineRecordedTransactionMapping.addMapping(stateMachineRunId, it.id)
            }
        } else {
            log.warn("Transactions recorded from outside of a state machine")
        }
        vaultService.notifyAll(statesToRecord, recordedTransactions.map { it.coreTransaction }, previouslySeenTxs.map { it.coreTransaction })
    }
}
And this method does not seem to be doing anything async, so I am really confused.
So the actual question is:
Does the initiator flow in Corda actually wait until all the relevant states are recorded in the vaults of all participant nodes before it finishes, or does it finish right after it sends the states to the participant nodes for recording, without waiting for confirmation from their side that the states were recorded?
Edited
So, in case Corda by default does not wait for counterparty flows to store states in their vaults, but my implementation needs this behaviour anyway, would the following be a good solution?
At the very end of the initiator flow, before returning, call receiveAll in order to suspend and wait. Then, at the very end of the receiver flow, before returning, do a vault query with trackBy to wait until the state of interest is recorded in the vault; once it is, call sendAll to notify the initiator's receiveAll. The initiator's flow finishes only when it has received confirmation from all receivers.
Would that be a reasonable approach to this problem? Can it have any drawbacks or side effects that you can think of?
Corda is able to handle your scenario; it is actually explained here under the Error handling behaviour section. Below is an excerpt, but I recommend reading the full section:
To recover from this scenario, the receiver’s finality handler will automatically be sent to the Flow Hospital where it’s suspended and retried from its last checkpoint upon node restart
The Initiator flow is not responsible for the storage of the states in the Responder's vault, so there is no storage confirmation from the Responder; it has already checked the transaction and provided its signatures. From the Initiator's point of view everything is fine once the transaction has been notarised and stored on its side; it is up to the Responder to manage errors in its own storage phase, as mentioned in the previous comment.
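That said, if your design really does need an explicit confirmation before the initiator finishes, a simpler variant of the pattern described in the question is to exchange an acknowledgement over the existing sessions after finality. A rough sketch, assuming Corda 4 style APIs; the function names are illustrative:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.ReceiveFinalityFlow
import net.corda.core.transactions.SignedTransaction
import net.corda.core.utilities.unwrap

// Initiator side: run finality, then wait for each counterparty to confirm recording.
@Suspendable
fun FlowLogic<*>.finaliseAndAwaitAcks(stx: SignedTransaction, sessions: List<FlowSession>): SignedTransaction {
    val notarised = subFlow(FinalityFlow(stx, sessions))
    sessions.forEach { it.receive<Boolean>().unwrap { ack -> ack } }
    return notarised
}

// Responder side: ReceiveFinalityFlow returns only after the transaction has been
// recorded locally, so the ack below tells the initiator the vault write is done.
@Suspendable
fun FlowLogic<*>.receiveFinalityAndAck(otherSideSession: FlowSession) {
    subFlow(ReceiveFinalityFlow(otherSideSession))
    otherSideSession.send(true)
}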

How to get a specified message from an Azure Service Bus Topic and then delete it from the Topic?

I'm writing functionality for receiving messages from an Azure Service Bus Topic and deleting a specified message from the Topic. Before deleting that message, I need to send it to another Topic.
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: WorkOrderNumber:{message.MessageId} SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
    Console.WriteLine("Enter the WorkOrder Number you want to delete:");
    string workOrderNumber = Console.ReadLine();
    if (message.MessageId == workOrderNumber)
    {
        // TODO: Post message into the other (Priority) topic, then delete it from the current topic.
        var status = await SendMessageToBus(message);
        if (status == true)
        {
            await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
            Console.WriteLine($"Successfully deleted your message from Topic:{NormalTopicName}-WorkOrderNumber:" + message.MessageId);
        }
        else
        {
            Console.WriteLine($"Failed to send message to PriorityTopic:{PriorityTopicName}-WorkOrderNumber:" + message.MessageId);
        }
    }
    else
    {
        Console.WriteLine($"Failed to delete your message from Topic:{NormalTopicName}-WorkOrderNumber:" + workOrderNumber);
        // Complete the message so that it is not received again.
        // This can be done only if the subscriptionClient is created in ReceiveMode.PeekLock mode (which is the default).
        await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        // Note: Use the cancellationToken passed as necessary to determine if the subscriptionClient has already been closed.
        // If subscriptionClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
        // to avoid unnecessary exceptions.
    }
}
My issues with this approach are:
It's not scalable: what if the message is the 50th in the collection? We'd have to iterate through the 49 messages before it and complete each one.
It's a long-running process.
To avoid these problems, I want to get the specified message from the queue based on an index or sequence number, and then delete it from the topic.
So, can anyone suggest how to resolve this problem?
So if I understand your question and comments correctly, you are trying to do something like this:
Incoming messages come into either a standard topic or a priority topic.
Some process checks messages in the standard topic and "moves" them to the priority topic based on some criteria, by deleting them from the standard topic and adding them to the priority topic.
Messages are processed as normal.
As Sean noted, step 2 simply won't work. Service Bus is a first-in-first-out-ish system where a consumer simply picks up the next available message. You can sort through a queue by pulling out all the messages and abandoning/completing them based on specific criteria, but scaling is a problem. In addition, you can think of each topic subscription as its own separate queue: removing a message from one subscription does not remove it from any of the other subscriptions.
Instead of trying to pull everything out of the topics and then putting back the ones you want to keep, I would suggest adding a sorting queue in front of the two topics. If you don't need to sort the high-priority messages, you could put this sorting process in front of the standard-priority topic only.
This is how the process would work:
Incoming messages are added to a sorting queue. Note that this is a single queue, not a topic; at this point in the process we want to ensure there is only one copy of each message.
A sorting process moves messages from the sorting queue into either the standard or the priority topic as appropriate (see the sketch below). Using something like Azure Functions you can scale this process fairly easily.
Messages are processed from the topics as normal.
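To make the sorting process in step 2 concrete, here is a rough sketch in Kotlin against the newer azure-messaging-servicebus (Java) SDK rather than the Microsoft.Azure.ServiceBus client used in the question; the queue/topic names and the "priority" application property are purely illustrative assumptions:

import com.azure.messaging.servicebus.ServiceBusClientBuilder
import com.azure.messaging.servicebus.ServiceBusMessage
import java.time.Duration

fun main() {
    val connectionString = System.getenv("SERVICEBUS_CONNECTION_STRING")
    val builder = ServiceBusClientBuilder().connectionString(connectionString)

    // Single sorting queue in front of the two topics.
    val sortingReceiver = builder.receiver().queueName("sorting-queue").buildClient()
    val standardSender = builder.sender().topicName("standard-topic").buildClient()
    val prioritySender = builder.sender().topicName("priority-topic").buildClient()

    // Pull a batch from the sorting queue and route each message to the right topic.
    sortingReceiver.receiveMessages(10, Duration.ofSeconds(5)).forEach { received ->
        // Whatever business rule marks a message as high priority goes here;
        // an application property is assumed only for illustration.
        val isPriority = received.applicationProperties["priority"] == true
        val destination = if (isPriority) prioritySender else standardSender

        destination.sendMessage(ServiceBusMessage(received.body))
        sortingReceiver.complete(received)   // remove it from the sorting queue (PeekLock mode)
    }
}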

Does C++ Actor Framework guarantee message order?

Can C++ Actor Framework be used in such a way that it guarantees message ordering between two actors? I couldn't find anything about this in the manual.
If you have only two actors communicating directly, CAF guarantees that messages arrive in the order they have been sent. Only multi-hop scenarios can cause non-determinism and message reordering.
auto a = spawn(A);
self->send(a, "foo");
self->send(a, 42); // always arrives after "foo"
At the receiving end, it is possible to change the message processing order by changing the actor behavior with become:
[=](int) {
  self->become(
    keep_behavior,
    [=](const std::string&) {
      self->unbecome();
    }
  );
}
In the above example, the actor will process the int before the string message, even though they arrived in the opposite order in the actor's mailbox.

Client-Server how to identify two different methods

I am writing code for a client-server application and there are two possibilities:
The user will request a specific piece of information, A, to be transmitted.
The user will request a specific piece of information, B, to be transmitted.
I would like to know how the server side can identify what the client wants, and how the client should indicate that.
Any other ideas?
I know this is quite an old question, but I think a good idea is to use the Chain of Responsibility design pattern!
The idea is that you can use a single port and send your request to Receiver 1. Receiver 1 decides whether it can handle the request; if not, it passes the request to Receiver 2. Receiver 2 makes the same decision, and if it can handle the request, it sends the response back to the sender.
So we have the following properties:
Only one port is required.
The sender (the client, in other words) is only aware of the first receiver.
The responsible receiver returns a response directly to the sender/client, even if the sender/client is not aware of that specific receiver.
Reduced coupling.
Request handlers can be managed dynamically.
Furthermore, at the end of the chain you can add behaviour to return something like a final response, or a default response if no receiver is responsible for handling the request.
(The original answer illustrated this with a UML diagram and an example.)
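A minimal Kotlin sketch of the idea; all names here are illustrative and not tied to any particular networking library:

data class Request(val type: String, val payload: String)

abstract class Receiver(private val next: Receiver? = null) {
    fun handle(request: Request): String =
        process(request)                          // handled here...
            ?: next?.handle(request)              // ...or passed along the chain
            ?: "No receiver could handle request type '${request.type}'"

    // Returns a response, or null if this receiver is not responsible for the request.
    protected abstract fun process(request: Request): String?
}

class InfoAHandler(next: Receiver? = null) : Receiver(next) {
    override fun process(request: Request) =
        if (request.type == "A") "Information A for ${request.payload}" else null
}

class InfoBHandler(next: Receiver? = null) : Receiver(next) {
    override fun process(request: Request) =
        if (request.type == "B") "Information B for ${request.payload}" else null
}

fun main() {
    // The client only ever talks to the first receiver in the chain.
    val chain = InfoAHandler(InfoBHandler())
    println(chain.handle(Request("B", "some-args")))   // handled by InfoBHandler
    println(chain.handle(Request("C", "some-args")))   // falls through to the default response
}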
Depending on the size of the information, you can always transmit both pieces of information through one pipe and then pick out the needed one on the user side:
string data = // .. data transmitted.
string[] dataSplit = data.Split(SEPARATOR);
// dataSplit[0] is the type of information requested
switch (dataSplit[0])
{
    case "Name":
        // ...
        break;
    case "OS":
        // ...
        break;
}
Does that make sense?
