I'm sending messages to a device using asynchronous write operations. When the app closes I need to write 2 messages to the device, but only 1 message gets successfully written.
Each write operation is chained through a message queue: while the queue still holds messages, the next write is issued from the previous write's completion callback, so the writes go out sequentially. The code below reaches the first completion callback, but the second is never reached before the app closes. I tried adding a Windows Sleep call between and after the async operations, but that didn't work. I also waited on the completion callback in a while loop to see whether the 2nd operation ever completes; it never does.
// Start the asynchronous GATT write.
ComPtr<IAsyncOperation<GattCommunicationStatus>> writeOp;
GattWriteOption option = GattWriteOption_WriteWithoutResponse;
hr = customCharacteristic->WriteValueWithOptionAsync(buffer.Get(), option, &writeOp);
// Register the completion handler; it is invoked on a thread-pool thread.
hr = writeOp->put_Completed(Callback<IAsyncOperationCompletedHandler<GattCommunicationStatus>>(
    this, &DataGloveBluetooth::OnCharacteristicWriteComplete).Get());
This is only an issue on app close; the 2nd operation seemingly never gets to its callback. Also, the messages are too large to combine into one, so I need to be able to send more than 1 message.
Is there a proper way I can wait? Here is some pseudo-code to explain the ordering:
WriteMessage(LED_RESET); // adds to the message queue, then starts the async op
WriteMessage(CLOSE);     // queued; its async op starts once the first message is sent; its completion callback is never reached
Sleep(5000);             // no sleep amount helps the 2nd message finish
Related: my painful hunt for this feature is fully described in a disgustingly long question, Several last offsets aren't getting commited with reactive kafka, which shows my multiple attempts with different failures.
How would one subscribe to ReactiveKafkaConsumerTemplate<String, String> so that it processes records synchronously (for simplicity) and acks/commits every 2 s AND upon manual cancellation of the stream? I.e. it works, acking/committing every 2 s; then a signal comes via REST/JMX/whatever, the stream terminates, and the last processed Kafka record is acked/committed.
After a lot of attempts I was able to come up with the following solution. It seems to work, but it's kind of ugly, because it's very "white-box": the outer flow depends heavily on what happens inside other methods. Please criticise and suggest improvements. Thanks.
kafkaReceiver.receive()
    .flatMapSequential(receivedKafkaRecord -> processKafkaRecord(receivedKafkaRecord), 16)
    .takeWhile(e -> !stopped)
    .sample(configuration.getKafkaConfiguration().getCommitInterval())
    .concatMap(offset -> {
        log.debug("ack/commit offset {}", offset.offset());
        offset.acknowledge();
        return offset.commit();
    })
    .doOnTerminate(() -> log.info("stopped."));
What didn't work:
A) You cannot use Disposable.dispose, since that would break the stream and your latest processed record wouldn't be committed.
B) You cannot put take(...) on top of the stream, as that would cancel the stream and you wouldn't be able to commit either.
C) I'm not sure how I'd be able to incorporate error handling here.
Because of the above, stream termination is triggered by a boolean field named stopped, which can be set from anywhere.
Flow explained:
flatMapSequential — used because of the inner parallelism and the need to commit offset N only once all offsets up to N-1 have been processed.
processKafkaRecord returns Mono<ReceiverOffset>, i.e. the offset of the processed record, so there is something to ack/commit. When stopped, the method skips processing and returns Mono.empty().
takeWhile stops the stream once stopped is set; it has to be placed here because a whole sample interval might consist of nothing but "empties".
The rest is simple: sample at the given interval and commit in order. If sample emits nothing, the commit is skipped. Finally, we log that the stream was cancelled.
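For illustration only, the shape of this chain can be mimicked in RxJS (TypeScript) with a simulated source. One caveat: Reactor's sample flushes the last seen value when the stream completes, which appears to be what gets the final offset committed on stop, whereas RxJS's sample drops it; so this sketch only demonstrates the periodic commit cadence:
import { interval, of } from 'rxjs';
import { concatMap, finalize, sample, takeWhile } from 'rxjs/operators';

let stopped = false; // set from anywhere to stop the stream

// Simulated source: pretend each tick is the offset of a processed record.
interval(100)
  .pipe(
    takeWhile(() => !stopped),  // stop once the flag is set
    sample(interval(2000)),     // keep only the latest offset per 2 s window
    concatMap((offset) => {
      console.log(`ack/commit offset ${offset}`);
      return of(offset);        // stand-in for offset.commit()
    }),
    finalize(() => console.log('stopped.'))
  )
  .subscribe();

setTimeout(() => (stopped = true), 7000); // simulate the external stop signal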
If anyone knows how to improve this, please criticise.
In the example on dart.dev, the Future prints its message after the main function has finished.
Why does the Future keep working after the main function is done? At first glance, once the main function completes, the entire work of the program should be complete (and the Future should be cancelled).
The example code:
Future<void> fetchUserOrder() {
  // Imagine that this function is fetching user info from another service or database.
  return Future.delayed(Duration(seconds: 2), () => print('Large Latte'));
}

void main() {
  fetchUserOrder();
  print('Fetching user order...');
}
The program prints
Fetching user order...
Large Latte
I expected just the following:
Fetching user order...
This has to do with the nature of futures and asynchronous programming. Behind the scenes, Dart manages something called the asynchronous queue. When you initiate a future (either manually, as you did with Future.delayed, or implicitly by calling a method marked async), that function's execution goes into the queue whenever it gets deferred. Every cycle, when Dart's main thread is idle, it checks the futures in the queue to see if any of them are no longer blocked, and if so, it resumes their execution.
A Dart program will not terminate while futures are in the queue. It will wait for all of them to either complete or error out.
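Node.js behaves the same way: the process stays alive while a timer or other async work is pending in its event loop, so the behaviour is easy to reproduce. A minimal TypeScript sketch of the same example (the names mirror the Dart code):
function fetchUserOrder(): Promise<void> {
  // The pending timer keeps the Node.js process alive,
  // just as the pending Future keeps the Dart program alive.
  return new Promise((resolve) =>
    setTimeout(() => {
      console.log('Large Latte');
      resolve();
    }, 2000),
  );
}

fetchUserOrder();
console.log('Fetching user order...');
// Prints 'Fetching user order...' immediately and 'Large Latte' two seconds
// later; only then does the process exit.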
The first question is: when does the event loop start?
I read on a site that it starts after the main method.
But then why, when we try something like this:
main() async {
  Future(() => print('future1'));
  await Future(() => print('future2'));
  print('end of main');
}
// the output is:
// future1
// future2
// end of main
is it that in this example the event loop starts when we use the await keyword, and after the event loop reaches future2 it is paused?
Or am I wrong :(
The second question is how events are added to the event queue.
If it's FIFO, why in this example does future2 complete before future1?
main() {
  Future.delayed(Duration(seconds: 5), () => print('future1'));
  Future.delayed(Duration(seconds: 2), () => print('future2'));
}
The event loop runs when nothing else is running (e.g. the main method is done, or you are waiting for some future to complete).
Your first example makes sense because the first line puts an event on the event queue, so the first item in the queue is print('future1'). On the next line you put another event on the queue, which calls print('future2'), and you then await that event.
Since your main method is now just waiting, the event loop gets to run. The first event on the queue was print('future1'), so that is executed first. But the main method is still waiting for the print('future2') future to complete, so the event loop takes the next event to execute, which is print('future2').
Since that event was the one the main method was waiting for (and there are no more events on the event queue), main() resumes and runs the last call, print('end of main').
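The same ordering can be reproduced in Node.js, whose event loop behaves analogously; approximating Dart's Future(() => ...) with setTimeout(..., 0) is an assumption made for this sketch, not an exact equivalence:
async function main(): Promise<void> {
  // Queued first, so the event loop runs it first.
  setTimeout(() => console.log('future1'), 0);
  // Queued second; main suspends at the await until it completes.
  await new Promise<void>((resolve) =>
    setTimeout(() => {
      console.log('future2');
      resolve();
    }, 0),
  );
  console.log('end of main');
}

main();
// Output: future1, future2, end of main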
In your second example, you assume that Future and Future.delayed are the same, which they are not. With Future.delayed, no event is put on the event queue up front. Instead, a timer runs outside the VM that knows when it should next trigger, and it puts an event on the queue only once the timer has expired. That is why future2 is executed first: its timer expires first.
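Timers in Node.js work the same way: the callback is put on the queue only when its timer expires, not when setTimeout is called, so the shorter delay wins regardless of registration order:
// Registered first, but its timer expires last.
setTimeout(() => console.log('future1'), 5000);
// Registered second, but queued first, when its 2-second timer expires.
setTimeout(() => console.log('future2'), 2000);
// Output: future2 (after 2 s), then future1 (after 5 s)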
I'm writing functionality for receiving messages from an Azure Service Bus topic and deleting a specified message from that topic. Before deleting the message, I need to send it to another topic.
static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    // Process the message.
    Console.WriteLine($"Received message: WorkOrderNumber:{message.MessageId} SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
    Console.WriteLine("Enter the WorkOrder Number you want to delete:");
    string workOrderNumber = Console.ReadLine();
    if (message.MessageId == workOrderNumber)
    {
        // TODO: Post the message to the other (priority) topic, then delete it from the current topic.
        var status = await SendMessageToBus(message);
        if (status)
        {
            await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
            Console.WriteLine($"Successfully deleted your message from Topic:{NormalTopicName}-WorkOrderNumber:" + message.MessageId);
        }
        else
        {
            Console.WriteLine($"Failed to send message to PriorityTopic:{PriorityTopicName}-WorkOrderNumber:" + message.MessageId);
        }
    }
    else
    {
        Console.WriteLine($"Failed to delete your message from Topic:{NormalTopicName}-WorkOrderNumber:" + workOrderNumber);
        // Complete the message so that it is not received again.
        // This can be done only if the subscriptionClient is created in ReceiveMode.PeekLock mode (which is the default).
        await normalSubscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        // Note: Use the cancellationToken passed as necessary to determine if the subscriptionClient has already been closed.
        // If subscriptionClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
        // to avoid unnecessary exceptions.
    }
}
My issues with this approach are:
It's not scalable: what if the wanted message is the 50th in the collection? We'd have to iterate through the preceding 49 messages and complete (i.e. delete) each one.
It's a long-running process.
To avoid these problems, I want to fetch the specified message from the queue by index or sequence number, so that I can then delete it from the topic.
Can anyone suggest how to resolve this?
So if I understand your question and comments correctly, you are trying to do something like this:
Incoming messages come into either a standard topic or a priority topic.
Some process checks messages in the standard topic and "moves" them to the priority topic based on some criteria, by deleting them from the standard topic and adding them to the priority topic.
Messages are processed as normal.
As Sean noted, step 2 simply won't work. Service Bus is a first-in-first-out-ish system where a consumer simply picks up the next available message. You can sort through a queue by pulling out all the messages and abandoning/completing them based on specific criteria, but scaling is a problem. In addition, you can think of each topic subscription as its own separate queue: removing a message from one subscription does not remove it from any of the other subscriptions.
Instead of trying to pull everything out of the topics and then put back the ones you want to keep, I would suggest adding a sorting queue in front of the two topics. If you don't need to sort the high-priority messages, you could put this sorting process in front of the standard-priority topic only.
This is how the process would work:
Incoming messages are added to a sorting queue. Note that this is a single queue, not a topic: at this point in the process we want to ensure there is only one copy of each message.
A sorting process moves messages from the sorting queue into either the standard or the priority topic, as appropriate (a sketch follows this list). Using something like Azure Functions, you can scale this process fairly easily.
Messages are processed from the topics as normal.
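As a rough illustration of the sorting step, here is a minimal sketch using the @azure/service-bus v7 SDK for Node.js/TypeScript; the entity names, the connection-string variable, and the isPriority predicate are made up for the example:
import { ServiceBusClient, ServiceBusReceivedMessage } from '@azure/service-bus';

const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING!);
const receiver = client.createReceiver('sorting-queue');       // hypothetical queue
const standardSender = client.createSender('standard-topic');  // hypothetical topic
const prioritySender = client.createSender('priority-topic');  // hypothetical topic

// Made-up business rule deciding which topic a message belongs in.
function isPriority(message: ServiceBusReceivedMessage): boolean {
  return message.applicationProperties?.priority === 'high';
}

async function sortBatch(): Promise<void> {
  const messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
  for (const message of messages) {
    const sender = isPriority(message) ? prioritySender : standardSender;
    // Forward a copy to the appropriate topic, then remove the
    // original from the sorting queue (PeekLock is the default).
    await sender.sendMessages({
      body: message.body,
      applicationProperties: message.applicationProperties,
    });
    await receiver.completeMessage(message);
  }
}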
My Node.js application accepts connections from the outside. Each connection handler reads a SET on Redis, possibly modifies the set itself, then moves on. The problem is that in the meantime another concurrent connection can read the same SET and try to update it, or decide its next step based on what it reads.
I know that Redis does its best to be atomic, but that is not quite sufficient for my use case. Think about it this way: the set is read to determine whether it's FULL (there is a business rule for that). If it's FULL, then something happens. The problem is that if there is only one slot left, two semi-concurrent connections could each think they are the last one, and I get an overflow.
Is there a way to keep a connection "waiting" for the very short time another one may need to update the set's state?
I think this is a corner case, very very unlikely... but you know :)
Using another key as the "lock" is an option, or does it stink?
How about using BLPOP to do the locking? BLPOP key 5 waits up to 5 seconds for an item at key. At startup, put an item at key (to mark that the queue is not empty, i.e. the lock is free). The connection acquiring the lock removes the item from key; the next connection then can't acquire the lock, because the list is empty. But BLPOP has the following nice property:
Multiple clients can block for the same key. They are put into a queue, so the first to be served will be the one that started to wait earlier, in a first-BLPOP first-served fashion.
When the connection that acquired the lock has finished its task, it should put the item back in the queue; the next waiting connection can then acquire the lock (the item).
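A minimal sketch of this scheme with the ioredis client; the key name and the critical section are placeholders:
import Redis from 'ioredis';

const redis = new Redis();
const LOCK_KEY = 'lock:myset'; // hypothetical key name

// Run once at startup so the lock starts out free.
async function initLock(): Promise<void> {
  await redis.del(LOCK_KEY);
  await redis.lpush(LOCK_KEY, 'token');
}

async function withLock(criticalSection: () => Promise<void>): Promise<void> {
  // Blocks for up to 5 seconds waiting for the token; resolves to null on timeout.
  const acquired = await redis.blpop(LOCK_KEY, 5);
  if (!acquired) throw new Error('could not acquire lock within 5 s');
  try {
    await criticalSection(); // e.g. read the set, check FULL, update it
  } finally {
    // Hand the token back so the next blocked client is served, FIFO.
    await redis.lpush(LOCK_KEY, 'token');
  }
}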
You may be looking for WATCH with MULTI/EXEC. Here's the pattern that both threads follow:
WATCH sentinel_key
GET value_of_interest
if (value_of_interest == FULL)
    MULTI
    SET sentinel_key = foo
    EXEC
    if (EXEC returned 1, i.e. succeeded)
        do_something();
    else
        do_nothing();
else
    UNWATCH
The way this works is that all of the commands between MULTI and EXEC are queued up but not actually executed until EXEC is called. When EXEC is called, before actually executing the queued instructions it checks to see if sentinel_key has changed at all since the WATCH was set; if it has, it returns (nil) and the queued commands are discarded. Otherwise the commands are executed atomically as a block, and it returns the number of commands executed (1 in this case), letting you know you won the race and do_something() can be called.
It's conceptually similar to the fork()/exec() Unix system calls - the return value from fork() tells you which process you are (parent or child). In this case it tells you whether you won the race or not.
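Since the question is about Node.js, here is a rough sketch of that pattern with the ioredis client; the key names, the FULL check, and the two handlers are placeholders:
import Redis from 'ioredis';

const redis = new Redis();

async function tryClaimLastSlot(): Promise<void> {
  // WATCH must run on the same connection as the subsequent MULTI/EXEC.
  await redis.watch('sentinel_key');
  const value = await redis.get('sentinel_key');
  if (value === 'FULL') { // placeholder for your business rule
    // exec() resolves to null if sentinel_key changed after WATCH,
    // meaning the queued commands were discarded and we lost the race.
    const result = await redis.multi().set('sentinel_key', 'foo').exec();
    if (result !== null) {
      doSomething(); // we won the race
    } else {
      doNothing();   // somebody else got there first; retry if desired
    }
  } else {
    await redis.unwatch();
  }
}

// Placeholder handlers for the two outcomes.
function doSomething(): void { console.log('won the race'); }
function doNothing(): void { console.log('lost the race'); }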