I'll try to keep the case as general as possible here: I'm writing a C++/CX application for Windows Phone 8.1 that manages a state which is changed in reaction to input coming, in turn, from different sources (e.g. the app UI or the network). I want to use a program loop that, for each source, waits for input from it and then modifies the state accordingly. The problem I'm having is that I could not find a good way to mirror the behavior of the await mechanism in C++/CX. Tasks seem to be the way to handle asynchronous data processing in C++/CX, but as far as I understand they are used for waiting for the result of a well-defined operation, whereas I need to wait for an asynchronous event to happen and then act appropriately depending on the type of the event.
Is there an appropriate language construct, or a way to utilize tasks, to be used in this case?
Should I make use of basic multi-threading mechanisms, like semaphores, instead?
Alternatively, should I abandon this approach and handle state changes with events, securing the state from being otherwise modified?
Thanks in advance.
I'd like to know the proper way to implement my own cold source (publisher) using the Mutiny library.
Let's say there is a huge file parser that should return lines as Multi<String> items according to the subscriber's consumption rate.
New lines should be read only after the previous ones have been processed, to optimize memory usage, while buffering a couple of hundred items to avoid consumer idling.
I know about the Multi.createFrom().emitter() factory method, but with it I can't see a convenient way to implement backpressure.
Does Mutiny have an idiomatic way to create cold sources that produce the next items only when requested by the downstream, or am I supposed to implement my own Publisher using the Java Reactive Streams API and then wrap it in a Multi?
You can use Multi.createFrom().generator(...).
The generator function is called for every request, and you can pass a "state" to remember where you are, typically an Iterator.
This is the opposite of the emitter approach (which does not check for requests but has a backpressure strategy attached to it).
If you need more fine-grained backpressure support, you would need to implement a Publisher.
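For illustration, here is a minimal sketch of that generator approach, assuming the lines come from a file opened with a BufferedReader; the file name and variable names are placeholders, not from the question:

import io.smallrye.mutiny.Multi;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

Multi<String> lines = Multi.createFrom().generator(
        () -> {
            try {
                // The reader is the generator "state" carried between calls.
                return Files.newBufferedReader(Path.of("lines.txt"));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        },
        (reader, emitter) -> {
            try {
                String line = reader.readLine();
                if (line == null) {
                    reader.close();
                    emitter.complete();   // end of file: complete the stream
                } else {
                    emitter.emit(line);   // one item per downstream request
                }
            } catch (IOException e) {
                emitter.fail(e);
            }
            return reader;                // carried over to the next call
        });

Each downstream request triggers another call of the function, so lines are only read as fast as the consumer asks for them.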
Let's say I have a stage controller and I want to write a method to move the stage. I want to be able to have the method either return after the stage has physically completed the move, or return as soon as the move has started. For any kind of external control of hardware, I typically write async methods with a Task return. This way, users can await the completion of the task, e.g. await the stage finishing its move, or just call the move method and await the returned task at a later point if necessary.
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine whether the operation has completed? People I talk to seem to have an issue with using async methods, mostly because they feel it is too indeterminate for hardware control.
Is this the right approach for controlling external hardware? Should these kinds of methods be written synchronously, with separate methods used to determine whether the operation has completed?
I hesitate to use async for any kind of system that is driven by external forces. One pattern I've seen a lot is people trying to use tasks to represent "the user pressed this button". Your example reminds me of that, but with external hardware in place of a person.
The problem with these kinds of approaches is twofold. First, it restricts you to a very linear logic flow. Second, it doesn't easily provide results other than success/fail. What if the hardware does something other than what was instructed? How easy is it to do logic that tries to do A but then times out waiting for state A' to be reached so it tries to do B?
Bear in mind that tasks must be completed. While it's possible to handle this using something like task cancellation (or hardware-specific exceptions), that can considerably complicate the logic code. Particularly when you consider timeouts, retries, and fallback logic.
So, I generally avoid using tasks for modeling that kind of domain. Something like an observable may be a better fit, or even just a Channel of state updates. Both of those permit the hardware to "push" its state and allows the logic code to respond appropriately, usually with a state machine of its own.
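As a rough sketch of that push-based shape (shown in Java for illustration, with all names hypothetical): the hardware layer pushes state updates into a queue, and the control logic consumes them with its own timeouts and fallbacks.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

enum StageState { IDLE, MOVING, AT_TARGET, FAULT }

class MoveLogic {
    // The hardware layer offers every observed state change into this queue.
    private final BlockingQueue<StageState> stateUpdates = new LinkedBlockingQueue<>();

    void onHardwareStateChanged(StageState newState) {
        stateUpdates.offer(newState);
    }

    // Logic layer: issue the command, then react to whatever actually happens,
    // including timeouts and states other than the one requested.
    void moveTo(double position) throws InterruptedException {
        startMove(position);                                    // returns immediately
        StageState s = stateUpdates.poll(5, TimeUnit.SECONDS);  // sample with a timeout
        if (s == StageState.AT_TARGET) {
            // success path
        } else {
            stopStage();                                        // timeout or FAULT: fall back
        }
    }

    private void startMove(double position) { /* send the move command */ }
    private void stopStage() { /* send the stop command */ }
}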
I'm trying to convert an application from using standard point-to-point MPI calls (e.g., MPI_Isend, MPI_Irecv) to using MPI-3's one-sided calls. My goal is to improve performance on my hardware, a system with InfiniBand support and an MPI implementation that's optimized for RDMA. I've been told that the hardware performs particularly well with passive synchronization mode, as opposed to active synchronization (i.e., Post-Start-Complete-Wait).
However, even after reading through the MPI standard documentation and examples, I'm confused about how to actually use the calls. For context, my program has a setup phase in which I already know the communication pattern, the send buffers, and the final buffer on the receiver. So it's straightforward to set up a window and use it.
Specifically, with passive synchronization, I'm confused about when the "receiver" knows that the data in the window has been written by the sender. What I want to do is have the sender produce the message data, call MPI_Win_lock on the window, do an MPI_Put, and then wait for completion with MPI_Win_unlock. But what is an efficient / recommended way for the "receiver" (the window target) to know when the message data has been written? Similarly, given that the communication pattern is iterated and the same receive buffer (the target's buffer) is used multiple times, how do I know that the receiver is done consuming the buffer so it can be reused?
I can envision a couple of approaches:
I can use an MPI_Barrier after the MPI_Win_unlock and before the receiver accesses the data. (This would work, but I'm skeptical that it would yield better performance than active synchronization.)
I could possibly use MPI_Win_lock and MPI_Win_unlock on the receiver (target) as well, locking the window while the target is actually using the data so that an access epoch can't start on the origin (but is that how it works? I've read that lock and unlock don't create critical sections in the traditional sense).
Some sort of home-grown approach where the receiver polls for some sort of flag or nonce to be written, knowing the data is available when that happens.
Docs for MPI_Win_lock: https://www.open-mpi.org/doc/v3.0/man3/MPI_Win_lock.3.php
In general, how does a programmer synchronize with MPI_Win_lock and MPI_Win_unlock in a way that's any more efficient than the active synchronization approach? It does feel like I need to just use Post-Start-Complete-Wait, but I'm hoping you can help me find a way to try passive synchronization as well.
I'm writing an app using the RxAndroidBle library (which is great), but due to a requirement of the device vendor we must be able to cancel all operations after the device has reached a certain state. Is there a way to cancel any read/write operation that has been sent to the RxBleConnection?
Any ideas?
Thanks!
The library uses an internal ConnectionOperationQueue which tries to remove operations that have not yet reached the execution state. A concrete implementation would vary case by case, but with RxJava it should be fairly easy to achieve:
// First create a Completable that will complete when the device reaches the state
Completable deviceReachedACertainState = ...;
// Then use it for every operation that you want to cancel (unsubscribe) when that state happens
Observable<byte[]> cancellableRead = rxBleConnection
        .readCharacteristic(characteristicUuid)                // UUID of the characteristic to read
        .takeUntil(deviceReachedACertainState.toObservable());
Keep in mind that operations that were already taken off the queue for execution will not be cancelled.
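For completeness, one possible way to build that Completable (purely illustrative, assuming RxAndroidBle 1.x / RxJava 1; the status characteristic UUID and the state value are assumptions, not part of the original answer):

// Hypothetical: complete when a status characteristic notifies the "certain state" value.
Completable deviceReachedACertainState = rxBleConnection
        .setupNotification(statusCharacteristicUuid)
        .flatMap(notifications -> notifications)    // unwrap Observable<Observable<byte[]>>
        .filter(bytes -> bytes.length > 0 && bytes[0] == CERTAIN_STATE_VALUE)
        .take(1)
        .toCompletable();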
Assume that I have a function called PlaceOrder which, when called, inserts the order details into a local DB and puts a message (the order details) onto a TIBCO EMS queue.
Once the message is received, a TIBCO BW process then invokes another system (say, ExternalSystem) to pass on the order details.
Now, the way I wrote my integration test is:
Call PlaceOrder.
Sleep, and check that the details exist in the local DB.
Sleep, and check that the details exist in ExternalSystem.
Is the above approach correct? The test gives me confidence that the end-to-end integration is working, but is there a better way to test this scenario?
The problem you describe is quite common, and your approach is a very typical solution.
The problem with this solution is that if the delay is too short, your tests may sometimes pass and sometimes fail; if the delay is very long, you're just wasting time waiting, and with many tests that can add up to a lot of delay. Unless you can get some signal telling you the order has arrived in the database, you just have to wait.
You can reduce the delay by doing lots of checks at short intervals. If your order is not there after a timeout, then you fail the test.
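A rough sketch of that polling loop (the helper and the orderDao / externalSystemClient names are hypothetical, not from the question):

import java.time.Duration;
import java.util.function.Supplier;

// Sample a condition at short intervals and fail only after an overall timeout.
static void awaitTrue(Supplier<Boolean> condition, Duration timeout, Duration interval)
        throws InterruptedException {
    long deadline = System.nanoTime() + timeout.toNanos();
    while (System.nanoTime() < deadline) {
        if (condition.get()) {
            return;                          // condition met: stop waiting early
        }
        Thread.sleep(interval.toMillis());   // short pause between samples
    }
    throw new AssertionError("Condition not met within " + timeout);
}

// In the test:
// placeOrder(order);
// awaitTrue(() -> orderDao.exists(order.getId()), Duration.ofSeconds(10), Duration.ofMillis(200));
// awaitTrue(() -> externalSystemClient.hasOrder(order.getId()), Duration.ofSeconds(30), Duration.ofMillis(500));

Libraries such as Awaitility provide this kind of polling-with-timeout out of the box.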
In "Growing Object-Oriented Software, Guided by Tests"*, there is a chapter on this very subject, so you might want to get a copy if you will be doing a lot of this sort of testing.
"There are two ways a test can observe the system: by sampling its observable
state or by listening for events that it sends out. Of these, sampling is
often the only option because many systems don’t send any monitoring
events. It’s quite common for a test to include both techniques to interact
with different “ends” of its system"
(*) http://my.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321574442