XCTest: how to check whether an asynchronous element exists

I work on test automation for an app that communicates with a server. The app has 7 pre-defined strings. Depending on the info the server returns, which is not deterministic and depends on external factors, the app places one to three of those seven strings in a table view as hittable static texts. The user then chooses which of those strings to tap.
To automate this test I need an asynchronous way to determine in the test code which of the 7 pre-defined strings actually appear on the screen.
I cannot use element.exists because it takes time for the static texts to appear, and I do not want to call sleep() because that would slow down the test.
So I tried to use XCTestExpectation, but ran into a problem: XCTest always fails when waitForExpectationsWithTimeout() times out.
To illustrate the problem I wrote a simple test program:
func testExample() {
    let element = XCUIApplication().staticTexts["Email"]
    let gotText = haveElement(element)
    print("Got text: \(gotText)")
}

func haveElement(element: XCUIElement) -> Bool {
    var elementExists = true
    let expectation = self.expectationForPredicate(
        NSPredicate(format: "exists == true"),
        evaluatedWithObject: element,
        handler: nil)

    self.waitForExpectationsWithTimeout(NSTimeInterval(5)) { error in
        elementExists = error == nil
    }
    return elementExists
}
The test always fails with
Assertion Failure: Asynchronous wait failed: Exceeded timeout of 5 seconds, with unfulfilled expectations: "Expect predicate `exists == 1` for object "Email" StaticText".
I also tried
func haveElement(element: XCUIElement) -> Bool {
    var elementExists = false
    let actionExpectation = self.expectationWithDescription("Expected element")

    dispatch_async(dispatch_get_main_queue()) {
        while true {
            if element.exists {
                actionExpectation.fulfill()
                elementExists = true
                break
            } else {
                sleep(1)
            }
        }
    }
    self.waitForExpectationsWithTimeout(NSTimeInterval(5)) { error in
        elementExists = error == nil
    }
    return elementExists
}
In this case the test always fails with a "Stall on main thread." error.
So the question is: how do I check for the presence of an asynchronous UI element that may or may not appear within a specified time, without the test failing on timeout?
Thank you.

You're overcomplicating the test. If you're communicating with a server, there is unnecessary variability in your tests -- my suggestion is to use stubbed network data for each case.
You can get a brief introduction to stubbing network data here:
http://masilotti.com/ui-testing-stub-network-data/
You will eliminate the randomness in the test caused by the server's response time as well as the randomness of which string appears. Create test cases that cover each case (i.e., how the app responds when you tap on each individual string).
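As a rough illustration of the idea (the flag name and the stubbing mechanism are assumptions, not taken from the linked article, and the code uses current Swift/XCTest syntax rather than the Swift 2 syntax of the question), the UI test could pass a launch argument telling the app which canned response to use. With a deterministic stub the expected static text is known up front, and waitForExistence(timeout:) from newer XCTest releases returns a Bool instead of failing the test when it times out:

import XCTest

class StubbedStringsUITests: XCTestCase {
    func testShowsStubbedEmailOption() {
        let app = XCUIApplication()
        // Hypothetical flag; the app would check it and load a canned server response.
        app.launchArguments += ["-UITestStubbedResponse", "emailCase"]
        app.launch()

        // waitForExistence(timeout:) polls for up to 5 seconds and returns a Bool,
        // so a timeout by itself does not fail the test.
        let emailText = app.staticTexts["Email"]
        XCTAssertTrue(emailText.waitForExistence(timeout: 5))
    }
}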

Related

How to run multiple intervals in tauri with tokio

Currently, I am building a small application with Rust and Tauri, but I've got the following issue that I need to solve.
Things that I want to do simultaneously:
Checking every 10 sec if a specific application is running
Polling data from SharedMemory via winapi every second
Both of them work fine, but I tried to refactor things and now I've got the following problem:
When my frontend sends me an event that the application is ready (or inside .on_page_load()), I want to start both processes I mentioned before:
#[tauri::command]
async fn app_ready(window: tauri::Window) {
    let is_alive = false; // I think this needs to be a mutex or a mutex that is wrapped around Arc::new()
    tokio::join!(
        poll_acc_process(&window, &is_alive),
        handle_phycics(&window, &is_alive),
    );
}
Visual Studio Code complains with the following: future cannot be sent between threads safely within impl futures::Future<Output = ()>, because the trait std::marker::Send is not implemented for *mut c_void.
c_void here is the handle returned by CreateFileMappingW from the winapi crate.
async fn poll_acc_process(window: &Window, is_alive: &bool) {
    loop {
        window.emit("acc_process", is_alive).unwrap();
        tokio::time::sleep(time::Duration::from_secs(10));
    }
}

async fn handle_phycics(window: &Window, is_alive: &bool) {
    while is_alive {
        let s_handle = get_statics_mapped_file(); // _handle represents c_void here
        let s_memory = get_statics_mapview_of_file(s_handle);
        window
            .emit("update_statistics", Statics::new_from_memory(s_memory))
            .unwrap();
        let p_handle = get_physics_mapped_file(); // _handle represents c_void here
        let physics = get_physics_mapview_of_file(p_handle);
        window.emit("update_physics", physics).unwrap();
        if physics.current_max_rpm != 0 {
            let g_handle = get_graphics_mapped_file(); // _handle represents c_void here
            let g_memory = get_graphics_mapview_of_file(g_handle);
            window
                .emit("update_graphics", Graphics::new_from_mem(g_memory))
                .unwrap();
        }
        tokio::time::sleep(time::Duration::from_secs(1)).await;
    }
}
Is it possible to solve my problem somehow this way or should I try another approach?
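For the shared is_alive flag that the comment in app_ready alludes to, one common shape is an Arc<AtomicBool> cloned into each spawned task. The following is only a minimal sketch of that flag-sharing part, with window.emit and the winapi shared-memory calls left out (the *mut c_void handle is a separate Send problem and is not addressed here); task names and durations are illustrative:

use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::time::Duration;

#[tokio::main]
async fn main() {
    let is_alive = Arc::new(AtomicBool::new(true));

    let process_poller = {
        let is_alive = Arc::clone(&is_alive);
        tokio::spawn(async move {
            for _ in 0..3 {
                // e.g. check every 10 s whether the target process is still running
                tokio::time::sleep(Duration::from_secs(10)).await;
            }
            // pretend the process exited so the other loop stops
            is_alive.store(false, Ordering::Relaxed);
        })
    };

    let physics_loop = {
        let is_alive = Arc::clone(&is_alive);
        tokio::spawn(async move {
            while is_alive.load(Ordering::Relaxed) {
                // read shared memory and emit events here, once per second
                tokio::time::sleep(Duration::from_secs(1)).await;
            }
        })
    };

    let _ = tokio::join!(process_poller, physics_loop);
}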

Type mismatch in async method

I have an asynchronous method I'm writing which is supposed to asynchronously query for a port until it finds one, or time out after 5 minutes:
member this.GetPort(): Async<Port> = this._GetPort(DateTime.Now)

member this._GetPort(startTime: DateTime): Async<Port> = async {
    match this._TryGetOpenPort() with
    | Some(port) -> port
    | None -> do
        if (DateTime.Now - startTime).TotalMinutes >= 5 then
            raise (Exception "Unable to open a port")
        else
            do! Async.Sleep(100)
        let! result = this._GetPort(startTime)
        result}

member this._TryGetOpenPort(): Option<Port> =
    // etc.
However, I'm getting some strange type inconsistencies in _GetPort; the compiler says I'm returning a type of Async<unit> instead of Async<Port>.
It's a little unintuitive, but the way to make your code work would be this:
member private this.GetPort(startTime: DateTime) =
    async {
        match this.TryGetOpenPort() with
        | Some port ->
            return port
        | None ->
            if (DateTime.Now - startTime).TotalMinutes >= 5 then
                raise (Exception "Unable to open a port")

            do! Async.Sleep(100)
            let! result = this.GetPort(startTime)
            return result
    }

member private this.TryGetOpenPort() = failwith "yeet" // TODO
I took the liberty of cleaning up a few things and making the member private, since that seems to be what you're largely going after here with a more detailed internal way to get the port.
Your code wasn't compiling because you were inconsistent in what you were returning from the computation:
In the case of Some(port) you were missing a return keyword, which is required to lift the value back into an Async<Port>.
Your if expression where you raise an exception had an else branch, but you weren't returning from both branches. In this case, since you clearly don't wish to return anything and just raise an exception, you can omit the else and make it imperative program flow just like in non-async code.
The other thing you may wish to consider down the road is if throwing an exception is what you want, or if just returning a Result<T,Err> or an option is the right call. Exceptions aren't inherently bad, but often a lot of F# programming leads to avoiding their use if there's a good way to ascribe meaning to a type that wraps your return value.
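For completeness, a sketch of what the Result-based variant mentioned above might look like (the member name TryGetPortResult and the string error type are illustrative, not part of the original code):

member private this.TryGetPortResult(startTime: DateTime) : Async<Result<Port, string>> =
    async {
        match this.TryGetOpenPort() with
        | Some port -> return Ok port
        | None ->
            if (DateTime.Now - startTime).TotalMinutes >= 5 then
                // no exception: the failure is carried in the return type
                return Error "Unable to open a port within 5 minutes"
            else
                do! Async.Sleep(100)
                return! this.TryGetPortResult(startTime)
    }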

Shouldn't I call next on future::stream::Unfold?

I'm calling next multiple times on a Stream returned by this function: https://github.com/sdroege/rtsp-server/blob/96dbaf00a7111c775348430a64d6a60f16d66445/src/listener/message_socket.rs#L43:
pub(crate) fn async_read<R: AsyncRead + Unpin + Send>(
    read: R,
    max_size: usize,
) -> impl Stream<Item = Result<Message<Body>, ReadError>> + Send {
    //...
    futures::stream::unfold(Some(state), move |mut state| async move {
        //...
    })
}
Sometimes it works, but sometimes I get:
thread 'main' panicked at 'Unfold must not be polled after it returned `Poll::Ready(None)`', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.13/src/stream/unfold.rs:115:21
The error comes from https://docs.rs/futures-util/0.3.2/src/futures_util/stream/unfold.rs.html#112
but I couldn't understand why. Shouldn't I be free to call next on a Stream, in a loop?
This error is an error for a reason, as it likely means that you are doing something wrong: when a stream returns Poll::Ready(None), it means that the stream is completed (in a similar fashion to Iterator, as has been commented).
However, if you are still sure that this is what you want to do, you can call stream.fuse(): the fused stream silences the panic and simply keeps returning Poll::Ready(None) forever.
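For illustration, here is a self-contained toy version (not the rtsp-server stream, just a counter, assuming the futures 0.3 crate): without the fuse() call, the second next() after completion would hit exactly that panic; with it, the stream just yields None again.

use futures::{executor::block_on, pin_mut, stream, StreamExt};

fn main() {
    block_on(async {
        // A toy unfold stream that yields 0, 1, 2 and then finishes.
        let s = stream::unfold(0u32, |n| async move {
            if n < 3 { Some((n, n + 1)) } else { None }
        })
        .fuse(); // keep returning None after completion instead of panicking

        // The async block inside unfold is !Unpin, so pin the stream before calling next().
        pin_mut!(s);

        while let Some(item) = s.next().await {
            println!("got {}", item);
        }
        // Polling past the end is now harmless: a fused stream just yields None again.
        assert_eq!(s.next().await, None);
    });
}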

Kotlin - very frequent data removal and addition to a list causes NPE

I have a buffer that is actually an ArrayList<Object>.
This happens asynchronously:
The buffer list changes very frequently, 15-50 times per second. Whenever there's an update, I remove the first element by position with buffer.removeAt(0) and add the new value at the end with buffer.add(new).
At some point I call a function that does calculations with the buffer list, going through it element by element. Eventually I run into an NPE because an element has been removed asynchronously.
How do I solve this NPE? I was thinking of making a deep copy, but a deep copy would mean going through the buffer list and allocating data, which basically means that while I make the deep copy I can still run into the NPE.
How are problems like these solved?
How do I avoid the NPE?
What would be a more optimized approach, since this is going to consume a lot of memory?
Code:
private fun observeFrequentData() {
    frequentData.observe(owner, Observer { data ->
        if (accelerationData == null) return@Observer

        GlobalScope.launch {
            val a = data[0].toDouble()
            val b = data[1].toDouble()
            val c = a + b
            val timestamp = System.currentTimeMillis()
            val customObj = CustomObj(c, timestamp)

            if (buffer.size >= 5000) {
                buffer.removeAt(0)
            }
            buffer.add(acceleration)
        }
    })
}

fun getBuffer() {
    val mappedData = buffer.map { it.smth } // NPE, it == null
}
If you are doing lots of removals from index 0 and inserts at the end, then ArrayList is probably not the container to use.
You can consider using a LinkedList:
buffer.removeFirst();
and
buffer.add(acceleration);
Also note the following comments from the documentation regarding synchronization:
Note that this implementation is not synchronized. If multiple threads
access a linked list concurrently, and at least one of the threads
modifies the list structurally, it must be synchronized externally. (A
structural modification is any operation that adds or deletes one or
more elements; merely setting the value of an element is not a
structural modification.) This is typically accomplished by
synchronizing on some object that naturally encapsulates the list. If
no such object exists, the list should be "wrapped" using the
Collections.synchronizedList method. This is best done at creation
time, to prevent accidental unsynchronized access to the list:
List list = Collections.synchronizedList(new LinkedList(...));
Use synchronized on your piece of code, as @patrickf suggested.
To take care of performance, instead of making the method call itself synchronized, you can write just the 3 buffer-related lines of code (size, removeAt and add) in a synchronized block.
Something like:
// ...
synchronized(buffer) {
    if (buffer.size >= 5000) {
        buffer.removeAt(0)
    }
    buffer.add(acceleration)
}
// ...
Hope this helps!
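Putting the pieces together, a minimal sketch of that idea (the class name, the lock choice, and the reduced CustomObj are made up for illustration, not taken from the question): a LinkedList-backed buffer where writers and readers synchronize on the same lock, and the reader takes a snapshot copy before mapping, so iteration never races with removals.

import java.util.LinkedList

// Stand-in for the question's CustomObj
data class CustomObj(val value: Double, val timestamp: Long)

class SampleBuffer(private val capacity: Int = 5000) {
    private val buffer = LinkedList<CustomObj>()

    fun add(item: CustomObj) = synchronized(buffer) {
        if (buffer.size >= capacity) buffer.removeFirst()
        buffer.add(item)
    }

    // Copy under the lock so callers can iterate without racing removeFirst()/add()
    fun snapshot(): List<CustomObj> = synchronized(buffer) { buffer.toList() }
}

fun main() {
    val sample = SampleBuffer()
    sample.add(CustomObj(1.0, System.currentTimeMillis()))
    val mapped = sample.snapshot().map { it.value } // safe: iterates a private copy
    println(mapped)
}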

Impossibility to iterate over a Map using Groovy within Jenkins Pipeline

We are trying to iterate over a Map, but without any success. We reduced our issue to this minimal example:
def map = [
    'monday': 'mon',
    'tuesday': 'tue',
]
If we try to iterate with:
map.each{ k, v -> println "${k}:${v}" }
Only the first entry is output: monday:mon
The alternatives we know of are not even able to enter the loop:
for (e in map)
{
    println "key = ${e.key}, value = ${e.value}"
}
or
for (Map.Entry<String, String> e: map.entrySet())
{
    println "key = ${e.key}, value = ${e.value}"
}
Both fail, showing only the exception java.io.NotSerializableException: java.util.LinkedHashMap$Entry (which could be related to an exception occurring while raising the 'real' exception, preventing us from knowing what happened).
We are using latest stable jenkins (2.19.1) with all plugins up-to-date as of today (2016/10/20).
Is there a solution to iterate over elements in a Map within a Jenkins pipeline Groovy script?
It's been some time since I played with this, but the best way to iterate through maps (and other containers) was with "classical" for loops, or the "for in" loop. See Bug: Mishandling of binary methods accepting Closure
To your specific problem: most (all?) pipeline DSL commands add a sequence point, by which I mean it is possible to save the state of the pipeline and resume it at a later time. Think of waiting for user input, for example; you want to keep this state even through a restart.
The result is that every live instance has to be serialized - but the standard Map iterator is unfortunately not serializable. Original Thread
The best solution I can come up with is defining a function to convert a Map into a list of serializable MapEntries. The function does not use any pipeline steps, so nothing has to be serializable within it.
@NonCPS
def mapToList(depmap) {
    def dlist = []
    for (def entry2 in depmap) {
        dlist.add(new java.util.AbstractMap.SimpleImmutableEntry(entry2.key, entry2.value))
    }
    dlist
}
This obviously has to be called for each map you want to iterate, but the upside is that the body of the loop stays the same.
for (def e in mapToList(map))
{
    println "key = ${e.key}, value = ${e.value}"
}
You will have to approve the SimpleImmutableEntry constructor the first time, or quite possibly you could work around that by placing the mapToList function in the workflow library.
Or much simpler:
for (def key in map.keySet()) {
    println "key = ${key}, value = ${map[key]}"
}
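Another workaround along the same lines (assuming the loop body itself does not need to call any pipeline steps) is to do the entire iteration inside a @NonCPS helper and hand only the finished string back to the pipeline; the helper name below is made up for illustration:

@NonCPS
def mapToString(map) {
    def sb = new StringBuilder()
    // The non-serializable iterator only lives inside this @NonCPS method,
    // so the CPS transform never has to serialize it.
    for (def e in map) {
        sb.append("key = ${e.key}, value = ${e.value}\n")
    }
    return sb.toString()
}

echo mapToString(map)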
