How to use regular time in ZIO-Test? - zio

While working on zio-spark, we cannot race effects in tests that are not using zio.Clock: the effects are not interrupted. Is there a way to fix that?
package zio.spark.effect

import zio._
import zio.test.TestAspect.timeout
import zio.test._

object WeirdClocks extends DefaultRunnableSpec {

  def wait(seconds: Int): UIO[Int] = UIO(Thread.sleep(seconds * 1000)).as(seconds)

  override def spec: ZSpec[TestEnvironment, Any] = suite("clock")(
    test("raceTest") {
      wait(5).race(wait(15)).map(n => assertTrue(n == 5))
    } @@ timeout(10.seconds)
  )
}

This is expected behavior and doesn't have anything to do with ZIO Test or the Clock.
Interruption in ZIO does not return until the effect has been successfully interrupted, which is an important guarantee for resource safety. In wait(5).race(wait(15)), wait(5) wins the race after 5 seconds. At that point race attempts to interrupt wait(15). However, interruption normally occurs "between" effects and wait(15) is a single block of side effecting code so there is no way to safely interrupt it in general. As a result, interruption suspends until wait(15) has completed execution, but by this time 15 seconds have elapsed and the test has already timed out.
If you don't want to wait for interruption to complete, you can use the disconnect operator, for example wait(5).race(wait(15).disconnect). With this change your test will pass as written. You can also use attemptBlockingInterrupt to direct the ZIO runtime to attempt to interrupt a single block of side effecting code like this by interrupting the underlying thread, though this is relatively heavyweight.
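For reference, a minimal sketch of the test from the question with only the disconnect change applied:

    test("raceTest") {
      wait(5).race(wait(15).disconnect).map(n => assertTrue(n == 5))
    } @@ timeout(10.seconds)

With wait(15) disconnected, the race completes as soon as wait(5) succeeds, the interruption of the blocking sleep continues in the background, and the 10-second timeout is no longer hit.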

Related

Single threaded asynchronous event loop with `winit`

I'm trying to build an NES emulator using winit, which entails building a game loop which should run exactly 60 times per second.
At first, I used std::thread to create a separate thread where the game loop would run and wait 16 milliseconds before running again. This worked quite well, until I tried to compile the program again targeting WebAssembly. I then found out that both winit::window::Window and winit::event_loop::EventLoopProxy are not Send when targeting Wasm, and that std::thread::spawn panics in Wasm.
After some struggle, I decided to try to do the same thing using task::spawn_local from one of the main asynchronous runtimes. Ultimately, I went with async_std.
I'm not used to asynchronous programming, so I'm not even sure if what I'm trying to do could work.
My idea is to do something like this:
use winit::{window::WindowBuilder, event_loop::EventLoop};
use std::time::Duration;

fn main() {
    let event_loop = EventLoop::new();
    let _window = WindowBuilder::new()
        .build(&event_loop);

    async_std::task::spawn_local(async {
        // game loop goes here
        loop {
            // [update game state]
            // [update frame buffer]
            // [send render event with EventLoopProxy]
            async_std::task::sleep(Duration::from_millis(16)).await;
            // ^ note: I'll be using a different sleep function with Wasm
        }
    });

    event_loop.run(move |event, _, control_flow| {
        control_flow.set_wait();
        match event {
            // ...
            _ => ()
        }
    });
}
The problem with this approach is that the game loop will never run. If I'm not mistaken, some asynchronous code in the main thread would need to be blocked (by calling .await) for the runtime to poll other Futures, such as the one spawned by the spawn_local function. I can't do this easily, since event_loop.run is not asynchronous.
Having time to await other events shouldn't be a problem, since the control flow is set to wait.
Testing this on native code, nothing inside the game loop ever runs. Testing this on Wasm code (with wasm_timer::Delay as the sleep function), the game loop does run, but at a very low framerate and with long intervals of halting.
Having explained my situation, I would like to ask: is it possible to do what I'm trying to do, and if it is, how would I approach it? I will also accept answers telling me how I could try to do this differently, such as by using web workers.
Thanks in advance!

How to make async function yield on block?

I just started learning asynchronous Rust, so this is probably not a difficult question to answer; however, I am scratching my head here.
I am not trying to run tasks in parallel yet, only trying to get them to run concurrently.
According to the guide at https://rust-lang.github.io/async-book/,
The futures::join macro makes it possible to wait for multiple different futures to complete while executing them all concurrently.
So when I create 2 Futures, I should be able to "await" both of them at once. It also states that
Whereas calling a blocking function in a synchronous method would block the whole thread, blocked Futures will yield control of the thread, allowing other Futures to run.
From what I understand here, if I await multiple Futures with join!, should the first one be blocked, the second one will start running.
So I made a very simple example where I created 2 async fns and tried to join! both, making sure the first one gets blocked. I used a mpsc::channel for the blocking, since the docs stated that thread::sleep() should not be used in async fns and that recv()
will always block the current thread if there is no data available
However, the behavior is not what I expected, as calling the blocking function does not yield control of the thread and allow the other Future to run, like I would expect from the second quote I provided. Instead, it just waits until it is no longer blocked, finishes the first Future and only then starts the second. Pretty much as if they were synchronous and I had just called one after the other.
My complete example code:
use std::{thread::{self}, sync::{mpsc::{self, Sender, Receiver}}, time::Duration};
use futures::executor; // added futures = "0.3" in cargo.toml dependencies

fn main() {
    let fut = main_async();
    executor::block_on(fut);
}

async fn main_async() {
    let (sender, receiver) = mpsc::channel();
    // this thread is just here so the f1 function gets blocked by something and can later resume
    let thread_handle = std::thread::spawn(move || {
        wait_send_function(sender);
    });
    let f1 = f1(receiver);
    let f2 = f2();
    futures::join!(f1, f2);
    thread_handle.join().unwrap();
}

fn wait_send_function(sender: Sender<i32>) {
    thread::sleep(Duration::from_millis(5000));
    sender.send(1234).unwrap();
}

async fn f1(receiver: Receiver<i32>) {
    println!("starting f1");
    let new_nmbr = receiver.recv().unwrap(); // I would expect f2 to start now, since this is blocking
    println!("Received nmbr is: {}", new_nmbr);
}

async fn f2() {
    println!("starting f2");
}
And the output is simply:
starting f1
Received nmbr is: 1234
starting f2
My question is what am I missing here, why does f2 only start after f1 is completed and what would I need to do to get the behavior I want (completing f2 first if f1 is blocked and then waiting for f1)?
Maybe the book is a little misleading, but when it refers to "a blocked future", it does not mean it in the sense of blocking synchronous code (if that were the case, there would be no problem using std::thread::sleep()); rather, it means that the future has returned Poll::Pending and is waiting to be polled again by the executor.
Thus, a std::sync::mpsc channel that blocks the thread will not have the desired effect (definitely not on a single-threaded executor like the one in futures, but it's a bad idea on multi-threaded executors too). Use futures::channel::mpsc and everything will work.
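For illustration, a minimal sketch of that change applied to the program from the question, with the channel swapped for futures::channel::mpsc:

use std::{thread, time::Duration};

use futures::channel::mpsc::{unbounded, UnboundedReceiver, UnboundedSender};
use futures::{executor, StreamExt};

fn main() {
    executor::block_on(main_async());
}

async fn main_async() {
    let (sender, receiver) = unbounded();
    // This thread only exists so that f1 has something to wait for, as in the question.
    let thread_handle = thread::spawn(move || wait_send_function(sender));
    let f1 = f1(receiver);
    let f2 = f2();
    futures::join!(f1, f2);
    thread_handle.join().unwrap();
}

fn wait_send_function(sender: UnboundedSender<i32>) {
    thread::sleep(Duration::from_millis(5000));
    // unbounded_send is synchronous, so it can be called from a plain thread.
    sender.unbounded_send(1234).unwrap();
}

async fn f1(mut receiver: UnboundedReceiver<i32>) {
    println!("starting f1");
    // next().await suspends this future instead of blocking the thread,
    // so the executor is free to poll f2 in the meantime.
    let new_nmbr = receiver.next().await.unwrap();
    println!("Received nmbr is: {}", new_nmbr);
}

async fn f2() {
    println!("starting f2");
}

Here f1 suspends at next().await instead of blocking the thread, so "starting f2" is printed immediately and the received number follows once the background thread sends it.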

How to make Airflow Sensors succeed given it meets a timeout?

Referring to this, Airflow sensors allow us to check a criterion before running the next tasks. Is there a way to successfully terminate the sensor (mark it as success) when a user sets a timeout and another flag for it?
In my use case, I have to check a condition via a sensor, but only during a particular time frame, after which I would want the DAG / following tasks to run normally.
You can do that by creating a custom sensor class. You will need to override the poke function and put the logic you want there.
For example:
from airflow.sensors.sql import SqlSensor

class MySqlSensor(SqlSensor):

    def is_time_frame(self):
        # TODO: implement a function that returns True if we want to ignore the sensor
        ...

    def poke(self, context):
        if self.is_time_frame():
            return True
        return super().poke(context)
In this example, when the sensor pokes it first checks the time window. If the current time is within that window, the sensor returns True and exits. In any other case the sensor does its normal work - in this specific example, running a SQL query until the query returns a truthy result.
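For illustration, one hypothetical way to fill in is_time_frame, assuming that after 17:00 UTC the condition no longer matters and the downstream tasks should just run; the actual rule depends on your use case:

from datetime import datetime, time

from airflow.sensors.sql import SqlSensor


class MySqlSensor(SqlSensor):

    def is_time_frame(self):
        # Hypothetical rule: after 17:00 UTC we no longer care about the condition,
        # so the sensor should be ignored and the following tasks allowed to run.
        return datetime.utcnow().time() >= time(17, 0)

    def poke(self, context):
        if self.is_time_frame():
            return True
        return super().poke(context)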

Background URLSession on watchOS - what is the cycle?

I have a class with the delegates for a URLSession. I intend to use it with a background configuration. I understand that the handlers are called when a certain event happens, such as didFinishDownloadingTo.
However, I do have the handle function on my ExtensionDelegate class:
func handle(_ handleBackgroundTasks: Set<WKRefreshBackgroundTask>) {
    // Sent when the system needs to launch the application in the background
    // to process tasks. Tasks arrive in a set, so loop through and process each one.
    for task in handleBackgroundTasks {
        switch task {
        case let urlSessionTask as WKURLSessionRefreshBackgroundTask:
I wonder: where should I handle the data I receive after a download? At the didFinishDownloadingTo or at that function on my ExtensionDelegate class, on the appropriate case of the switch statement?
Another question on the same cycle: I read everywhere that one must remember to setTaskCompleted() after going through the background tasks. But I read elsewhere that one should not set a task as completed if the scheduled data transfer hasn't finished. How do I check that?
There is a very good explanation here.
It worked when I kept an array of my WKURLSessionRefreshBackgroundTask instances. Then, at the end of my didFinishDownloadingTo, I get the task in that array that has the same sessionIdentifier as the current session.configuration.identifier, and set it as completed.
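A rough sketch of that flow, assuming a single object serves as both the WKExtensionDelegate and the URLSession delegate (names and details are illustrative, not a drop-in implementation):

import Foundation
import WatchKit

class ExtensionDelegate: NSObject, WKExtensionDelegate, URLSessionDownloadDelegate {

    private var pendingTasks: [WKURLSessionRefreshBackgroundTask] = []

    func handle(_ backgroundTasks: Set<WKRefreshBackgroundTask>) {
        for task in backgroundTasks {
            switch task {
            case let urlSessionTask as WKURLSessionRefreshBackgroundTask:
                // Keep the task around and re-create the background session so that
                // its delegate callbacks (e.g. didFinishDownloadingTo) are delivered.
                pendingTasks.append(urlSessionTask)
                let config = URLSessionConfiguration.background(withIdentifier: urlSessionTask.sessionIdentifier)
                _ = URLSession(configuration: config, delegate: self, delegateQueue: nil)
            default:
                task.setTaskCompleted()
            }
        }
    }

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Process the downloaded file here (move it out of the temporary location, parse it, ...).
        // Only then complete the refresh task whose identifier matches this session.
        if let identifier = session.configuration.identifier,
           let index = pendingTasks.firstIndex(where: { $0.sessionIdentifier == identifier }) {
            pendingTasks[index].setTaskCompleted()
            pendingTasks.remove(at: index)
        }
    }
}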

qt connectNotify usage

I have a Qt class instance, called C (C inherits QObject), that emits a signal S.
In my program, instances X of other Qt classes are created and destroyed while the program runs. These other classes connect to and disconnect from S, i.e. they run:
connect(C,SIGNAL(S()), this, SLOT(my_func())); // <this> is an instance of X
or
disconnect(C,SIGNAL(S()), this, SLOT(my_func()));
In class C, the calculation of whether S should be emitted (and of the data associated with it - not shown here) is rather complicated, so I would like the instance of class C (which emits the signal) to be notified when one (or more) objects are connected (listening) to S, or when all of them are disconnected.
I have read about the connectNotify and disconnectNotify functions, but their usage is discouraged. Besides, the documentation does not state very clearly whether there is a one-to-one relationship between the number of (dis)connectNotify calls and the number of "listeners" to the signal (or can a single connectNotify be called for more than one listener?).
Can I just count up (count++) for each connectNotify call and down (count--) for each disconnectNotify call, and react to a non-zero value?
Any better way to do this?
First, I think you've got it right that connectNotify and disconnectNotify can be used for this purpose - each connect event will be counted properly, even if it is a duplicate from the same object.
You can also double check this with QObject::receivers
int QObject::receivers(const char *signal) const [protected]

Returns the number of receivers connected to the signal. Since both slots and signals can be used as receivers for signals, and the same connections can be made many times, the number of receivers is the same as the number of connections made from this signal. When calling this function, you can use the SIGNAL() macro to pass a specific signal:

if (receivers(SIGNAL(valueChanged(QByteArray))) > 0) {
    QByteArray data;
    get_the_value(&data); // expensive operation
    emit valueChanged(data);
}

As the code snippet above illustrates, you can use this function to avoid emitting a signal that nobody listens to.

Warning: This function violates the object-oriented principle of modularity. However, it might be useful when you need to perform expensive initialization only if something is connected to a signal.
My suggestion would be to write a simple test program. Override connectNotify and disconnectNotify to increment/decrement a counter, but also use receivers to verify that the counter is correct. Try connecting multiple times, disconnecting multiple times, disconnecting even if there is no connection, etc.
Something to be careful of: connect and disconnect are thread-safe; I'm not sure if the matching Notify functions are safe also.
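As a minimal sketch of that counting approach, assuming Qt 5's QMetaMethod-based overloads of connectNotify/disconnectNotify (class and member names are illustrative, and the thread-safety caveat above still applies):

#include <QObject>
#include <QMetaMethod>

class C : public QObject {
    Q_OBJECT
signals:
    void S();

public:
    bool hasListeners() const { return m_listeners > 0; }

protected:
    void connectNotify(const QMetaMethod &signal) override {
        if (signal == QMetaMethod::fromSignal(&C::S))
            ++m_listeners;   // called once per connection, duplicates included
    }
    void disconnectNotify(const QMetaMethod &signal) override {
        // Note: when all signals are disconnected at once, an invalid QMetaMethod
        // is passed here, so this simple counter is only an approximation then.
        if (signal == QMetaMethod::fromSignal(&C::S) && m_listeners > 0)
            --m_listeners;
    }

private:
    int m_listeners = 0;
};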
Since Qt 5.0, you can do this more easily with the QObject::isSignalConnected function. Example from the documentation:
static const QMetaMethod valueChangedSignal = QMetaMethod::fromSignal(&MyObject::valueChanged);
if (isSignalConnected(valueChangedSignal)) {
    QByteArray data;
    data = get_the_value(); // expensive operation
    emit valueChanged(data);
}
