I'm struggling to run multiple async tasks under a shared, general timeout. The trick is that I need to process whatever was received within the timeout.
For example, the code below gets the value of both tasks when the timeout value is more than two seconds. But once the timeout is decreased (or, alternatively, the tasks take longer), only a TimeoutException is thrown and none of the task results are received.
def timeout = 3 // value in seconds
def t1 = task {
Thread.sleep(1000)
println 't1 done'
't1'
}
def t2 = task {
Thread.sleep(2000)
println 't2 done'
't2'
}
def results = whenAllBound( [t1, t2] ) { List l ->
println 'all done ' + l
l.join(', ')
}.get( timeout, SECONDS )
println "results $results"
Using join() instead of get() does not throw the TimeoutException, but then again it does not return the final results either; it just continues with the code after the timeout expires.
Either I don't understand the Dataflow structures fully/enough/correctly, I'm trying to use them incorrectly, or both.
Basically what I need is a sync block that triggers several async jobs with a common timeout, returning whatever responses were available when the timeout happened. A timeout is more of an exceptional case, but it does happen occasionally for each of the tasks and should not affect overall processing.
Perhaps this way could work for you:
whenAllBound( [t1, t2] ) { List l ->
println 'all done ' + l
l.join(', ')
}.join( timeout, java.util.concurrent.TimeUnit.SECONDS )
def results = [t1, t2].collect {it.bound ? it.get() : null}
println "results $results"
I'm calling next multiple times on a Stream returned by this function (https://github.com/sdroege/rtsp-server/blob/96dbaf00a7111c775348430a64d6a60f16d66445/src/listener/message_socket.rs#L43):
pub(crate) fn async_read<R: AsyncRead + Unpin + Send>(
read: R,
max_size: usize,
) -> impl Stream<Item = Result<Message<Body>, ReadError>> + Send {
//...
futures::stream::unfold(Some(state), move |mut state| async move {
//...
})
}
sometimes it works, but sometimes I get:
thread 'main' panicked at 'Unfold must not be polled after it returned `Poll::Ready(None)`', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.13/src/stream/unfold.rs:115:21
The error comes from https://docs.rs/futures-util/0.3.2/src/futures_util/stream/unfold.rs.html#112
but I couldn't understand why. Shouldn't I be free to call next on a Stream, in a loop?
This error is an error for a reason: it most likely means that you are doing something wrong. When a stream returns Poll::Ready(None), it means that the stream is completed (in a similar fashion to Iterator, as has been commented).
However, if you are sure that this is what you want to do, you can call .fuse() on the stream; the fused stream silences the error and simply keeps returning Poll::Ready(None) forever.
I was recently informed that in
async {
    return! async { return "hi" }
}
|> Async.RunSynchronously
|> printfn "%s"
the nested Async<'T> (async { return "hi" }) would not be sent to the thread pool for evaluation, whereas in
async {
    use ms = new MemoryStream [| 0x68uy; 0x69uy |]
    use sr = new StreamReader (ms)
    return! sr.ReadToEndAsync () |> Async.AwaitTask
}
|> Async.RunSynchronously
|> printfn "%s"
the nested Async<'T> (sr.ReadToEndAsync () |> Async.AwaitTask) would be. What is it about an Async<'T> that decides whether it's sent to the thread pool when it's executed in an asynchronous operation like let! or return!? In particular, how would you define one which is sent to the thread pool? What code do you have to include in the async block, or in the lambda passed into Async.FromContinuations?
TL;DR: It's not quite like that. The async itself doesn't "send" anything to the thread pool. All it does is just run continuations until they stop. And if one of those continuations decides to continue on a new thread - well, that's when thread switching happens.
Let's set up a small example to illustrate what happens:
open System
open System.Threading

let log str = printfn $"{str}: thread = {Thread.CurrentThread.ManagedThreadId}"

let f = async {
    log "1"
    let! x = async { log "2"; return 42 }
    log "3"
    do! Async.Sleep(TimeSpan.FromSeconds(3.0))
    log "4"
}

log "starting"
f |> Async.StartImmediate
log "started"
Console.ReadLine() |> ignore
If you run this script, it will print starting, then 1, 2, 3, then started, then wait 3 seconds, and then print 4; all of them except 4 will have the same thread ID. You can see that everything until Async.Sleep is executed synchronously on the same thread, but after that the async execution stops and the main program execution continues, printing started and then blocking on ReadLine. By the time Async.Sleep wakes up and wants to continue execution, the original thread is already blocked on ReadLine, so the async computation gets to continue running on a new one.
What's going on here? How does this work?
First, the async computation is structured in "continuation-passing style". This is a technique where a function doesn't return its result to the caller, but instead calls another function, passing the result as its parameter.
Let me illustrate with an example:
// "Normal" style:
let f x = x + 5
let g x = x * 2
printfn "%d" (f (g 3)) // prints 11
// Continuation-passing style:
let f x next = next (x + 5)
let g x next = next (x * 2)
g 3 (fun res1 -> f res1 (fun res2 -> printfn "%d" res2))
This is called "continuation-passing" because the next parameters are called "continuations" - i.e. they're functions that express how the program continues after calling f or g. And yes, this is exactly what Async.FromContinuations means.
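To make that concrete, here is a minimal sketch (my addition, not part of the original answer) of wrapping a callback-based API with Async.FromContinuations; legacyAdd5 is a hypothetical stand-in for any API that reports its result through a callback:
// legacyAdd5 is a made-up callback-based API, used only for illustration
let legacyAdd5 (x: int) (callback: int -> unit) = callback (x + 5)

let add5Async x =
    Async.FromContinuations (fun (onSuccess, onError, onCancel) ->
        // hand the success continuation to the callback-based API;
        // onError / onCancel would be wired to its failure paths
        legacyAdd5 x onSuccess)

// add5Async 3 |> Async.RunSynchronously evaluates to 8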
This seems very silly and roundabout on the surface, but it allows each function to decide when, how, or even whether its continuation happens. For example, our f function from above could be doing something asynchronous instead of just plainly returning the result:
let f x next = httpPost "http://calculator.com/add5" x next
Coding it in continuation-passing style allows such a function to avoid blocking the current thread while the request to calculator.com is in flight. What's wrong with blocking the thread, you ask? I'll refer you to the original answer that prompted your question in the first place.
Second, when you write those async { ... } blocks, the compiler gives you a little help. It takes what looks like a step-by-step imperative program and "unrolls" it into a series of continuation-passing calls. The "breaking" points for this unfolding are all the constructs that end with a bang - let!, do!, return!.
The above async block, for example, would look something like this (F#-ish pseudocode):
let return42 onDone =
log "2"
onDone 42
let f onDone =
log "1"
return42 (fun x ->
log "3"
Async.Sleep (3 seconds) (fun () ->
log "4"
onDone ()
)
)
Here, you can plainly see that the return42 function simply calls its continuation right away, thus making the whole thing from log "1" to log "3" completely synchronous, whereas the Async.Sleep function doesn't call its continuation right away, instead scheduling it to be run later (in 3 seconds) on the thread pool. That's where the thread switching happens.
And here, finally, lies the answer to your question: in order to have the async computation jump threads, your callback passed to Async.FromContinuations should do anything but call the success continuation immediately.
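As an illustration (again my sketch, not from the original answer), here is an Async built with Async.FromContinuations that schedules its success continuation on the thread pool instead of calling it inline; whatever follows it in the workflow resumes on a pool thread:
// assumes open System.Threading, as in the logging example above
let jumpToThreadPool =
    Async.FromContinuations (fun (onSuccess, _, _) ->
        // instead of calling onSuccess right away, schedule it;
        // the rest of the async workflow continues on a pool thread
        ThreadPool.QueueUserWorkItem (fun _ -> onSuccess ()) |> ignore)
An async { do! jumpToThreadPool; log "after" } block would log different thread IDs before and after the do!.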
A few notes for further investigation
The onDone technique in the above example is technically called "monadic bind", and indeed in real F# programs it's represented by the async.Bind method. This answer might also be of help understanding the concept.
The above is a bit of an oversimplification. In reality the async execution is a bit more complicated than that. Internally it uses a technique called "trampoline", which in plain terms is just a loop that runs a single thunk on every turn, but crucially, the running thunk can also "ask" it to run another thunk, and if it does, the loop will do so, and so on, forever, until the next thunk doesn't ask to run another thunk, and then the whole thing finally stops.
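For intuition only, a toy trampoline might look like the sketch below (illustrative, not F#'s actual implementation):
type Step =
    | Done
    | More of (unit -> Step)

let runTrampoline (start: unit -> Step) =
    // a flat loop stands in for nested calls, so the stack never grows
    let mutable step = start ()
    let mutable running = true
    while running do
        match step with
        | Done -> running <- false
        | More next -> step <- next ()

// prints 0, 1, 2 without deepening the stack
let rec count n =
    if n < 3 then More (fun () -> printfn "%d" n; count (n + 1)) else Done
runTrampoline (fun () -> count 0)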
I specifically used Async.StartImmediate to start the computation in my example, because it does just what it says on the tin: it starts running the computation immediately, right there. That's why everything ran on the same thread as the main program. There are many alternative starting functions in the Async module; for example, Async.Start will start the computation on the thread pool. The lines from log "1" to log "3" will still all happen synchronously, without thread switching between them, but on a different thread from log "starting" and log "started". In that case thread switching happens before the async computation even starts, so it doesn't count.
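For comparison, a one-line sketch of starting the same f from the example above on the thread pool instead:
f |> Async.Start // "1" through "3" now print on a pool thread, not the main one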
I want to process a collection of io-bound jobs concurrently, but bound/limit the number of outstanding (actively running) concurrent jobs.
Chunking is an easy way to increase concurrency, but creates bottlenecks if the items take varying amounts of time.
The way I found to do this has some issues (see 1 below). Is there a way to do this that avoids those issues while remaining comparably idiomatic and succinct?
1) Use a BlockingCollection (shown below). However, this leads to a solution in which the concurrency is generated by boundedSize number of "consumer" threads. I'm looking for a solution that doesn't require boundedSize threads to achieve boundedSize concurrent jobs (what if boundedSize is very large?). I didn't see how I could take an item, process it, and then signal completion. I can only take items... and since I don't want to rip through the whole list at once, the consumer needs to run its work synchronously.
open System.Collections.Concurrent

type JobNum = int

let RunConcurrentlyBounded (boundedSize: int) (start: JobNum) (finish: JobNum) (mkJob: JobNum -> Async<unit>) =
    // create a bounded BlockingCollection
    use bc = new BlockingCollection<Async<unit>>(boundedSize)
    // put async jobs on the BlockingCollection
    Async.Start(async {
        { start .. finish }
        |> Seq.map mkJob
        |> Seq.iter bc.Add
        bc.CompleteAdding()
    })
    // each consumer runs its jobs synchronously
    let mkConsumer (consumerId: int) = async { for job in bc.GetConsumingEnumerable() do do! job }
    // create `boundedSize` consumers in parallel
    { 1 .. boundedSize }
    |> Seq.map mkConsumer
    |> Async.Parallel
    |> Async.RunSynchronously
    |> ignore
let Test () =
    let boundedSize = 15
    let start = 1
    let finish = 50
    let mkJob jobNum = async {
        printfn "%A STARTED" jobNum
        do! Async.Sleep(5000)
        printfn "%A COMPLETED" jobNum
    }
    RunConcurrentlyBounded boundedSize start finish mkJob
I'm aware of TPL and mailbox processors, but I thought there might be something simple and robust that avoids creating a high number of threads.
Ideally there would just be one producer thread and one consumer thread; I suspect that BlockingCollection might not be the right concurrency primitive for such a case?
This seems as good as I'm going to get: using SemaphoreSlim.
I suppose the underlying ThreadPool is really controlling the concurrency here.
open System.Threading

let RunConcurrentlySemaphore (boundedSize: int) (start: JobNum) (finish: JobNum) (mkJob: JobNum -> Async<unit>) =
    use ss = new SemaphoreSlim(boundedSize)
    { start .. finish }
    |> Seq.map (mkJob >> fun job -> async {
        // wait for a slot before running the job, release it when done
        do! Async.AwaitTask(ss.WaitAsync())
        try do! job finally ss.Release() |> ignore
    })
    |> Async.Parallel
    |> Async.RunSynchronously
    |> ignore
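One more note (my addition, and an assumption that you are on a recent FSharp.Core; I believe 4.7 or later): Async.Parallel has an overload with an optional maxDegreeOfParallelism, which bounds the concurrency directly:
let RunConcurrentlyBuiltIn (boundedSize: int) (start: JobNum) (finish: JobNum) (mkJob: JobNum -> Async<unit>) =
    // the library throttles the computations itself; no semaphore needed
    Async.Parallel({ start .. finish } |> Seq.map mkJob, maxDegreeOfParallelism = boundedSize)
    |> Async.RunSynchronously
    |> ignore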
I work on test automation for an app that communicates with a server. The app has seven pre-defined strings. Depending on the info the server returns, which is not deterministic and depends on external factors, the app places one to three of those seven strings in a table view as hittable static texts. The user chooses which of those strings to tap.
To automate this test I need an asynchronous way to determine in the test code which of the 7 pre-defined strings actually appear on the screen.
I cannot use element.exists because it takes time for static texts to appear and I do not want to call sleep() because that would slow down the test.
So I tried to use XCTestExpectation but got a problem. XCTest always fails when waitForExpectationsWithTimeout() times out.
To illustrate the problem I wrote a simple test program:
func testExample() {
    let element = XCUIApplication().staticTexts["Email"]
    let gotText = haveElement(element)
    print("Got text: \(gotText)")
}

func haveElement(element: XCUIElement) -> Bool {
    var elementExists = true
    let expectation = self.expectationForPredicate(
        NSPredicate(format: "exists == true"),
        evaluatedWithObject: element,
        handler: nil)
    self.waitForExpectationsWithTimeout(NSTimeInterval(5)) { error in
        elementExists = error == nil
    }
    return elementExists
}
The test always fails with
Assertion Failure: Asynchronous wait failed: Exceeded timeout of 5 seconds, with unfulfilled expectations: "Expect predicate `exists == 1` for object "Email" StaticText".
I also tried
func haveElement(element: XCUIElement) -> Bool {
    var elementExists = false
    let actionExpectation = self.expectationWithDescription("Expected element")
    dispatch_async(dispatch_get_main_queue()) {
        while true {
            if element.exists {
                actionExpectation.fulfill()
                elementExists = true
                break
            } else {
                sleep(1)
            }
        }
    }
    self.waitForExpectationsWithTimeout(NSTimeInterval(5)) { error in
        elementExists = error == nil
    }
    return elementExists
}
In this case the test always fails with a "Stall on main thread." error.
So the question is: how do I check for the presence of an asynchronously appearing UI element that may or may not show up within a specified time, without the test failing on timeout?
Thank you.
You're overcomplicating the test. If you're communicating with a server, there is unnecessary variability in your tests -- my suggestion is to use stubbed network data for each case.
You can get a brief introduction to stubbing network data here:
http://masilotti.com/ui-testing-stub-network-data/
You will eliminate the randomness caused by the server's response time as well as the randomness of which string appears. Create a test case for each scenario (i.e., how the app responds when you tap on each individual string).
I'm trying to create an agent that updates the UI based on user interaction. If the user clicks a button, the GUI should be refreshed. The preparation of the model takes a long time, so it is desirable that if the user clicks another button, the preparation is cancelled and a new one is started.
What I have so far:
open System.Threading

type private RefreshMsg =
    | RefreshMsg of AsyncReplyChannel<CancellationTokenSource>

type RefresherAgent() =
    let mutable cancel: CancellationTokenSource = null

    let doSomeModelComputation i =
        async {
            printfn "start %A" i
            do! Async.Sleep(1000)
            printfn "middle %A" i
            do! Async.Sleep(1000)
            printfn "end %A" i
        }

    let mbox =
        MailboxProcessor.Start(fun mbx ->
            let rec loop () = async {
                let! msg = mbx.Receive()
                match msg with
                | RefreshMsg(chnl) ->
                    let cancelSrc = new CancellationTokenSource()
                    chnl.Reply(cancelSrc)
                    let update = async {
                        do! doSomeModelComputation 1
                        do! doSomeModelComputation 2
                        //do! updateUI // not important now
                    }
                    let cupdate = Async.TryCancelled(update, (fun c -> printfn "refresh cancelled"))
                    Async.RunSynchronously(cupdate, -1, cancelSrc.Token)
                    printfn "loop()"
                    return! loop()
            }
            loop ())

    do mbox.Error.Add(fun exn -> printfn "Error in refresher: %A" exn)

    member x.Refresh() =
        if cancel <> null then
            // I don't handle whether the previous computation finished;
            // I just cancel it; might be improved
            cancel.Cancel()
            cancel.Dispose()
        cancel <- mbox.PostAndReply(fun reply -> RefreshMsg(reply))
        printfn "x.Refresh end"

// sample
let agent = RefresherAgent()
agent.Refresh()
System.Threading.Thread.Sleep(1500)
agent.Refresh()
I return a CancellationTokenSource for each request and store it in a mutable variable (x.Refresh() is thread-safe; it is called on the UI thread).
If Refresh() is called for the first time, the cancellation source is returned. If Refresh() is called for the second time, I call Cancel which should abort the async task that I run through Async.RunSynchronously.
However, an exception is raised. The output from my sample is
x.Refresh end
start 1
middle 1
end 1
refresh cancelled
Error in refresher: System.OperationCanceledException: The operation was canceled.
at Microsoft.FSharp.Control.AsyncBuilderImpl.commit[a](Result`1 res)
Now as I think about this, it might make sense, because the thread on which the agent runs was interrupted, right? But how do I achieve the desired behaviour?
I need to cancel the async workflow inside the agent so that the agent can continue consuming new messages. Why do I use the mailbox processor? Because it guarantees that only one thread is trying to create the UI model, so I save resources.
Let's suppose I create the UI model by downloading data from several web services; that's why I use async calls. When the user changes a combo box and selects another option, I want to stop querying the web services with the old value (= cancel the async calls) and create a new model based on web service calls with the new value.
Any suggestion that solves my problem in a different way than my solution is also welcome.
I have difficulties understanding what you want to achieve, but maybe that does not matter: the error just says that the workflow you are executing with RunSynchronously was cancelled (RunSynchronously will throw the exception), so you can wrap this call in a try ... with block and just ignore the OperationCanceledException.
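For example, a minimal sketch of that first option (cupdate and cancelSrc are the values from your code above):
try
    Async.RunSynchronously(cupdate, -1, cancelSrc.Token)
with
| :? System.OperationCanceledException -> printfn "refresh cancelled"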
A better option might be to refactor your cupdate and do the try ... with inside it - you can even fold the TryCancelled handler into it if you catch the OperationCanceledException directly ;)
let update =
    async {
        try
            do! doSomeModelComputation 1
            do! doSomeModelComputation 2
        with
        | :? System.OperationCanceledException ->
            printfn "refresh cancelled"
    }
Async.RunSynchronously(update, -1, cancelSrc.Token)
But I still don't get why you want to run this synchronously.
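If blocking the agent isn't actually required, a hypothetical alternative (my sketch, not something proposed above) is to start the update on the token and let the loop continue right away; the next Refresh() cancels it through the same token:
// inside the RefreshMsg handler, instead of Async.RunSynchronously:
Async.Start(cupdate, cancelSrc.Token)
Note that this gives up the guarantee that only one update runs at a time, since a cancelled update may still be winding down when the next one starts.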