I am building a GraphQL server using F# with .NET Core. To implement batching and address the N+1 selects problem, I am building a data loader. Facebook's DataLoader uses the Node.js event loop tick to collect and dispatch batched requests.
However, no such mechanism is available in .NET Core. I know I can expose a run/dispatch method on the data loader instance which can be called manually, but that is very hard to do from within the resolvers, which execute independently. So I need some automatic dispatching mechanism to run the batched requests.
Any suggestions on how to achieve it?
This is a great question, which I recently hit myself.
There are a few "data-loader" type libraries for F# and .NET, however if you are also using FSharp.Data.GraphQL then there are fewer solutions that integrate well.
Note that the "Haxl" approach will not work (easily) with FSharp.Data.GraphQL. This is because the Haxl types must be integrated into GraphQL query models, but FSharp.Data.GraphQL only understands sync and async.
The most suitable implementation that I could find is in FSharp.Core.Extensions. This is a fairly new library, but it's high quality and Apache 2.0 licensed.
I'm sure there are many ways it can be integrated into FSharp.Data.GraphQL; however, my preferred approach was to put the data-loaders into the root value of the schema. This allows all GraphQL resolvers down the tree to access them.
I think the best way to explain it is to show an example.
Here we have a domain of "People" who can have zero or more "followers", who are also "People". Each person has a globally unique ID. There is significant overlap in the followers between people, so a naive solution may re-fetch the same data repeatedly. Our database layer can fetch many person records in one query, so we would like to leverage that where possible.
You can paste this code into an .fsx file and run it. The dependencies are fetched by Paket.
paket.dependencies
generate_load_scripts: true
source https://www.nuget.org/api/v2
source https://api.nuget.org/v3/index.json
storage: none
framework: net5.0, netstandard2.1
nuget FSharp.Core 5.0.0
nuget FSharp.Data.GraphQL.Server 1.0.7
github Horusiath/fsharp.core.extensions:0ff5753bb6f232e0ef3c446ddcc72345b74174ca
DataLoader.fsx
#load ".paket/load/net50/FSharp.Data.GraphQL.Server.fsx"
#load "paket-files/Horusiath/fsharp.core.extensions/src/FSharp.Core.Extensions/Prolog.fs"
#load "paket-files/Horusiath/fsharp.core.extensions/src/FSharp.Core.Extensions/AsyncExtensions.fs"
type Person =
{
ID : string
Name : string
}
// Mocks a real database access layer
module DB =
// Used to avoid interleaving of printfn calls during async execution
let private logger = MailboxProcessor.Start (fun inbox -> async {
while true do
let! message = inbox.Receive()
printfn "DB: %s" message
})
let private log x =
logger.Post(x)
// Our data-set
let private people =
[
{ ID = "alice"; Name = "Alice" }, [ "bob"; "charlie"; "david"; "fred" ]
{ ID = "bob"; Name = "Bob" }, [ "charlie"; "david"; "emily" ]
{ ID = "charlie"; Name = "Charlie" }, [ "david" ]
{ ID = "david"; Name = "David" }, [ "emily"; "fred" ]
{ ID = "emily"; Name = "Emily" }, [ "fred" ]
{ ID = "fred"; Name = "Fred" }, []
]
|> Seq.map (fun (p, fs) -> p.ID, (p, fs))
|> Map.ofSeq
let fetchPerson id =
async {
log $"fetchPerson {id}"
match people |> Map.find id with
| (x, _) -> return x
}
let fetchPersonBatch ids =
async {
let idsString = String.concat "; " ids
log $"fetchPersonBatch [ {idsString} ]"
return
people
|> Map.filter (fun k _ -> Set.contains k ids)
|> Map.toSeq
|> Seq.map (snd >> fst)
|> Seq.toList
}
let fetchFollowers id =
async {
log $"fetchFollowers {id}"
match people |> Map.tryFind id with
| Some (_, followerIDs) -> return followerIDs
| _ -> return []
}
// GraphQL type definitions
open FSharp.Core
open FSharp.Data.GraphQL
open FSharp.Data.GraphQL.Types
#nowarn "40"
[<NoComparison>]
type Root =
{
FetchPerson : string -> Async<Person>
FetchFollowers : string -> Async<string list>
}
let rec personType =
Define.Object(
"Person",
fun () -> [
Define.Field("id", ID, fun ctx p -> p.ID)
Define.Field("name", String, fun ctx p -> p.Name)
Define.AsyncField("followers", ListOf personType, fun ctx p -> async {
let root = ctx.Context.RootValue :?> Root
let! followerIDs = root.FetchFollowers p.ID
let! followers =
followerIDs
|> List.map root.FetchPerson
|> Async.Parallel
return Seq.toList followers
})
])
let queryRoot = Define.Object("Query", [
Define.AsyncField(
"person",
personType,
"Fetches a person by ID",
[
Define.Input("id", ID)
],
fun ctx root -> async {
let id = ctx.Arg("id")
return! root.FetchPerson id
})
])
// Construct the schema once to cache it
let schema = Schema(queryRoot)
// Run an example query...
// Here we fetch the followers of the followers of the followers of `alice`
// This query offers many optimization opportunities to the data-loader
let query = """
query Example {
person(id: "alice") {
id
name
followers {
id
name
followers {
id
name
followers {
id
name
}
}
}
}
}
"""
let executor = Executor(schema)
async {
// Construct a data-loader for fetch person requests
let fetchPersonBatchFn (requests : Set<string>) =
async {
let! people =
requests
|> DB.fetchPersonBatch
let responses =
Seq.zip requests people
|> Map.ofSeq
return responses
}
let fetchPersonContext = DataLoader.context ()
let fetchPersonLoader = DataLoader.create fetchPersonContext fetchPersonBatchFn
// Construct a data-loader for fetch follower requests
let fetchFollowersBatchFn (requests : Set<string>) =
async {
let! responses =
requests
|> Seq.map (fun id ->
async {
let! followerIDs = DB.fetchFollowers id
return id, followerIDs
})
|> Async.Parallel
return Map.ofSeq responses
}
let fetchFollowersContext = DataLoader.context ()
let fetchFollowersLoader =
DataLoader.create fetchFollowersContext fetchFollowersBatchFn
let root =
{
FetchPerson = fun id -> fetchPersonLoader.GetAsync(id)
FetchFollowers = fun id -> fetchFollowersLoader.GetAsync(id)
}
// Uncomment this to see how sub-optimal the query is without the data-loader
// let root =
// {
// FetchPerson = DB.fetchPerson
// FetchFollowers = DB.fetchFollowers
// }
// See https://bartoszsypytkowski.com/data-loaders/
do! Async.SwitchToContext fetchPersonContext
do! Async.SwitchToContext fetchFollowersContext
// Execute the query
let! response = executor.AsyncExecute(query, root)
printfn "%A" response
}
|> Async.RunSynchronously
Related
I am getting data from the OpenWeatherMap API. Currently the data is retrieved synchronously, which is slow. However, the function has to be synchronous as it is part of a library, but it can call an async function. How might I still make concurrent requests to increase performance? A solution that does not use reqwest works, but reqwest is preferred.
fn get_combined_data(open_weather_map_api_url: String, open_weather_map_api_key: String,
coordinates: Vec<String>, metric: bool) -> Vec<HashMap<String, String>> {
let urls: Vec<String> = get_urls(open_weather_map_api_url, open_weather_map_api_key,
coordinates.get(0).expect("Improper coordinates").to_string() + "," +
coordinates.get(1).expect("Improper coordinates"), metric);
let mut data: Vec<HashMap<String, String>> = Vec::new();
for url in urls {
let request = reqwest::blocking::get(url).expect("Url Get failed").json().expect("json expected");
data.push(request);
}
return data;
}
If your program isn't already async, probably the easiest way is to use rayon.
use reqwest;
use std::collections::HashMap;
use rayon::prelude::*;
fn get_combined_data(open_weather_map_api_url: String, open_weather_map_api_key: String,
coordinates: Vec<String>, metric: bool) -> Vec<HashMap<String, String>> {
let urls: Vec<String> = get_urls(open_weather_map_api_url, open_weather_map_api_key,
coordinates.get(0).expect("Improper coordinates").to_string() + "," +
coordinates.get(1).expect("Improper coordinates"), metric);
let data: Vec<_> = urls
    .par_iter()
    .map(|url| reqwest::blocking::get(url.as_str()).expect("Url Get failed").json().expect("json expected"))
    .collect();
return data;
}
The easiest is probably to use tokio's new_current_thread runtime and block on the data retrieval.
use std::collections::HashMap;
use tokio::runtime;
pub fn collect_data() -> Vec<HashMap<String, String>> {
let rt = runtime::Builder::new_current_thread()
.build()
.expect("couldn't start runtime");
let urls = vec!["https://example.com/a", "https://example.com/b"];
rt.block_on(async move {
let mut data = vec![];
for url in urls {
data.push(async move {
reqwest::get(url)
.await
.expect("Url Get Failed")
.json()
.await
.expect("json expected")
});
}
futures::future::join_all(data).await
})
}
You need an asynchronous runtime in order to call asynchronous functions. The easiest way to get one is to use the #[tokio::main] attribute (which, despite the name, can be applied to any async function, not just main):
#[tokio::main]
async fn get_combined_data(
open_weather_map_api_url: String,
open_weather_map_api_key: String,
coordinates: Vec<String>,
metric: bool,
) -> Vec<HashMap<String, String>> {
let urls: Vec<String> = get_urls(
open_weather_map_api_url,
open_weather_map_api_key,
coordinates
.get(0)
.expect("Improper coordinates")
.to_string()
+ ","
+ coordinates.get(1).expect("Improper coordinates"),
metric,
);
futures::future::join_all(urls.into_iter().map(|url| {
async move {
reqwest::get(url)
.await
.expect("Url Get Failed")
.json()
.await
.expect("json expected")
}
})).await
}
I have a Rust application using warp. It implements a RESTful CRUD API. I need each route handler (i.e., the function that ends up being ultimately called by the warp filters) to have access to, and (in most cases) mutate shared application state.
The only way I can get this to compile is by cloning an Arc<Mutex<State>> for each route:
/* internal_state is loaded from a dump file earlier on and is of type `State` */
let state: Arc<Mutex<State>> = Arc::new(Mutex::new(internal_state));
let index_book_state: Arc<Mutex<State>> = state.clone();
let create_book_state: Arc<Mutex<State>> = state.clone();
let read_book_state: Arc<Mutex<State>> = state.clone();
let create_order_state: Arc<Mutex<State>> = state.clone();
let read_order_state: Arc<Mutex<State>> = state.clone();
let update_order_state: Arc<Mutex<State>> = state.clone();
let destroy_order_state: Arc<Mutex<State>> = state.clone();
/* define CRUD routes for order books */
let book_prefix = warp::path!("book");
let index_book_route = book_prefix
.and(warp::get())
.and(warp::any().map(move || index_book_state.clone()))
.and_then(handler::index_book_handler);
let create_book_route = book_prefix
.and(warp::post())
.and(warp::body::json())
.and(warp::any().map(move || create_book_state.clone()))
.and_then(handler::create_book_handler);
let read_book_route = warp::path!("book" / String)
.and(warp::get())
.and(warp::any().map(move || read_book_state.clone()))
.and_then(handler::read_book_handler);
/* define CRUD routes for orders */
let create_order_route = warp::path!("book" / String)
.and(warp::post())
.and(warp::body::json())
.and(warp::any().map(move || create_order_state.clone()))
.and_then(handler::create_order_handler);
let read_order_route = warp::path!("book" / String / "order" / String)
.and(warp::get())
.and(warp::any().map(move || read_order_state.clone()))
.and_then(handler::read_order_handler);
let update_order_route = warp::path!("book" / String / "order" / String)
.and(warp::put())
.and(warp::body::json())
.and(warp::any().map(move || update_order_state.clone()))
.and_then(handler::update_order_handler);
let destroy_order_route = warp::path!("book" / String / "order" / String)
.and(warp::delete())
.and(warp::any().map(move || destroy_order_state.clone()))
.and_then(handler::destroy_order_handler);
/* aggregate all of our order book routes */
let book_routes =
index_book_route.or(create_book_route).or(read_book_route);
/* aggregate all of our order routes */
let order_routes = create_order_route
.or(read_order_route)
.or(update_order_route)
.or(destroy_order_route);
/* aggregate all of our routes */
let routes = book_routes.or(order_routes);
I doubt that this is actually correct behaviour (despite compiling and running).
This seems extremely ugly for what is a relatively simple requirement.
Most importantly, inside my route handlers I will need to make calls to async functions, thus requiring the handlers themselves to be marked as async, etc. When I mark the handlers as async, the compiler complains due to futures being unable to be sent across threads.
How can I achieve shared application state while having route handlers themselves be async?
A signature of a route handler (they're all the same):
/* matches routes like POST `http://example.com/[market]/` */
pub async fn create_order_handler(market: String, request: CreateOrderRequest, state: Arc<Mutex<State>>, rpc_endpoint: String) -> Result<impl Reply, Rejection>
You share state via shared ownership (such as an Arc) paired with thread-safe interior mutability (such as Mutex, RwLock, or an atomic):
use std::sync::{Arc, Mutex};
use warp::Filter;
#[tokio::main]
async fn main() {
let state = Arc::new(Mutex::new(0));
let market = warp::path!("market" / String).map({
let state = state.clone();
move |market| {
*state.lock().unwrap() += 1;
format!("Market: {}", market)
}
});
let plaza = warp::path!("plaza" / String).map({
let state = state.clone();
move |plaza| {
let state = *state.lock().unwrap();
format!("Plaza: {} ({})", plaza, state)
}
});
let routes = market.or(plaza);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
% curl 127.0.0.1:3030/market/one
Market: one
% curl 127.0.0.1:3030/plaza/one
Plaza: one (1)
To perform asynchronous work, use Filter::and_then:
use std::{
convert::Infallible,
sync::{Arc, Mutex},
};
use warp::Filter;
#[tokio::main]
async fn main() {
let state = Arc::new(Mutex::new(0));
let market = warp::path!("market" / String).and_then({
let state = state.clone();
move |market| {
let state = state.clone();
async move {
*state.lock().unwrap() += 1;
Ok::<_, Infallible>(format!("Market: {}", market))
}
}
});
let plaza = warp::path!("plaza" / String).and_then({
let state = state.clone();
move |plaza| {
let state = state.clone();
async move {
let state = *state.lock().unwrap();
Ok::<_, Infallible>(format!("Plaza: {} ({})", plaza, state))
}
}
});
let routes = market.or(plaza);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
These can even be separate functions:
use std::{
convert::Infallible,
sync::{Arc, Mutex},
};
use warp::Filter;
#[tokio::main]
async fn main() {
let state = Arc::new(Mutex::new(0));
let market = warp::path!("market" / String).and_then({
let state = state.clone();
move |m| market(m, state.clone())
});
let plaza = warp::path!("plaza" / String).and_then({
let state = state.clone();
move |p| plaza(p, state.clone())
});
let routes = market.or(plaza);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
type State = Arc<Mutex<i32>>;
async fn market(market: String, state: State) -> Result<String, Infallible> {
*state.lock().unwrap() += 1;
Ok::<_, Infallible>(format!("Market: {}", market))
}
async fn plaza(plaza: String, state: State) -> Result<String, Infallible> {
let state = *state.lock().unwrap();
Ok::<_, Infallible>(format!("Plaza: {} ({})", plaza, state))
}
There's a second set of clones here because there are two distinct things owning data:
The handler itself (the closure)
The future returned by the closure (the async code)
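The two layers can be seen in a minimal, warp-free sketch (names are illustrative, and a spawned thread stands in for the returned future):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Returns the final counter value after invoking the handler twice.
fn run_twice() -> i32 {
    let state = Arc::new(Mutex::new(0));

    // Layer 1: the handler (the closure) owns one clone of the state...
    let handler = {
        let state = state.clone();
        move || {
            // Layer 2: ...and the work it spawns (a future in warp, a thread
            // here) takes another clone, because the handler must keep its
            // own copy to serve later calls.
            let state = state.clone();
            thread::spawn(move || *state.lock().unwrap() += 1)
        }
    };

    handler().join().unwrap();
    handler().join().unwrap();
    let n = *state.lock().unwrap();
    n
}

fn main() {
    println!("state after two calls: {}", run_twice());
}
```

Dropping either clone fails to compile: without the outer one the handler consumes the original Arc, and without the inner one the spawned work would move the handler's copy out on the first call.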
See also:
Is there another option to share an Arc in multiple closures besides cloning it before each closure?
Dependency Injection in Rust Warp
Is there a way to do validation as part of a filter in Warp?
[dependencies]
warp = "0.3.0"
tokio = { version = "1.2.0", features = ["full"] }
I am running a simple chat app with F#. In the chat, when one user types "exit", I want both clients to finish the chat. Currently I am running in the console, so read and write are blocking, but I am using a class to wrap the console so there are no async problems.
(In the following code, sendUI and recieveUI are async functions that send and receive messages over the wire.)
type IConnection =
abstract Send : string -> Async<bool>
abstract Recieve : unit -> Async<string>
abstract Connected : bool
abstract Close : unit -> unit
type IOutput =
abstract ClearLine : unit -> unit
abstract ReadLine : ?erase:bool -> string
abstract WriteLine : string -> unit
let sendUI (outputer:#IOutput) (tcpConn: #IConnection) () =
async {
if not tcpConn.Connected then return false
else
let message = outputer.ReadLine(true)
try
match message with
| "exit" -> do! tcpConn.Send "exit" |> Async.Ignore
return false
| _ -> if message.Trim() <> ""
then do! message.Trim() |> tcpConn.Send |> Async.Ignore
outputer.WriteLine("me: " + message)
return true
with
| e -> outputer.WriteLine("log: " + e.Message)
return false
}
let recieveUI (outputer:#IOutput) (tcpConn: #IConnection) () =
async {
if not tcpConn.Connected then return false
else
try
let! response = tcpConn.Recieve()
match response with
| "exit" -> return false
| _ -> outputer.WriteLine("other: " + response)
return true
with
| e -> outputer.WriteLine("error: " + e.Message)
return false
}
let rec loop (cancel:CancellationTokenSource) f =
async {
match! f() with
| false -> cancel.Cancel(true)
| true -> do! loop cancel f
}
let messaging recieve send (outputer: #IOutput) (tcpConn:#IConnection) =
printfn "write: exit to exit"
use cancelSrc = new CancellationTokenSource()
let task =
[ recieve outputer tcpConn
send outputer tcpConn ]
|> List.map (loop cancelSrc)
|> Async.Parallel
|> Async.Ignore
try
Async.RunSynchronously (computation=task, cancellationToken=cancelSrc.Token)
with
| :? OperationCanceledException ->
tcpConn.Close()
let exampleReceive =
{ new IConnection with
member this.Connected = true
member this.Recieve() = async { do! Async.Sleep 1000
return "exit" }
member this.Send(arg1) = async { return true }
member this.Close() = ()
}
let exampleOutputer =
{ new IOutput with
member this.ClearLine() = raise (System.NotImplementedException())
member this.ReadLine(erase) = Console.ReadLine()
member this.WriteLine(arg) = Console.WriteLine(arg) }
[<EntryPoint>]
let main args =
messaging recieveUI sendUI exampleOutputer exampleReceive
0
(I wrapped the console with an object so I won't get weird things on screen: outputer)
When I get "exit" over the wire, I return false, and so the loop calls cancel, which should also stop the message-sending async computation.
However, when I do this, sendUI gets stuck:
async {
//do stuff
let message = Console.ReadLine() //BLOCKS! doesn't cancel
//do stuff
}
One fix would be to somehow make Console.ReadLine() asynchronous; however, a simple async { return ... } does not work.
I also tried running it as a task and calling Async.AwaitTask, but this does not work either!
I read that one can use Async.FromContinuations but I couldn't figure out how to use it (and what I tried didn't solve it...)
Little help?
EDIT
The reason this doesn't simply work is because of the way cancellation of async computations works: they only check whether to cancel when execution reaches a let!/do!/return! etc., and so the solutions above do not work.
EDIT 2
Added runnable code sample
You can wrap the Console.ReadLine in its own async, then call that with Async.RunSynchronously and a CancellationToken. This will allow you to cancel that blocking operation, because it won't be on the same thread as the console itself.
open System
open System.Threading
type ITcpConnection =
abstract member Send: string -> unit
let readLineAsync cancellation =
async {
try
return Some <| Async.RunSynchronously(async { return Console.ReadLine() }, cancellationToken = cancellation)
with | _ ->
return None
}
let receiveUI cancellation (tcpConnection: ITcpConnection) =
let rec loop () =
async {
let! message = readLineAsync cancellation
match message with
| Some msg -> msg |> tcpConnection.Send
| None -> printfn "Chat Session Ended"
return! loop ()
}
loop () |> Async.Start
I am using the yogiben:admin package for Meteor and I would like the currently logged-in user to be able to see only their own objects from each collection.
How do I integrate subscriptions into the admin? Is it part of autoform?
So I am starting to figure it out.
I cloned the package using the technique described here: https://stackoverflow.com/a/30864393/55124
Now I am looking into the lib/server/publish.coffee file to edit the publication.
Meteor.publishComposite 'adminCollectionDoc', (collection, id) ->
check collection, String
check id, Match.OneOf(String, Mongo.ObjectID)
if Roles.userIsInRole this.userId, ['admin']
find: ->
adminCollectionObject(collection).find(id)
children: AdminConfig?.collections?[collection]?.children or []
else
@ready()
Meteor.publish 'adminUsers', ->
if Roles.userIsInRole @userId, ['admin']
Meteor.users.find()
else
@ready()
Meteor.publish 'adminUser', ->
Meteor.users.find @userId
Meteor.publish 'adminCollectionsCount', ->
handles = []
self = @
_.each AdminTables, (table, name) ->
id = new Mongo.ObjectID
count = 0
table = AdminTables[name]
ready = false
selector = if table.selector then table.selector(self.userId) else {}
handles.push table.collection.find().observeChanges
added: ->
count += 1
ready and self.changed 'adminCollectionsCount', id, {count: count}
removed: ->
count -= 1
ready and self.changed 'adminCollectionsCount', id, {count: count}
ready = true
self.added 'adminCollectionsCount', id, {collection: name, count: count}
self.onStop ->
_.each handles, (handle) -> handle.stop()
self.ready()
Meteor.publish null, ->
Meteor.roles.find({})
But it turns out, it is as simple as changing a line in lib/both/startup.js
return AdminTables[name] = new Tabular.Table({
name: name,
collection: adminCollectionObject(name),
pub: collection.children && adminTablePubName(name),
sub: collection.sub,
columns: columns,
extraFields: collection.extraFields,
dom: adminTablesDom,
selector: collection.selector || function( userId ) {
return { owner: userId };
}
});
There was an empty selector function, and I added return { owner: userId }.
I'm trying to use FSharpx's Async.AwaitObservable inside an async workflow which is started using Async.StartWithContinuations. For some reason, if the cancellation token used to start this workflow is cancelled while it is waiting for the observable (but not during other parts of the workflow), the cancellation continuation is never called. However, if I put it inside a use! __ = Async.OnCancel (interruption), then the interruption function does get called. Can someone please clarify why this happens, and what the best way is to do this so that one of the continuation functions always gets called?
open System
open System.Reactive.Linq
open FSharp.Control.Observable
open System.Threading
[<EntryPoint>]
let main _ =
let cancellationCapability = new CancellationTokenSource()
let tick = Observable.Interval(TimeSpan.FromSeconds 1.0)
let test = async {
let! __ = Async.AwaitObservable tick
printfn "Got a thing." }
Async.StartWithContinuations(test,
(fun () -> printfn "Finished"),
(fun exn -> printfn "Error!"),
(fun exn -> printfn "Canceled!"),
cancellationCapability.Token)
Thread.Sleep 100
printfn "Cancelling..."
cancellationCapability.Cancel()
Console.ReadLine() |> ignore
0 // return an integer exit code
It seems to me as well that it's a problem in how AwaitObservable is implemented. Good luck on fixing that.
That said, one workaround that you can use on your client side code is wrapping the AwaitObservable in a Task:
async {
let! ct = Async.CancellationToken
let! __ =
Async.StartAsTask(Async.AwaitObservable tick, cancellationToken = ct)
|> Async.AwaitTask
printfn "Got a thing."
}
Not ideal, but works.
It seems that the version of FSharpx on GitHub already contains a fix (not implemented by me); however, the current version on NuGet (1.8.41) has not been updated to include it. See the change here.
EDIT 1:
The code on GitHub also has some issues with Observables with replay semantics. I have fixed this for now like so but hopefully there is a cleaner solution. I will submit a PR after I think about whether there is a way to make it simpler.
/// Creates an asynchronous workflow that will be resumed when the
/// specified observables produces a value. The workflow will return
/// the value produced by the observable.
static member AwaitObservable(observable : IObservable<'T1>) =
let removeObj : IDisposable option ref = ref None
let removeLock = new obj()
let setRemover r =
lock removeLock (fun () -> removeObj := Some r)
let remove() =
lock removeLock (fun () ->
match !removeObj with
| Some d -> removeObj := None
d.Dispose()
| None -> ())
synchronize (fun f ->
let workflow =
Async.FromContinuations((fun (cont,econt,ccont) ->
let rec finish cont value =
remove()
f (fun () -> cont value)
setRemover <|
observable.Subscribe
({ new IObserver<_> with
member x.OnNext(v) = finish cont v
member x.OnError(e) = finish econt e
member x.OnCompleted() =
let msg = "Cancelling the workflow, because the Observable awaited using AwaitObservable has completed."
finish ccont (new System.OperationCanceledException(msg)) })
() ))
async {
let! cToken = Async.CancellationToken
let token : CancellationToken = cToken
#if NET40
use registration = token.Register(fun () -> remove())
#else
use registration = token.Register((fun _ -> remove()), null)
#endif
return! workflow
})
static member AwaitObservable(observable : IObservable<'T1>) =
let synchronize f =
let ctx = System.Threading.SynchronizationContext.Current
f (fun g ->
let nctx = System.Threading.SynchronizationContext.Current
if ctx <> null && ctx <> nctx then ctx.Post((fun _ -> g()), null)
else g() )
let continued = ref false
let continuedLock = new obj()
let removeObj : IDisposable option ref = ref None
let removeLock = new obj()
let setRemover r =
lock removeLock (fun () -> removeObj := Some r)
let remove() =
lock removeLock (fun () ->
match !removeObj with
| Some d ->
removeObj := None
d.Dispose()
| None -> ())
synchronize (fun f ->
let workflow =
Async.FromContinuations((fun (cont,econt,ccont) ->
let rec finish cont value =
remove()
f (fun () -> lock continuedLock (fun () ->
if not !continued then
cont value
continued := true))
let observer =
observable.Subscribe
({ new IObserver<_> with
member __.OnNext(v) = finish cont v
member __.OnError(e) = finish econt e
member __.OnCompleted() =
let msg = "Cancelling the workflow, because the Observable awaited using AwaitObservable has completed."
finish ccont (new System.OperationCanceledException(msg)) })
lock continuedLock (fun () -> if not !continued then setRemover observer else observer.Dispose())
() ))
async {
let! cToken = Async.CancellationToken
let token : CancellationToken = cToken
use __ = token.Register((fun _ -> remove()), null)
return! workflow
})
EDIT 2:
Neater fix for the hot observable issue...
let AwaitObservable(observable : IObservable<'T>) = async {
let! token = Async.CancellationToken // capture the current cancellation token
return! Async.FromContinuations(fun (cont, econt, ccont) ->
// start a new mailbox processor which will await the result
Agent.Start((fun (mailbox : Agent<Choice<'T, exn, OperationCanceledException>>) ->
async {
// register a callback with the cancellation token which posts a cancellation message
#if NET40
use __ = token.Register((fun _ ->
mailbox.Post (Choice3Of3 (new OperationCanceledException("The operation was cancelled.")))))
#else
use __ = token.Register((fun _ ->
mailbox.Post (Choice3Of3 (new OperationCanceledException("The operation was cancelled.")))), null)
#endif
// subscribe to the observable: if an error occurs post an error message and post the result otherwise
use __ =
observable.FirstAsync()
.Catch(fun exn -> mailbox.Post(Choice2Of3 exn) ; Observable.Empty())
.Subscribe(fun result -> mailbox.Post(Choice1Of3 result))
// wait for the first of these messages and call the appropriate continuation function
let! message = mailbox.Receive()
match message with
| Choice1Of3 reply -> cont reply
| Choice2Of3 exn -> econt exn
| Choice3Of3 exn -> ccont exn })) |> ignore) }