I have successfully implemented a delete player method for the Elm tutorial.
However, I cannot get the model to update without manually sending a ForceFetch msg (via clicking a button) that fetches the players from the server again. This is my code:
My delete button:
deleteBtn : Player -> Html.Html Msg
deleteBtn player =
let
message =
Msgs.Delete player
in
a
[ class "btn regular", onClick message]
[ i [ class "fa fa-pencil mr1" ] [], text "Delete" ]
My delete messages:
type Msg
    = Delete Player
    | OnDeletePlayer (Result Http.Error Player)
    | ForceFetch
    | OnFetchPlayers (WebData (List Player))
My update function:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Msgs.ForceFetch ->
            ( model, fetchPlayers )

        Msgs.OnFetchPlayers response ->
            ( { model | players = response }, Cmd.none )

        Msgs.Delete player ->
            ( model, deletePlayerCmd player )

        Msgs.OnDeletePlayer (Ok player) ->
            ( updateDeletedPlayerList model player, Cmd.none )

        Msgs.OnDeletePlayer (Err error) ->
            ( model, Cmd.none )
updateDeletedPlayerList : Model -> Player -> Model
updateDeletedPlayerList model deletedPlayer =
let
updatedPlayers = RemoteData.map (List.filter (\p -> deletedPlayer /= p)) model.players
in
{ model | players = updatedPlayers}
deletePlayerCmd : Player -> Cmd Msg
deletePlayerCmd player =
Http.send Msgs.OnDeletePlayer (deletePlayerRequest player)
deletePlayerRequest : Player -> Http.Request Player
deletePlayerRequest player =
Http.request
{ body = Http.emptyBody
, expect = Http.expectJson playerDecoder
, headers = []
, method = "DELETE"
, timeout = Nothing
, url = savePlayerUrl player.id
, withCredentials = False
}
fetchPlayers : Cmd Msg
fetchPlayers =
Http.get fetchPlayersUrl playersDecoder
|> RemoteData.sendRequest
|> Cmd.map Msgs.OnFetchPlayers
And for good measure my model:
type alias Model =
{ players : WebData (List Player)
, route : Route
, newPlayerName : String
, newPlayerId : String
, newPlayerLevel : Int
}
Edit:
I have tried incorporating fetchPlayers in the following manner with no success:
Msgs.OnDeletePlayer (Ok player) ->
(updateDeletedPlayerList model player, fetchPlayers)
You need to update the model yourself if you don't want to re-fetch it after deleting, so where you have:
Msgs.Delete player ->
(model, deletePlayerCmd player)
you need to replace 'model' with a call to a function that removes the player from it:
Msgs.Delete player ->
(removePlayer player model, deletePlayerCmd player)
...
removePlayer : Player -> Model -> Model
...
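The answer's point is that the Delete branch can update the model optimistically, using the same filtering the question already performs in updateDeletedPlayerList. A minimal, language-neutral sketch of that pattern in Python (the names remove_player and update_on_delete are illustrative, not part of the Elm code):

```python
def remove_player(player, players):
    """Drop the deleted player from the local list, like RemoteData.map + List.filter."""
    return [p for p in players if p != player]

def update_on_delete(player, model):
    """Mirror of: Msgs.Delete player -> (removePlayer player model, deletePlayerCmd player)."""
    new_model = dict(model, players=remove_player(player, model["players"]))
    command = ("DELETE", player)  # stands in for deletePlayerCmd player
    return new_model, command

model = {"players": ["alice", "bob", "carol"]}
new_model, cmd = update_on_delete("bob", model)
print(new_model["players"])  # ['alice', 'carol']
```

The model is updated immediately, before the server responds; the delete command still runs, and a failed server call can fall back to a re-fetch.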
I have an async block, and within this block I call an async method from an external C# web service client library. This method call returns a data transfer object, or throws a custom exception of type ApiException. Initially, my function looked like this:
type Msg =
| LoginSuccess
| LoginError
| ApiError
let authUserAsync (client: Client) model =
async {
do! Async.SwitchToThreadPool()
let loginParams = new LoginParamsDto(Username = "some username", Password = "some password")
try
// AuthenticateAsync may throw a custom ApiException.
let! loggedInAuthor = client.AuthenticateAsync loginParams |> Async.AwaitTask
// Do stuff with loggedInAuthor DTO...
return LoginSuccess
with
| :? ApiException as ex ->
let msg =
match ex.StatusCode with
| 404 -> LoginError
| _ -> ApiError
return msg
}
But I found that the ApiException wasn't being caught. Further investigation revealed that the ApiException was in fact the inner exception. So I changed my code to this:
type Msg =
| LoginSuccess
| LoginError
| ApiError
let authUserAsync (client: Client) model =
async {
do! Async.SwitchToThreadPool()
let loginParams = new LoginParamsDto(Username = "some username", Password = "some password")
try
// AuthenticateAsync may throw a custom ApiException.
let! loggedInAuthor = client.AuthenticateAsync loginParams |> Async.AwaitTask
// Do stuff with loggedInAuthor DTO...
return LoginSuccess
with
| baseExn ->
let msg =
match baseExn.InnerException with
| :? ApiException as e ->
match e.StatusCode with
| 404 -> LoginError
| _ -> ApiError
| otherExn -> raise otherExn
return msg
}
Which seems to work. But being new to F#, I'm wondering if there is a more elegant or idiomatic way to catch an inner exception in this kind of situation?
I sometimes use an active pattern to catch the main or the inner exception:
let rec (|NestedApiException|_|) (e: exn) =
match e with
| null -> None
| :? ApiException as e -> Some e
| e -> (|NestedApiException|_|) e.InnerException
and then use it like this:
async {
try
...
with
| NestedApiException e ->
...
| otherExn ->
raise otherExn
}
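The active pattern recursively unwraps InnerException until it finds an ApiException (or runs out of exceptions). A hedged Python analogue of the same idea walks an exception's __cause__/__context__ chain; ApiError and find_nested are illustrative names, not part of the F# code:

```python
class ApiError(Exception):
    """Illustrative stand-in for the C# ApiException."""
    def __init__(self, status_code):
        super().__init__(f"API error {status_code}")
        self.status_code = status_code

def find_nested(exc, exc_type):
    """Walk the exception chain; return the first match, or None if absent."""
    while exc is not None:
        if isinstance(exc, exc_type):
            return exc
        exc = exc.__cause__ or exc.__context__
    return None

try:
    try:
        raise ApiError(404)
    except ApiError as inner:
        raise RuntimeError("wrapper") from inner  # ApiError becomes the inner exception
except RuntimeError as outer:
    found = find_nested(outer, ApiError)
    print(found.status_code)  # 404
```

As with the active pattern, the caller matches on the unwrapped exception and can re-raise anything that doesn't contain the type it cares about.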
The solution based on active patterns from Tarmil is very nice and I would use that if I wanted to catch the exception in multiple places across a file or project. However, if I wanted to do this in just one place, then I would probably not want to define a separate active pattern for it.
There is a somewhat nicer way of writing what you have using the when clause in the try ... with expression:
let authUserAsync (client: Client) model =
async {
try
// (...)
with baseExn when (baseExn.InnerException :? ApiException) ->
let e = baseExn.InnerException :?> ApiException
match e.StatusCode with
| 404 -> return LoginError
| _ -> return ApiError }
This is somewhat repetitive, because you have to do a type check using :? and then a cast using :?>, but it is a nicer inline way of doing this.
As an alternative, if you use F# exceptions, you can use the full range of pattern matching available for product types. Error handling code reads a lot more succinctly.
exception TimeoutException
exception ApiException of message: string * inner: exn
try
action ()
with
| ApiException (_, TimeoutException) ->
printfn "Timed out"
| ApiException (_, (:? ApplicationException as e)) ->
printfn "%A" e
I'm trying to serialise JSON values from a serde_json::Map into a SQLite database. I would like to use multiple data types in the Map and have them converted into the appropriate SQLite data types.
The map is created in collect_values and is passed to the insert_record function. The new_db function creates a rusqlite::Connection context that is passed to insert_record.
However when I try to insert the values from the map, I get this error
error[E0277]: the trait bound `serde_json::value::Value: rusqlite::types::to_sql::ToSql` is not satisfied
--> src/main.rs:51:9
|
51 | / params![
52 | | &values.get("TEST-KEY-1").unwrap_or(&missing_value),
53 | | &values.get("TEST-KEY-2").unwrap_or(&missing_value)
54 | | ],
| |_________^ the trait `rusqlite::types::to_sql::ToSql` is not implemented for `serde_json::value::Value`
|
= note: required because of the requirements on the impl of `rusqlite::types::to_sql::ToSql` for `&serde_json::value::Value`
= note: required because of the requirements on the impl of `rusqlite::types::to_sql::ToSql` for `&&serde_json::value::Value`
= note: required for the cast to the object type `dyn rusqlite::types::to_sql::ToSql`
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
Do I need to implement the serialiser manually? I thought rusqlite's types module already has this done.
Cargo.toml
[package]
name = "sqlite-min-example"
version = "0.1.0"
authors = ["test"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rusqlite = "0.23.1"
serde_json = "1.0"
serde = {version = "1.0.113", default-features = false}
main.rs
use rusqlite::{params, Connection, Result};
use serde_json::json;
use serde_json::map::Map;
fn main() {
println!("Opening connection");
let conn = new_db();
match conn {
Ok(ctx) => {
let mp = collect_values();
let res = insert_record(&mp, &ctx);
}
Err(e) => {
eprintln!("{}", e);
}
}
}
pub fn new_db() -> Result<rusqlite::Connection> {
let conn = Connection::open("test.db")?;
//let conn = Connection::open_in_memory()?;
conn.execute(
"CREATE TABLE IF NOT EXISTS testdb (
field1 INTEGER,
field2 INTEGER
)",
params![],
)?;
// return the new connection context (object)
Ok(conn)
}
pub fn insert_record(
values: &serde_json::map::Map<std::string::String, serde_json::value::Value>,
conn: &rusqlite::Connection,
) -> Result<()> {
// Insert this if we can't find a value for a key for some reason...
let missing_value = json!("MISSINGVAL");
conn.execute(
"INSERT INTO testdb
(
field1,
field2
)
VALUES (
?1, ?2
)",
params![
&values.get("TEST-KEY-1").unwrap_or(&missing_value),
&values.get("TEST-KEY-2").unwrap_or(&missing_value)
],
)?;
// return any errors that occured
Ok(())
}
pub fn collect_values() -> serde_json::map::Map<std::string::String, serde_json::value::Value> {
// Take in the Modbus context and return a map of keys (field names) and their associated values
let mut map = Map::new();
map.insert("TEST-KEY-1".to_string(), json!(1234));
map.insert("TEST-KEY-2".to_string(), json!(5678));
return map;
}
Add the appropriate feature to rusqlite in your Cargo.toml; the serde_json feature enables rusqlite's ToSql/FromSql implementations for serde_json::Value:
rusqlite = { version = "0.23.1", features = ["serde_json"] }
See also:
How do you enable a Rust "crate feature"?
How to tell what "features" are available per crate?
I have some potentially very long running function, which may sometimes hang up. So, I thought that if I wrap it into an async workflow, then I should be able to cancel it. Here is an FSI example that does not work (but the same behavior happens with the compiled code):
open System.Threading
let mutable counter = 0
/// Emulates an external C# sync function that hung up.
/// Please, don't change it to some F# async stuff because
/// that won't fix that C# method.
let run() =
while true
do
printfn "counter = %A" counter
Thread.Sleep 1000
counter <- counter + 1
let onRunModel() =
let c = new CancellationTokenSource()
let m = async { do run() }
Async.Start (m, c.Token)
c
let tryCancel() =
printfn "Starting..."
let c = onRunModel()
printfn "Waiting..."
Thread.Sleep 5000
printfn "Cancelling..."
c.Cancel()
printfn "Waiting again..."
Thread.Sleep 5000
printfn "Completed."
#time
tryCancel()
#time
If you run it in FSI, you will see something like this:
Starting...
Waiting...
counter = 0
counter = 1
counter = 2
counter = 3
counter = 4
Cancelling...
Waiting again...
counter = 5
counter = 6
counter = 7
counter = 8
counter = 9
Completed.
Real: 00:00:10.004, CPU: 00:00:00.062, GC gen0: 0, gen1: 0, gen2: 0
counter = 10
counter = 11
counter = 12
counter = 13
counter = 14
counter = 15
counter = 16
which means that it does not stop at all after c.Cancel() is called.
What am I doing wrong, and how can I make this work?
Here is some additional information:
When the code hangs up, it does so in some external sync C# library, which I have no control over. So checking for a cancellation token in the code that I control is useless. That's why the function run() above was modeled that way.
I don't need any communication of completion and/or progress. That is already done via some messaging system and is out of scope of the question.
Basically, I just need to kill the background work as soon as I "decide" to do so.
You are handing control off to a code segment which, albeit wrapped in an async block, has no means of checking for cancellation. If you instead construct the loop directly inside an async block, or replace it with a recursive async loop, it works as expected:
let run0 () = // does not cancel
let counter = ref 0
while true do
printfn "(0) counter = %A" !counter
Thread.Sleep 1000
incr counter
let m = async { run0 () }
let run1 () = // cancels
let counter = ref 0
async{
while true do
printfn "(1) counter = %A" !counter
Thread.Sleep 1000
incr counter }
let run2 = // cancels too
let rec aux counter = async {
printfn "(2) counter = %A" counter
Thread.Sleep 1000
return! aux (counter + 1) }
aux 0
printfn "Starting..."
let cts = new CancellationTokenSource()
Async.Start(m, cts.Token)
Async.Start(run1(), cts.Token)
Async.Start(run2, cts.Token)
printfn "Waiting..."
Thread.Sleep 5000
printfn "Cancelling..."
cts.Cancel()
printfn "Waiting again..."
Thread.Sleep 5000
printfn "Completed."
A word of caution though: nested async calls in F# are automatically checked for cancellation, which is why do! Async.Sleep is preferable to Thread.Sleep. If you are going down the recursive route, be sure to keep it tail-recursive via return!. Further reading: Scott Wlaschin's blog posts on asynchronous programming, and Tomas Petricek's "Async in C# and F#: Asynchronous gotchas in C#".
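The key point is that F# async cancellation is cooperative: the token is only consulted at the async machinery's own points (binds, loop iterations), never inside an opaque blocking call. A hedged Python analogy, with a threading.Event standing in for the cancellation token:

```python
import threading
import time

def cooperative_run(cancel, counter):
    # The check between iterations is the cancellation point, analogous to
    # the implicit checks F# async inserts around each loop iteration.
    while not cancel.is_set():
        counter[0] += 1
        time.sleep(0.01)

cancel = threading.Event()
counter = [0]
t = threading.Thread(target=cooperative_run, args=(cancel, counter))
t.start()
time.sleep(0.1)     # let it run a few iterations
cancel.set()        # analogous to cts.Cancel()
t.join(timeout=2)
print(t.is_alive())  # False: the loop observed the flag and exited
```

If the body were a single blocking call that never looks at the flag (like the original run()), setting the flag would change nothing; that is exactly why wrapping opaque sync code in async does not make it cancellable.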
This piece of code was developed to solve a situation where I couldn't get some calls to terminate/timeout. They would just hang. Maybe you can get some ideas that will help you solve your problem.
The interesting part for you is only the first two functions. The rest just demonstrates how I'm using them.
module RobustTcp =
open System
open System.Text
open System.Net.Sockets
open Railway
let private asyncSleep (sleepTime: int) (error: 'a) = async {
do! Async.Sleep sleepTime
return Some error
}
let private asyncWithTimeout asy (timeout: int) (error: 'a) =
Async.Choice [ asy; asyncSleep timeout error ]
let private connectTcpClient (host: string) (port: int) (tcpClient: TcpClient) = async {
let asyncConnect = async {
do! tcpClient.ConnectAsync(host, port) |> Async.AwaitTask
return Some tcpClient.Connected }
match! asyncWithTimeout asyncConnect 1_000 false with
| Some isConnected -> return Ok isConnected
| None -> return Error "unexpected logic error in connectTcpClient"
}
let private writeTcpClient (outBytes: byte[]) (tcpClient: TcpClient) = async {
let asyncWrite = async {
let stream = tcpClient.GetStream()
do! stream.WriteAsync(outBytes, 0, outBytes.Length) |> Async.AwaitTask
do! stream.FlushAsync() |> Async.AwaitTask
return Some (Ok ()) }
match! asyncWithTimeout asyncWrite 10_000 (Error "timeout writing") with
| Some isWrite -> return isWrite
| None -> return Error "unexpected logic error in writeTcpClient"
}
let private readTcpClient (tcpClient: TcpClient) = async {
let asyncRead = async {
let inBytes: byte[] = Array.zeroCreate 1024
let stream = tcpClient.GetStream()
let! byteCount = stream.ReadAsync(inBytes, 0, inBytes.Length) |> Async.AwaitTask
let bytesToReturn = inBytes.[ 0 .. byteCount - 1 ]
return Some (Ok bytesToReturn) }
match! asyncWithTimeout asyncRead 2_000 (Error "timeout reading reply") with
| Some isRead ->
match isRead with
| Ok s -> return Ok s
| Error error -> return Error error
| None -> return Error "unexpected logic error in readTcpClient"
}
let sendReceiveBytes (host: string) (port: int) (bytesToSend: byte[]) = async {
try
use tcpClient = new TcpClient()
match! connectTcpClient host port tcpClient with
| Ok isConnected ->
match isConnected with
| true ->
match! writeTcpClient bytesToSend tcpClient with
| Ok () ->
let! gotData = readTcpClient tcpClient
match gotData with
| Ok result -> return Ok result
| Error error -> return Error error
| Error error -> return Error error
| false -> return Error "Not connected."
| Error error -> return Error error
with
| :? AggregateException as ex ->
(* TODO ? *)
return Error ex.Message
| ex ->
(*
printfn "Exception in getStatus : %s" ex.Message
*)
return Error ex.Message
}
let sendReceiveText (host: string) (port: int) (textToSend: string) (encoding: Encoding) =
encoding.GetBytes textToSend
|> sendReceiveBytes host port
|> Async.map (Result.map encoding.GetString)
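The asyncWithTimeout helper races the real operation against a sleep that yields a fallback error value, and Async.Choice picks whichever finishes first. A hedged asyncio sketch of the same shape, using asyncio.wait_for instead of an explicit race; the names are illustrative:

```python
import asyncio

async def with_timeout(coro, timeout_s, error):
    """Run coro; if it does not finish in time, return the fallback value instead."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout_s)
    except asyncio.TimeoutError:
        return error

async def main():
    # asyncio.sleep(delay, result=...) stands in for a connect/read/write call.
    fast = await with_timeout(asyncio.sleep(0.01, result="ok"), 1.0, "timeout")
    slow = await with_timeout(asyncio.sleep(1.0, result="ok"), 0.05, "timeout")
    print(fast, slow)  # ok timeout

asyncio.run(main())
```

Wrapping every connect/write/read in such a helper is what keeps a hanging call from blocking the whole exchange forever.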
I am attempting to create a list of strings into which elements are gradually inserted asynchronously, with the help of a mailbox processor. However, I am not getting the desired output.
I have pretty much followed the code from https://fsharpforfunandprofit.com/posts/concurrency-actor-model/
however it does not seem to work as intended in my case. The code I have is as follows:
type TransactionQueue ={
queue : string list
} with
static member UpdateState (msg : string) (tq : TransactionQueue) =
{tq with queue = (msg :: tq.queue)}
static member Agent = MailboxProcessor.Start(fun inbox ->
let rec msgLoop (t : TransactionQueue) =
async{
let! msg = inbox.Receive()
let newT = TransactionQueue.UpdateState msg t
printfn "%A" newT
return! msgLoop newT
}
msgLoop {queue = []}
)
static member Add i = TransactionQueue.Agent.Post i
[<EntryPoint>]
let main argv =
// test in isolation
printfn "welcome to test"
let rec loop () =
let str = Console.ReadLine()
TransactionQueue.Add str
loop ()
loop ()
0
The result I keep getting is a list containing only the latest input; the state is not kept. So if I enter "a", then "b", then "c", the queue will only contain "c" instead of "a", "b", and "c".
Any help or pointers would be most appreciated!
Just like C# properties, your Agent is really a property, and its getter runs again on every access. That's why you get a brand new agent (with a fresh, empty queue) every time the Agent property is accessed.
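In Python terms the pitfall looks like this (a hedged sketch; Worker stands in for the MailboxProcessor): a property whose getter constructs the worker builds a fresh instance on every access, so every posted message lands in a new, empty queue, while caching the instance once preserves the state.

```python
class Worker:
    """Illustrative stand-in for the MailboxProcessor and its state."""
    def __init__(self):
        self.queue = []

class BadHolder:
    @property
    def agent(self):
        return Worker()          # a new Worker on every access: state is lost

class GoodHolder:
    def __init__(self):
        self._agent = Worker()   # created once, then reused

    @property
    def agent(self):
        return self._agent

bad, good = BadHolder(), GoodHolder()
for msg in "abc":
    bad.agent.queue.append(msg)
    good.agent.queue.append(msg)
print(bad.agent.queue)   # [] - each access built a fresh Worker
print(good.agent.queue)  # ['a', 'b', 'c']
```

Both F# styles below are the equivalent of GoodHolder: the agent is created exactly once and then reused.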
In idiomatic F# there are two styles for implementing agents. If you don't need many agent instances, just write a module and encapsulate the agent-related stuff inside it. Otherwise, the OOP style should be used.
Code for style #1
module TransactionQueue =
type private Queue = Queue of string list
let private empty = Queue []
let private update item (Queue items) = Queue (item :: items)
let private agent = MailboxProcessor.Start <| fun inbox ->
let rec msgLoop queue = async {
let! msg = inbox.Receive ()
return! queue |> update msg |> msgLoop
}
msgLoop empty
let add item = agent.Post item
[<EntryPoint>]
let main argv =
// test in isolation
printfn "welcome to test"
let rec loop () =
let str = Console.ReadLine()
TransactionQueue.add str
loop ()
loop ()
Code for style #2
type Queue = Queue of string list with
static member Empty = Queue []
static member Update item (Queue items) =
Queue (item :: items)
type Agent () =
let agent = MailboxProcessor.Start <| fun inbox ->
let rec msgLoop queue = async {
let! msg = inbox.Receive ()
return! queue |> Queue.Update msg |> msgLoop
}
msgLoop Queue.Empty
member this.Add item = agent.Post item
[<EntryPoint>]
let main argv =
// test in isolation
printfn "welcome to test"
let agent = new Agent ()
let rec loop () =
let str = Console.ReadLine()
agent.Add str
loop ()
loop ()
Notice the use of Single-case union types for the Queue type.
I am using the library RxAndroidBle in order to scan devices and then connect to one specific device and read 4 GATT characteristics.
I can read one characteristic (Battery Level) with this code:
scanSubscription = rxBleClient.scanBleDevices(
new ScanSettings.Builder()
.build()
)
.observeOn(AndroidSchedulers.mainThread())
.doOnNext(
scanResult -> {
if(scanResult.getBleDevice().getName() != null){
if(scanResult.getBleDevice().getName().equals("NODE 1")){
Log.e("BLE SCAN", "SUCCESS");
Log.e("BLE SCAN", scanResult.getBleDevice().getName());
Log.e("BLE SCAN", scanResult.getBleDevice().getMacAddress());
scanSubscription.unsubscribe();
RxBleDevice device = scanResult.getBleDevice();
subscription = device.establishConnection(false) // <-- autoConnect flag
.flatMap(rxBleConnection -> rxBleConnection.readCharacteristic(UUID.fromString("00002a19-0000-1000-8000-00805f9b34fb")))
.subscribe(
characteristicValue -> {
Log.e("Characteristic", characteristicValue[0]+"");
},
throwable -> {
Log.e("Error", throwable.getMessage());
}
);
}
}
}
)
.subscribe();
I can read two by using :
.flatMap(rxBleConnection -> Observable.combineLatest( // use the same connection and combine latest emissions
rxBleConnection.readCharacteristic(aUUID),
rxBleConnection.readCharacteristic(bUUID),
Pair::new
))
But I don't understand how to do that with, for example, 4 characteristics.
Thank you
The above example is just fine; you would only need some data object that accepts more values than Pair. For instance, something like:
class ReadResult {
final byte[] aValue;
final byte[] bValue;
final byte[] cValue;
final byte[] dValue;
ReadResult(byte[] aValue, byte[] bValue, byte[] cValue, byte[] dValue) {
this.aValue = aValue;
this.bValue = bValue;
this.cValue = cValue;
this.dValue = dValue;
}
}
And then the example could look like this:
disposable = rxBleClient.scanBleDevices(
new ScanSettings.Builder().build(),
new ScanFilter.Builder().setDeviceName("NODE 1").build() // one can set filtering by name here
)
.take(1) // take only the first result and then the upstream will get unsubscribed (scan will end)
.flatMap(scanResult -> scanResult.getBleDevice().establishConnection(false)) // connect to the first scanned device that matches the filter
.flatMapSingle(rxBleConnection -> Single.zip( // once connected read all needed values
rxBleConnection.readCharacteristic(aUUID),
rxBleConnection.readCharacteristic(bUUID),
rxBleConnection.readCharacteristic(cUUID),
rxBleConnection.readCharacteristic(dUUID),
ReadResult::new // merge them into a single result
))
.take(1) // once the result of all reads is available unsubscribe from the upstream (connection will end)
.subscribe(
readResult -> Log.d("Characteristics", /* print the readResult */),
throwable -> Log.e("Error", throwable.getMessage())
);
Original/Legacy solution for RxAndroidBle based on RxJava1:
subscription = rxBleClient.scanBleDevices(
new ScanSettings.Builder().build(),
new ScanFilter.Builder().setDeviceName("NODE 1").build() // one can set filtering by name here
)
.take(1) // take only the first result and then the upstream will get unsubscribed (scan will end)
.flatMap(scanResult -> scanResult.getBleDevice().establishConnection(false)) // connect to the first scanned device that matches the filter
.flatMap(rxBleConnection -> Observable.combineLatest( // once connected read all needed values
rxBleConnection.readCharacteristic(aUUID),
rxBleConnection.readCharacteristic(bUUID),
rxBleConnection.readCharacteristic(cUUID),
rxBleConnection.readCharacteristic(dUUID),
ReadResult::new // merge them into a single result
))
.take(1) // once the result of all reads is available unsubscribe from the upstream (connection will end)
.subscribe(
readResult -> Log.d("Characteristics", /* print the readResult */),
throwable -> Log.e("Error", throwable.getMessage())
);
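Conceptually, Single.zip (and combineLatest in the RxJava1 variant) just waits until all four reads have produced a value and then combines them into one result. A hedged asyncio sketch of that combine-when-all-done shape (read_characteristic and the UUID names are illustrative; a real BLE connection serializes the reads under the hood):

```python
import asyncio

async def read_characteristic(uuid):
    await asyncio.sleep(0.01)          # stands in for the BLE round trip
    return f"value-of-{uuid}"

async def read_all(uuids):
    # Like Single.zip(...): issue every read, wait for all, combine in order.
    return await asyncio.gather(*(read_characteristic(u) for u in uuids))

values = asyncio.run(read_all(["aUUID", "bUUID", "cUUID", "dUUID"]))
print(values[0])  # value-of-aUUID
```

Adding a fifth characteristic is just one more element in the list, which is why the ReadResult class (rather than Pair) is the only change the RxAndroidBle code needs.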