How does Meteor's optimistic UI handle server rejections and errors on dependent operations?
If I do:
var item1Id = Items.insert({list: groceriesId, name: "Watercress"}); // op1
var item = Items.findOne({_id: item1Id});
Items.update(item, {$set: {name: "Peppers"}}); // op2
Items.insert({list: groceriesId, name: "Cheese"}); // op3
If op1 fails on the server-side but succeeds on the client-side, what will happen to op2 and op3?
Will they both be rolled back?
If op1 fails on the server, then op2 will get rolled back as well (because it's an update to a document that no longer exists). op3 will succeed, assuming it doesn't also fail on its own.
If you wanted to prevent op3 from happening unless you were sure that op1 had succeeded, you could run it in a callback passed to op1.
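For illustration, a minimal sketch of that callback approach (assuming the same Items collection; on the client, the callback fires only once the server has responded):

Items.insert({list: groceriesId, name: "Watercress"}, function (error, item1Id) { // op1
  if (error) {
    return; // op1 was rejected by the server, so skip the dependent writes
  }
  var item = Items.findOne({_id: item1Id});
  Items.update(item, {$set: {name: "Peppers"}});     // op2
  Items.insert({list: groceriesId, name: "Cheese"}); // op3
});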
I'm writing an asynchronous method which is supposed to repeatedly query for a port until it finds one, or time out after 5 minutes:
member this.GetPort(): Async<Port> = this._GetPort(DateTime.Now)
member this._GetPort(startTime: DateTime): Async<Port> = async {
    match this._TryGetOpenPort() with
    | Some(port) -> port
    | None -> do
        if (DateTime.Now - startTime).TotalMinutes >= 5 then
            raise (Exception "Unable to open a port")
        else
            do! Async.Sleep(100)
        let! result = this._GetPort(startTime)
        result}
member this._TryGetOpenPort(): Option<Port> =
    // etc.
However, I'm getting some strange type inconsistencies in _GetPort; the compiler says I'm returning a type of Async<unit> instead of Async<Port>.
It's a little unintuitive, but the way to make your code work would be this:
member private this.GetPort(startTime: DateTime) =
    async {
        match this.TryGetOpenPort() with
        | Some port ->
            return port
        | None ->
            if (DateTime.Now - startTime).TotalMinutes >= 5 then
                raise (Exception "Unable to open a port")
            do! Async.Sleep(100)
            let! result = this.GetPort(startTime)
            return result
    }

member private this.TryGetOpenPort() = failwith "yeet" // TODO
I took the liberty of cleaning up a few things and making the member private, since that seems to be what you're largely going for here with a more detailed internal way to get the port.
The reason your code wasn't compiling is that you were inconsistent in what you returned from the computation:
In the case of Some(port) you were missing a return keyword, which is required to lift the value back into an Async<Port>.
Your if expression where you raise an exception had an else branch, but you weren't returning from both branches. In this case, since you clearly don't wish to return anything and just want to raise an exception, you can omit the else and make the flow imperative, just as in non-async code.
The other thing you may wish to consider down the road is whether throwing an exception is what you want, or whether returning a Result<T,Err> or an option is the right call. Exceptions aren't inherently bad, but a lot of F# programming tends to avoid them when there's a good way to ascribe meaning to a type that wraps your return value.
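For illustration only (this sketch is not from the original answer, and the string error payload is an arbitrary choice), the Result-shaped variant might look like this:

member private this.GetPort(startTime: DateTime) : Async<Result<Port, string>> =
    async {
        match this.TryGetOpenPort() with
        | Some port -> return Ok port
        | None ->
            if (DateTime.Now - startTime).TotalMinutes >= 5 then
                // Signal the timeout as a value instead of raising an exception
                return Error "Unable to open a port"
            else
                do! Async.Sleep(100)
                return! this.GetPort(startTime)
    }

The caller then pattern-matches on Ok/Error instead of catching an exception.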
I'm calling next multiple times on a Stream returned by this function: https://github.com/sdroege/rtsp-server/blob/96dbaf00a7111c775348430a64d6a60f16d66445/src/listener/message_socket.rs#L43:
pub(crate) fn async_read<R: AsyncRead + Unpin + Send>(
    read: R,
    max_size: usize,
) -> impl Stream<Item = Result<Message<Body>, ReadError>> + Send {
    //...
    futures::stream::unfold(Some(state), move |mut state| async move {
        //...
    })
}
sometimes it works, but sometimes I get:
thread 'main' panicked at 'Unfold must not be polled after it returned `Poll::Ready(None)`', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.13/src/stream/unfold.rs:115:21
The error comes from https://docs.rs/futures-util/0.3.2/src/futures_util/stream/unfold.rs.html#112
but I couldn't understand why. Shouldn't I be free to call next on a Stream in a loop?
This panic exists for a reason, as it likely means that you are doing something wrong: when a stream returns Poll::Ready(None), the stream is completed (in a similar fashion to Iterator, as has been noted in the comments).
However, if you are sure that this is really what you want, you can call stream.fuse() to silence the error; a fused stream simply returns Poll::Ready(None) forever.
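A minimal, self-contained sketch of the fuse() approach, using a toy stream rather than the async_read stream from the question:

use futures::executor::block_on;
use futures::stream::{self, StreamExt};

fn main() {
    block_on(async {
        // fuse() wraps the stream so that polling it after completion is safe
        let mut s = stream::iter(vec![1, 2, 3]).fuse();
        while let Some(v) = s.next().await {
            println!("{}", v);
        }
        // Instead of panicking, a fused stream keeps yielding None
        assert!(s.next().await.is_none());
    });
}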
I perform the following steps in sequence:

1. putItem({ a: "123", b: "456" })
2. execute a function which takes < 50ms
3. updateItem (update the item from step 1 with b = "789")

I expected the item to be updated, yet I found in the log that updateItem triggered an INSERT item event, which overwrote the item being written by putItem, so the end result is {b: "789"}.
I wonder how I can ensure the item is fully written in step 1 before executing step 3?
We are trying to iterate over a Map, but without any success. We reduced our issue to this minimal example:
def map = [
'monday': 'mon',
'tuesday': 'tue',
]
If we try to iterate with:
map.each{ k, v -> println "${k}:${v}" }
Only the first entry is output: monday:mon
The alternatives we know of are not even able to enter the loop:
for (e in map)
{
    println "key = ${e.key}, value = ${e.value}"
}
or
for (Map.Entry<String, String> e : map.entrySet())
{
    println "key = ${e.key}, value = ${e.value}"
}
Both are failing, only showing the exception java.io.NotSerializableException: java.util.LinkedHashMap$Entry (which could be related to an exception occurring while raising the 'real' exception, preventing us from knowing what actually happened).
We are using the latest stable Jenkins (2.19.1) with all plugins up-to-date as of today (2016/10/20).
Is there a solution to iterate over the elements of a Map within a Jenkins pipeline Groovy script?
It's been some time since I played with this, but the best way to iterate through maps (and other containers) was with "classical" for loops or the "for in" form. See Bug: Mishandling of binary methods accepting Closure
As for your specific problem: most (all?) pipeline DSL commands add a sequence point; by that I mean it's possible to save the state of the pipeline and resume it at a later time. Think of waiting for user input, for example: you want to keep that state even through a restart.
The result is that every live instance has to be serialized, but the standard Map iterator is unfortunately not serializable. Original Thread
The best solution I can come up with is defining a function to convert a Map into a list of serializable MapEntry objects. The function doesn't use any pipeline steps, so nothing within it has to be serializable.
@NonCPS
def mapToList(depmap) {
    def dlist = []
    for (def entry2 in depmap) {
        dlist.add(new java.util.AbstractMap.SimpleImmutableEntry(entry2.key, entry2.value))
    }
    dlist
}
This obviously has to be called for each map you want to iterate over, but the upside is that the body of the loop stays the same.
for (def e in mapToList(map))
{
    println "key = ${e.key}, value = ${e.value}"
}
You will have to approve the SimpleImmutableEntry constructor the first time, or quite possibly you could work around that by placing the mapToList function in the workflow library.
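For instance, a hypothetical shared-library layout could expose it as a global step; the file name and structure below are assumptions, not part of the original answer:

// vars/mapToList.groovy in a Jenkins shared library (hypothetical)
@NonCPS
def call(Map depmap) {
    def dlist = []
    for (def entry in depmap) {
        dlist.add(new java.util.AbstractMap.SimpleImmutableEntry(entry.key, entry.value))
    }
    return dlist
}

A pipeline that loads the library can then call mapToList(map) like any other step.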
Or, much simpler:
for (def key in map.keySet()) {
    println "key = ${key}, value = ${map[key]}"
}
I'm struggling to achieve the goal of having multiple async tasks that share a general timeout. The trick is that I need to process whatever was received within the timeout.
For example, the code below gets the values of both tasks when the timeout is more than two seconds. But once the timeout is decreased (or, alternatively, the tasks take longer), only a TimeoutException is thrown and none of the task results are received.
def timeout = 3 // value in seconds
def t1 = task {
    Thread.sleep(1000)
    println 't1 done'
    't1'
}
def t2 = task {
    Thread.sleep(2000)
    println 't2 done'
    't2'
}
def results = whenAllBound( [t1, t2] ) { List l ->
    println 'all done ' + l
    l.join(', ')
}.get( timeout, SECONDS )
println "results $results"
Using join() instead of get() does not throw the TimeoutException, but then again it does not return the final results either; the code after the timeout simply continues executing once it expires.
Either I don't understand the Dataflow structures fully/enough/correctly, I'm trying to use them incorrectly, or both.
Basically, what I need is a sync block that triggers several async jobs with a common timeout, returning whatever responses were available when the timeout happened. A timeout is more of an exceptional case, but it does happen occasionally for any of the tasks and should not affect overall processing.
Perhaps this way could work for you:
whenAllBound( [t1, t2] ) { List l ->
    println 'all done ' + l
    l.join(', ')
}.join( timeout, java.util.concurrent.TimeUnit.SECONDS )
def results = [t1, t2].collect { it.bound ? it.get() : null }
println "results $results"