I am trying to yield an array of saga effects sequentially.
The idea is that yield all([call(foo), call(bar)]) will run call(foo) and call(bar) in parallel (or at least in a pseudo-parallel fashion).
However, I want my sagas to run sequentially, meaning that I want to wait for foo to finish before launching bar (this way I can cancel the process).
This array of calls is generated dynamically, so I can't hard-code a series of yields. What is the correct syntax in this case?
The redux-saga documentation has an example of sequencing sagas.
If you have an array of calls, simply yield these in your saga. For example:
// Some array containing call objects
let calls = [...];

// Call each one in the order it appears in the array
for (let c of calls) {
  yield c;
}
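A slightly fuller sketch (assuming foo and bar are existing worker sagas you already have): because each yield blocks the saga until the previous call resolves, cancelling the enclosing saga also stops any calls that have not started yet.

import { call } from 'redux-saga/effects';

function* runSequentially(effects) {
  const results = [];
  for (const effect of effects) {
    // blocks here until the current call finishes before starting the next
    results.push(yield effect);
  }
  return results;
}

// Usage with a dynamically built array:
// yield* runSequentially([call(foo), call(bar)]);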
I want to do an LLVM compiler for a very old language, PL/M. This has some peculiar features, not least of which is having nested functions with the ability to jump out of an enclosing function. In pseudocode:
toplevel() {
    nested() {
        if (something)
            goto label;
    }
    nested();
label:
    print("finished!");
}
The constraints here are:
you can only jump into the top-level function, luckily
the stack does get unwound (the language does not support destructors, so this is easy)
you do not have to have executed the statement at label before jumping (so the naive setjmp/longjmp method doesn't work).
code at label can be executed normally, i.e. it's not like catch
LLVM has a number of non-local jump mechanisms, such as the exception handling system, but I've never used that. Can this be implemented using LLVM exceptions, or are they not suitable for this? Is there an easier way?
If you want the stack to get unwound, you'll likely want the nested function to be a separate function, at least a separate LLVM IR function. (The only real exception is if your language has no construct like C's alloca() and you don't allow calling a nested function by address, in which case you could inline it.)
That part of the problem you mentioned, jumping out of an enclosing function, is best handled by giving the callee some way to communicate "how it exited" to the caller, with the caller having a "switch()" on that value. You could stick it in the return value (if the function already returns a value, make it a struct of both values), you could add a pointer parameter that the callee writes to, you could use a thread-local global variable and fill that in before calling longjmp, or you could use exceptions.
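As a concrete sketch of that approach, in C and with illustrative names (assuming the nested function has been lowered to an ordinary top-level function):

#include <stdio.h>

enum exit_kind { EXIT_NORMAL, EXIT_GOTO_LABEL };

struct nested_result {
    enum exit_kind kind;
    int value;                    /* the normal return value, if any */
};

static struct nested_result nested(int something) {
    struct nested_result r = { EXIT_NORMAL, 0 };
    if (something)
        r.kind = EXIT_GOTO_LABEL; /* stands in for "goto label;" */
    return r;
}

void toplevel(int something) {
    struct nested_result r = nested(something);
    switch (r.kind) {
    case EXIT_GOTO_LABEL:
        goto label;               /* now a local goto, visible in the CFG */
    case EXIT_NORMAL:
        break;
    }
label:
    puts("finished!");
}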
Exceptions are complex (I can't describe offhand how to make them work, but the docs are here: https://llvm.org/docs/ExceptionHandling.html ), slow when the exception path is taken, and really intended for exceptional situations, not for normal control flow. Setjmp/longjmp does the same thing as exceptions, except it is simpler to use and has no performance trade-off when executed, but unfortunately there are miscompiles in LLVM that you will need to be the one to fix if you start using them in earnest (see the postscript at the end of this answer).
Those two options cover the ways you can do it without changing the function signature, which may be necessary if your language allows the address of a nested function to be taken and called later.
If you do need to take the address of nested, then LLVM supports trampolines. See https://llvm.org/docs/LangRef.html#trampoline-intrinsics . Trampolines solve the problem of accessing the local variables of the calling function from the callee, even when the function is called by address.
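A rough IR sketch based on the LangRef (names are illustrative; the trampoline size and alignment are target-dependent, and the trampoline memory must be writable and executable):

declare void @llvm.init.trampoline(ptr, ptr, ptr)
declare ptr @llvm.adjust.trampoline(ptr)

; the nested function receives the parent frame via the 'nest' parameter
define internal void @nested(ptr nest %frame) {
  ; ... access toplevel's locals through %frame ...
  ret void
}

define void @toplevel() {
  %frame = alloca { i32 }            ; locals shared with @nested
  %tramp = alloca [32 x i8], align 4 ; illustrative size
  call void @llvm.init.trampoline(ptr %tramp, ptr @nested, ptr %frame)
  %fp = call ptr @llvm.adjust.trampoline(ptr %tramp)
  call void %fp()                    ; ordinary indirect call, no 'nest' arg
  ret void
}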
PS. LLVM miscompiles setjmp/longjmp today. The current model is that a call to setjmp may return twice, and only functions with the returns_twice attribute may return twice. Note that this doesn't affect the whole call stack; only the direct caller of a function that returns twice has to deal with the twice-returning call. Just because function F calls setjmp does not mean that F itself can return twice. So far, so good.
The problem is that in a function with a setjmp, all function calls may themselves call longjmp. I'd say "unless proven otherwise", as with all things in optimizers, but there is no doesnotlongjmp attribute in LLVM, nor any code within LLVM that attempts to answer the question of whether a function could call longjmp. Adding that would be a good optimization, but it's a separate issue from the miscompile.
If you have code like this pseudo-code:
%entry block:
    allocate val
    val <- 0
    setjmpret <- call setjmp
    br i1 setjmpret, %first setjmp return block, %second setjmp return block

%first setjmp return block:
    val <- 1
    call foo()
    goto %after

%second setjmp return block:
    call print(val)
    goto %after

%after:
    return
The control flow graph shows that there is no path from val <- 0 through val <- 1 to print(val). The only path containing print(val) has val <- 0 before it, so constant propagation may turn print(val) into print(0). The problem here is a missing control-flow edge from foo() back to the %second setjmp return block. In a function that contains a setjmp, every call that may call longjmp must have a CFG edge to the second setjmp return block. In LLVM that control flow edge is missing, and LLVM miscompiles code because of it.
This problem also manifests in the backend. The first time I heard of this problem it was in the context of the backend losing track of the placement of variables on the stack, and this issue was the underlying root cause.
For the most part setjmp/longjmp seems to work because LLVM usually isn't able to analyze what calling foo() might do and so can't perform the optimization. For instance, if val was not a fresh allocation but a pointer, then who's to say that foo() doesn't have access to the same pointer and performs "val <- 1" through it? If LLVM can't prove that impossible, the transform to print(0) is precluded. Secondly, setjmp/longjmp is just not used often in real code.
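For concreteness, the same pattern written in C (a sketch only; foo and buf are illustrative, and note that standard C requires val to be volatile to survive longjmp, which is exactly the workaround for the missing edge described above):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf buf;

static void foo(void) {
    longjmp(buf, 1);          /* the "second return" of setjmp */
}

void demo(void) {
    int val = 0;              /* non-volatile: precisely the risky case */
    if (setjmp(buf) == 0) {
        val = 1;              /* %first setjmp return block */
        foo();
    } else {
        printf("%d\n", val);  /* may wrongly be folded to print(0) */
    }
}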
I know that in Python 3.7 we have a new API, asyncio.get_running_loop(), which is easy to use and means we no longer need to pass the event loop explicitly when calling a coroutine.
I'm wondering if there's any approach we can use to get the same effect in Python 3.6?
# which allows us to code conveniently with this API:
import asyncio

async def test():
    print("hello world!")

async def main():
    loop = asyncio.get_running_loop()
    loop.create_task(test())

asyncio.run(main())
In Python 3.6 you can use asyncio.get_event_loop() for equivalent effect.
According to the documentation, it is equivalent to calling get_event_loop_policy().get_event_loop(), which is in turn documented to return "the currently running event loop" when called from a coroutine.
In other words, when invoked from a coroutine (or from a function invoked by a coroutine), there is no difference between get_event_loop and get_running_loop, both will return the running loop. It is only when no loop is running that get_event_loop() will keep returning the loop associated with the current thread, while get_running_loop() will raise an exception. As long as you are careful to call get_event_loop() while a loop is actually running, it will be equivalent to get_running_loop().
Note that get_event_loop returning the running loop when called from a coroutine is new in Python 3.6 and 3.5.3. Prior to those versions, get_event_loop would always return the event loop associated with the current thread, which could be a different loop from the one actually running. This made get_event_loop() fundamentally unreliable and is the reason old asyncio code passed the loop argument around everywhere.
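For completeness, here is the question's example adapted to run on Python 3.6. Note that asyncio.run() is itself new in 3.7, so on 3.6 the loop has to be driven with run_until_complete:

import asyncio

async def test():
    print("hello world!")

async def main():
    # inside a coroutine, get_event_loop() returns the running loop
    loop = asyncio.get_event_loop()
    await loop.create_task(test())

# asyncio.run() does not exist in 3.6; drive the loop manually
loop = asyncio.get_event_loop()
loop.run_until_complete(main())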
As soon as I press Enter after writing an asynchronous function into a cell, the async function is correctly called, and Excel raises the xleventCalculationEnded event when the calculation is finished.
However, if I click another cell just after pressing Enter, the xleventCalculationCanceled event is raised, and then the async function is called a second time! Is this behavior normal? Should I return a result via Excel12(xlAsyncReturn, ...) for the first async call, for the second async call, or for both?
In other words, does the xleventCalculationCanceled event imply that I'm not forced to return a result to Excel (using the appropriate asyncHandle)?
I'm using async functions to delegate intensive computation to another thread so as not to block Excel during the computation. However, if the async function is automatically called twice (as happens when the user clicks another cell without waiting for the first call to finish), then the intensive computation is performed twice for the same input (because the first call, cancelled by Excel, is still alive in the delegate thread...). How do you deal with this problem?
Two calls to the same function, with the same input: is it a bug?
Many thanks
What you describe is the normal behaviour. Excel cancels and then restarts the async calculations when there is user interaction (and can do so multiple times).
The documentation suggests that:
xleventCalculationEnded will fire directly after xleventCalculationCanceled, and
You can release any resources allocated during the calculation when xleventCalculationEnded fires. I understand that to include any asyncHandle you might have, and thus that you need not return any result based on the handle.
If your long-running function supports cancellation while in flight, you can cancel the work when the calculation is cancelled. Otherwise, you can do some internal bookkeeping of which function calls are in flight, and prevent doing the work twice yourself that way.
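One possible shape for that bookkeeping, as a sketch only (illustrative names, not actual XLL SDK wiring): keep one worker per distinct input and remember only the most recent asyncHandle, so a restarted call reuses the computation already in flight.

#include <map>
#include <mutex>
#include <string>

struct PendingCall {
    void* asyncHandle;   // latest handle Excel gave us for this input
};

static std::mutex g_mu;
static std::map<std::string, PendingCall> g_inFlight;

// Called from the UDF entry point before launching the worker thread.
// Returns true if a computation for this input is already running, in
// which case we only swap in the newer handle instead of starting over.
bool registerCall(const std::string& input, void* asyncHandle) {
    std::lock_guard<std::mutex> lock(g_mu);
    auto it = g_inFlight.find(input);
    if (it != g_inFlight.end()) {
        it->second.asyncHandle = asyncHandle;
        return true;
    }
    g_inFlight[input] = PendingCall{asyncHandle};
    return false;
}

// Called by the worker thread when the result is ready; the returned
// handle is the one to pass to Excel12(xlAsyncReturn, ...).
void* takeHandle(const std::string& input) {
    std::lock_guard<std::mutex> lock(g_mu);
    void* h = g_inFlight[input].asyncHandle;
    g_inFlight.erase(input);
    return h;
}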
The ractive.set method returns a promise. When performing a simple set operation (single value or map) and then immediately referencing the new value via ractive.get, is it recommended to use the promise? Or is that completely unnecessary?
I've been avoiding the promise and found that I don't need it, but maybe I've just been lucky so far. Here's an example of what I mean:
ractive.set("foo", "bar");
console.log(ractive.get("foo")); // always outputs the correct value "bar"
I'm worried that the set operation is asynchronous and this will become evident on slower machines or if I start using the more advanced features of Ractive.
According to the Ractive docs:
[ractive.set] Returns a Promise that will be called after the set operation and any transitions are complete.
Based on that, I wonder if the promise is really meant for post-transition work.
Based on that, I wonder if the promise is really meant for post-transition work.
Exactly. The value update (and the resulting DOM changes per the template) happen synchronously; the promise is for responding asynchronously once any transitions end.
This is also why the set operation accepts a hash map of inputs, so that multiple sets are batched in one go:
ractive.set({
  foo: 'foo',
  bar: 'bar'
}).then(() => {
  // this happens asynchronously ***after*** code execution has
  // continued below, on the next event cycle or after transitions complete
});

// data and DOM have been updated as the code continues synchronously here:
console.log(ractive.get());
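The distinction only becomes observable when a transition is attached. A hypothetical sketch, assuming the template fades the element tied to visible in and out:

// data change is synchronous; the promise tracks the fade transition
ractive.set('visible', false).then(() => {
  // runs only once the fade-out transition has finished
  console.log('transition complete');
});
console.log(ractive.get('visible')); // false, already updated here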
The problem
One data source generating data in format {key, value}
Multiple receivers each waiting for different key
Example
Getting data runs in a loop. Sometimes I will want to get the next value labelled with a key by using
Value = MyClass:GetNextValue(Key)
I want my code to stop there until the value is ready (making it some sort of future(?) value). I've tried using simple coroutines, but they only work when waiting for any data.
So the question I want to ask is something like: how to implement async values in Lua using coroutines or a similar concept (without threads)?
Side notes
The main processing function will, apart from returning values to waiting consumers, process some of the incoming data (say, data labeled with a special key) itself.
The full usage context should look something like:
-- in loop
ReceiveData()
ProcessSpecialData()
--
-- Called outside the loop:
V = RequestDataWithGivenKey(Key)
How to implement async values
You start by not implementing async values. You implement async functions: you don't get the value back until it has been retrieved.
First, your code must be in a Lua coroutine. I'll assume you understand the care and feeding of coroutines. I'll focus on how to implement RequestDataWithGivenKey:
function RequestDataWithGivenKey(key)
  local request = FunctionThatStartsAsyncGetting(key)
  -- yield until a resume finds the request completed
  while not request:IsComplete() do
    coroutine.yield()
  end
  -- Request is complete. Return the value.
  return request:GetReturnedValue()
end
FunctionThatStartsAsyncGetting returns a request object back to the function. The request stores all of the data needed to process the specific request; it represents asking for the value. This should be a C function that starts the actual async getting.
The request will be either a userdata or an encapsulated Lua table that stores enough information to communicate with the C-code that's doing the async fetching. IsComplete uses the internal request data to see if that request has completed. GetReturnedValue can only be called when IsComplete returns true; it puts the value on the Lua stack, so that this function can return it.
Your external code simply needs to handle the async stuff internally. Between resumes of these Lua coroutines, you'll need to pump whatever async mechanism is doing the fetching, if there are outstanding requests.
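For completeness, here is a simpler pure-Lua sketch of the same idea, with the dispatching done entirely in Lua and no C request objects (all names are illustrative):

-- consumers park themselves per key; ReceiveData wakes the matching one
local waiting = {}   -- key -> queue of suspended consumer coroutines

function RequestDataWithGivenKey(key)
  local co = coroutine.running()       -- the consumer's own coroutine
  waiting[key] = waiting[key] or {}
  table.insert(waiting[key], co)
  -- park until ReceiveData resumes us with the value
  return coroutine.yield()
end

function ReceiveData(key, value)
  local queue = waiting[key]
  if queue and #queue > 0 then
    local co = table.remove(queue, 1)
    coroutine.resume(co, value)        -- the consumer's yield() returns value
  else
    -- nobody waiting: e.g. hand off to ProcessSpecialData from the question
  end
end

-- Usage: a consumer must itself run inside a coroutine:
local consumer = coroutine.create(function()
  local v = RequestDataWithGivenKey("temperature")
  print("got", v)
end)
coroutine.resume(consumer)             -- runs until the yield inside the request
ReceiveData("temperature", 42)         -- resumes the consumer; prints "got 42"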