How to fetch output immediately in python-rq? - flask-socketio

I want to render the output of an rq job as soon as it completes.
Currently, I am using Job.fetch() to fetch the job, but I have to poll it repeatedly at a fixed interval.
I am using flask-socketio to render the results.
Is there a way to fetch the result of a task as soon as it finishes?

The most common approach, it seems, is exactly that: calling Job.fetch() repeatedly at a fixed interval and pushing the result to the client with flask-socketio.
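A minimal sketch of that polling approach, assuming an existing flask-socketio instance named socketio, a Redis connection, and the id of the enqueued job; the event names are illustrative:

from redis import Redis
from rq.job import Job

redis_conn = Redis()

def poll_job(job_id, interval=1.0):
    # Poll the job until it finishes, then push the result over Socket.IO.
    job = Job.fetch(job_id, connection=redis_conn)
    while True:
        job.refresh()  # re-read status and result from Redis
        status = job.get_status()
        if status == "finished":
            socketio.emit("job_done", {"id": job_id, "result": job.result})
            return
        if status == "failed":
            socketio.emit("job_failed", {"id": job_id})
            return
        socketio.sleep(interval)  # cooperative sleep between polls

# Start it right after enqueueing, e.g.:
# socketio.start_background_task(poll_job, job.get_id())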

Related

Ability to pass some extra data to action serverless function?

I'm struggling to figure out the best approach to the following use case:
I am working on a game where a user can perform a mutation, equipItem, which takes a single input, itemId. I set up a custom action in Hasura to resolve it through a serverless function. My current issue is that within that serverless function I need to recalculate the user's stats according to the item they equipped, and to do so I need to query my Hasura API to get the full character data.
This adds extra execution time, so I wanted to ask if there is a better method. Ideally, the data would be queried from the Hasura server prior to executing the action and sent along with it, so that all my serverless function has to do is modify it and return it.
This should happen at insertion time, so events won't work here.
Being able to run a query before calling an action is an open issue and something we're thinking about adding to the roadmap.
https://github.com/hasura/graphql-engine/issues/4268
Currently, your idea of making a query in your action handler to load the character data sounds like the right thing to do. You shouldn't have to worry about too much latency here; the Hasura response to your serverless function should be fairly fast (especially if you're running in the same region).
(Note: I'm from the Hasura team)
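A hedged sketch of such an action handler in Node (Express), assuming Node 18+ for the built-in fetch; the endpoint, admin secret, and the character query/fields are illustrative assumptions, not Hasura-mandated names:

const express = require('express');
const app = express();
app.use(express.json());

const HASURA_URL = process.env.HASURA_GRAPHQL_ENDPOINT;
const HASURA_SECRET = process.env.HASURA_GRAPHQL_ADMIN_SECRET;

const CHARACTER_QUERY = `
  query ($userId: uuid!) {
    character(where: {user_id: {_eq: $userId}}) { id strength agility }
  }`;

app.post('/equipItem', async (req, res) => {
  // Hasura posts the action payload as { action, input, session_variables }.
  const userId = req.body.session_variables['x-hasura-user-id'];
  const itemId = req.body.input.itemId;

  // The extra round trip discussed above: load the character back from Hasura.
  const resp = await fetch(HASURA_URL, {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-hasura-admin-secret': HASURA_SECRET,
    },
    body: JSON.stringify({ query: CHARACTER_QUERY, variables: { userId } }),
  });
  const character = (await resp.json()).data.character[0];

  // ... recalculate stats from itemId here, then return the action output ...
  res.json({ characterId: character.id, equippedItem: itemId });
});

app.listen(3000);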

How to run multiple Firestore Functions Sequentially?

We have 20 functions that must run every day. Each of these functions does something different based on inputs from the previous function.
We tried calling all the functions from one function, but it hits the timeout error, as these 20 functions take more than 9 minutes to execute.
How can we trigger these functions sequentially, or avoid the timeout error for the one function that executes each of them?
There is no configuration or other easy way to get this done. You will have to set up a fair amount of code and infrastructure.
The most straightforward solution involves chaining the calls together using Pub/Sub-triggered functions. You can send a message to a Pub/Sub topic that triggers the next function to run. The payload of the message can be the parameters the next function should use to determine how it should operate. If the payload is too big, or more complex sources of data are required to make that decision, you can use a database to store intermediate data that the next function can query and use.
Since we don't have any more specific details about how your functions actually work, nothing more specific can be said. If you run into problems with a specific detail of this scheme, please post again describing what specifically you're trying to do and what's not working the way you expect.
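A minimal sketch of that chaining idea using firebase-functions and @google-cloud/pubsub; the topic name, schedule, and payload fields are illustrative assumptions:

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// Step 1: kicked off first (here by a schedule); does its work, then
// publishes the parameters the next function needs.
exports.step1 = functions.pubsub.schedule('every day 22:00').onRun(async () => {
  const params = { cursor: '2020-01-01', processed: 100 }; // illustrative payload
  await pubsub.topic('step-2').publishMessage({ json: params });
});

// Step 2: triggered by step 1's message; picks up where it left off.
exports.step2 = functions.pubsub.topic('step-2').onPublish(async (message) => {
  const params = message.json; // the payload published above
  // ... do the next stage of work, then publish to 'step-3', and so on ...
});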
There is a variant of Doug's solution. At the end of the function, instead of publishing a message to Pub/Sub directly, simply write a specific log line (for example " end").
Then go to Stackdriver Logging, search for this specific log trace (turn on advanced filters), and configure a sink of this log entry into a Pub/Sub topic. That way, every time the log is detected, a Pub/Sub message is published with the log content.
Finally, plug your next function onto this Pub/Sub topic.
If you need to pass values from one function to another, you can simply add them to the log trace at the end of the function and parse them at the beginning of the next one.
Chaining functions is not an easy thing to do. New things are coming; maybe Google Cloud Next will announce new products to help with this task.
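For reference, such a sink can be created from the CLI; the sink name, topic, and filter text below are illustrative assumptions:

gcloud logging sinks create chain-sink \
  pubsub.googleapis.com/projects/my-project/topics/step-2 \
  --log-filter='resource.type="cloud_function" AND textPayload:" end"'

Note that the sink's writer identity must also be granted permission to publish to the topic.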
If you simply want the functions to execute in order, and you don't need to pass the result of one directly to the next, you could wrap them in a scheduled function (docs) that spaces them out with enough time for each to run.
Sketch below with 3-minute spacing:
const functions = require('firebase-functions');

exports.myScheduler = functions.pubsub
  .schedule('every 3 minutes from 22:00 to 23:00')
  .onRun((context) => {
    // Derive the current "HH:MM"; make sure it matches the time zone
    // configured for the schedule.
    const time = new Date().toISOString().substring(11, 16);
    if (time === '22:00') func1of20();
    else if (time === '22:03') func2of20();
    // etc. through func20of20()
  });
If you do need to pass the results of each function to the next, func1 could store its result in a DB entry, then func2 starts by reading that result, and ends by overwriting with its own so func3 can read when fired 3 minutes later, etc. — though perhaps in this case, the other solutions are more tailored to your needs.
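A hedged sketch of that hand-off with the Admin SDK, assuming a hypothetical Firestore document pipeline/state and a hypothetical doStep2 work function:

const admin = require('firebase-admin');
admin.initializeApp();
const state = admin.firestore().doc('pipeline/state');

async function func2of20() {
  const snap = await state.get();                   // read func1's result
  const result = await doStep2(snap.data().step1);  // doStep2 is hypothetical
  await state.set({ step2: result });               // leave our result for func3
}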

Is there a way to define a trigger that runs reliably at a datetime specified as a field in the updated/created object?

The question
Is it possible (and if so, how) to make it so that, when an object's field x (containing a timestamp) is created or updated, a specific trigger is called at the time specified in x (probably calling a serverless function)?
My Specific context
In my specific instance the object can be seen as a task. I want to make it so that when the task is created, a serverless function tries to complete it, and if it doesn't succeed, it updates the record with the partial results and specifies in field x when the next attempt should happen.
The attempts should not be spaced at a fixed interval. For example, a task may require 10 successive attempts roughly 30 seconds apart, but then it may need to wait 8 hours.
There currently is no way to (re)trigger a Cloud Function on a node after a certain timespan.
The closest you can get is to schedule a cron job that regularly runs over the list of tasks. For more on that, see this sample in the function-samples repo, this blog post by Abe, and this video where Jen explains them.
I admit I've never liked this cron-job approach, since you have to query the list to find the items to process. A while ago, I wrote a more efficient solution that runs a priority queue in a node process. My code was a bit messy, so I'm not quite ready to share it, but it wasn't a lot (<100 lines). So if the cron-trigger approach doesn't work for you, I recommend investigating that direction.
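A hedged sketch of the cron-scan approach with a scheduled Cloud Function, assuming the tasks live in a Firestore collection tasks with the next-attempt timestamp in field x; the collection, fields, and attemptTask worker are illustrative assumptions:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.processDueTasks = functions.pubsub.schedule('every 1 minutes').onRun(async () => {
  // Find every task whose next-attempt time has passed.
  const due = await admin.firestore()
    .collection('tasks')
    .where('x', '<=', admin.firestore.Timestamp.now())
    .get();

  for (const snap of due.docs) {
    const { done, partial, next } = await attemptTask(snap.data()); // hypothetical worker
    if (done) {
      await snap.ref.delete();
    } else {
      // Store partial results and reschedule by moving x forward.
      await snap.ref.update({ partial, x: next });
    }
  }
});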

Is context.executeQueryAsync a transactional operation?

Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failure callback is invoked). Can I be sure that not a single one of the items I wanted to update was actually updated? Is there a chance that some of them were updated and some were not? In other words, is this operation transactional? Thank you; I cannot find a single post about this.
I am asking about the CSOM model, not server-side solutions.
SharePoint handles its internal updates transactionally, so updating a document actually results in multiple calls to the DB, and if one of them fails, the other changes are rolled back so nothing is left half-updated on a failure.
However, that is not made available to us as external developers. If you create an update that changes 9 items within your executeQueryAsync call and it fails on #7, then the first 6 will not be rolled back. You are going to have to write code to handle the failures, and if rolling back is important, you will have to roll back the changes manually in your code.
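A hedged JSOM sketch of that manual handling: committing items one at a time so a failure tells you exactly which updates succeeded; the list title and field name are illustrative assumptions:

// Assumes this runs on a SharePoint page with sp.js loaded.
var ctx = SP.ClientContext.get_current();
var list = ctx.get_web().get_lists().getByTitle('Tasks');

function updateItem(id, value, onDone) {
  var item = list.getItemById(id);
  item.set_item('Status', value);
  item.update();
  ctx.executeQueryAsync(
    function () { onDone(null); },                           // this item committed
    function (sender, args) { onDone(args.get_message()); }  // this one failed; earlier items stay committed
  );
}

// Chain the updates so you know where to stop and what to roll back.
updateItem(1, 'Done', function (err) {
  if (err) { /* roll back or retry item 1 */ return; }
  updateItem(2, 'Done', function (err2) { /* ... and so on ... */ });
});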

How to call a .xqy page from another .xqy page in MarkLogic?

Can I call a .xqy page from another .xqy page in MarkLogic?
There are several ways to execute another .xqy, but the most obvious is probably xdmp:invoke. It calls the .xqy, waits for its results, and returns them on the spot in your code. You can also call a single function using the combination of xdmp:function and xdmp:apply. You could also mess around with xdmp:eval, but that is usually a last resort.
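For instance, a hedged xdmp:invoke example; the module path and external variable are illustrative:

(: main.xqy: invoke other.xqy under the same app server root,
   passing an external variable :)
xdmp:invoke(
  "/other.xqy",
  (xs:QName("input"), "some value")
)

(: other.xqy declares it as:
   declare variable $input as xs:string external; :)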
Another strategy could be to use xdmp:http-get, but then the execution runs in a different transaction, so it would always commit. You would also need to know the URL of the other .xqy, which requires some knowledge of whether, and how, URLs are rewritten in the app server (not by default).
Running another .xqy without waiting for its results is also possible with xdmp:spawn. That is particularly useful for dispatching heavy loads, for instance content processing; dispatching batches of 100 to 1000 docs is quite common. Keep an eye on the task queue size, though.
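For example, a hedged xdmp:spawn call; the module path and variable are illustrative:

(: queue the work on the task server and return immediately :)
xdmp:spawn(
  "/process-doc.xqy",
  (xs:QName("uri"), "/docs/doc1.xml")
)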
HTH!
