Ability to pass some extra data to action serverless function? - hasura

I'm struggling to figure out the best approach to the following use case:
I am working on a game where a user can perform a mutation, equipItem. This mutation takes one input, itemId. I then set up a custom action in Hasura to resolve it through a serverless function. My current issue is that within that serverless function I need to perform calculations on the user's stats and update them according to the item they equipped; to do so, I need to query my Hasura API to get the full character data.
This adds extra execution time, hence I wanted to ask if there is a better method. Ideally something where I can query my data from the Hasura server prior to executing this action, so I can send it along, and all my serverless function has to do is modify it and return it.
This should happen at insertion time, so events won't work here.

Being able to run a query before calling an action is an open issue and something we're thinking about adding to the roadmap.
https://github.com/hasura/graphql-engine/issues/4268
Currently, your idea of making a query in your action handler to load the character data sounds like the right thing to do. You shouldn't have to worry about too much latency here; the Hasura response to your serverless function should be fairly fast (especially if you're running in the same region).
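For reference, a minimal sketch of such a handler, assuming a Lambda-style signature and a hypothetical characters table with strength/agility fields (the env var names are illustrative too):

// A Lambda-style handler; table/field names and env vars are assumptions.
exports.handler = async (event) => {
  // Hasura posts { action, input, session_variables } to the handler.
  const { input, session_variables } = JSON.parse(event.body);
  const userId = session_variables['x-hasura-user-id'];

  // Fetch the full character from Hasura before applying the item's stats
  // (uses the global fetch available in Node 18+).
  const res = await fetch(process.env.HASURA_GRAPHQL_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-hasura-admin-secret': process.env.HASURA_ADMIN_SECRET,
    },
    body: JSON.stringify({
      query: `query ($userId: uuid!) {
        characters(where: { user_id: { _eq: $userId } }) { id strength agility }
      }`,
      variables: { userId },
    }),
  });
  const { data } = await res.json();
  const character = data.characters[0];

  // ...apply input.itemId's stat modifiers to character here...

  return { statusCode: 200, body: JSON.stringify(character) };
};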
(Note: I'm from the Hasura team)

Related

rtk query - how to update cached data without calling updateCachedData

I am working with rtk query and I have 2 different sources of data:
an ongoing websocket connection that updates the cache with updateCachedData whenever a relevant notification is received,
http requests to the api to fetch existing data from the db
The cached data is stored inside of api > queries > endpointname > data.
When I make a particular api call I want to update the cached data with that result. updateCachedData is not available for mutations, so I am not sure how this can be achieved.
Also, should I keep a copy of that cached data inside of a normal slice?
Thanks
I've tried researching the subject but it's unclear. Some people state it's a good idea to keep a copy of that data inside of a normal slice and update it there, but then this data will be there indefinitely.
Hmm, it sounds like you are trying to use a mutation to keep data around for longer, and mutations are just not meant for that. A mutation is meant to make a change on the server, not more.
So I would assume that you are using a mutation for something that should rather be a query here. If you want to trigger it a little later, maybe use a lazy query or a query with a skip option.
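As a minimal sketch, assuming a hypothetical getItems endpoint:

import React from 'react';
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    // A query endpoint: its result lives in the RTK Query cache,
    // unlike a mutation's result.
    getItems: builder.query({ query: () => 'items' }),
  }),
});

function ItemsPanel() {
  // The lazy hook gives a trigger you can call whenever you want,
  // e.g. from a click handler, instead of firing a mutation.
  const [fetchItems, { data }] = api.useLazyGetItemsQuery();
  return <button onClick={() => fetchItems()}>{data ? data.length : 'Load'}</button>;
}

Because the result then lives in the query cache (under api > queries > getItems), the websocket's updateCachedData handler can keep patching the same entry.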

Firestore : Maintaining the count of a collection. Trigger function vs transaction

Let's say I have a collection called persons and another collection called cities with a field population. When a Person is created in a City, I would like to increment the population field in the corresponding city.
I have two options.
Create an onCreate trigger function. Find the city document and increment using FieldValue.increment(1).
Create an HTTPS callable cloud function to create the person. The cloud function executes a transaction in which the person is created and the population is incremented.
The first one is simpler and I am using it right now. But I am wondering if there could be cases where the onCreate is not called due to some glitch...
I am thinking of moving to the second option. I am wondering if there are any disadvantages. Does an HTTPS callable function cost more?
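Roughly, the two options from the question could look like this (collection and field names follow the question; the callable's payload shape is an assumption):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Option 1: an onCreate trigger that increments the counter.
exports.onPersonCreated = functions.firestore
  .document('persons/{personId}')
  .onCreate((snap) =>
    admin.firestore().doc(`cities/${snap.data().cityId}`).update({
      population: admin.firestore.FieldValue.increment(1),
    })
  );

// Option 2: an HTTPS callable that creates the person and increments the
// counter inside one transaction, so both succeed or both fail.
exports.createPerson = functions.https.onCall(async (data) => {
  const db = admin.firestore();
  await db.runTransaction(async (tx) => {
    tx.set(db.collection('persons').doc(), data.person);
    tx.update(db.doc(`cities/${data.cityId}`), {
      population: admin.firestore.FieldValue.increment(1),
    });
  });
});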
The only problem I see with HTTPS callables is that if something fails, you need to handle it on the client side. That is (at least for me) a little too much logic for the client side.
What I can recommend, after almost 4 years of experience with exactly this problem, is a solution with a virtual queue. I had a long discussion on this topic here, and even with the Firebase people at the last in-person Google I/O and Firebase Summit.
Our problem was that there were those glitches, and even when they didn't happen, changes and transactions sometimes failed due to too many requests. After trying every official recommendation like sharded counters etc., we ended up creating a virtual queue, where each onCreate adds an entry to a Firestore or RTDB list/collection, and another function (run either by cron or by another trigger, that doesn't matter) handles each entry in the queue one by one, starting again for each of them to avoid timeouts and memory limits. We made sure one handler/calculation is enough for a single function invocation to handle.
This method was the only bulletproof one that could handle thousands of new entries per second without issues. The only downside is that it takes more time than a normal trigger, because each entry is calculated one by one. If your calculations are small, you could do them in batches (that is how we started out).
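A minimal sketch of that queue approach, assuming a hypothetical populationQueue collection and the persons/cities model from the question:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Each new person enqueues a work item instead of touching the counter.
exports.enqueuePopulationUpdate = functions.firestore
  .document('persons/{personId}')
  .onCreate((snap) =>
    admin.firestore().collection('populationQueue').add({
      cityId: snap.data().cityId,
      createdAt: admin.firestore.FieldValue.serverTimestamp(),
    })
  );

// A scheduled function drains the queue one entry at a time.
exports.processPopulationQueue = functions.pubsub
  .schedule('every 1 minutes')
  .onRun(async () => {
    const queue = admin.firestore().collection('populationQueue');
    const next = await queue.orderBy('createdAt').limit(1).get();
    for (const doc of next.docs) {
      await admin.firestore().doc(`cities/${doc.data().cityId}`).update({
        population: admin.firestore.FieldValue.increment(1),
      });
      await doc.ref.delete();
    }
  });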

When is it better to call an API to update a component, and when to just change its state via its reducer?

I have two components. In the first one I have an array of objects I get by calling the API (in my useEffect, only if the array in the store is empty, to avoid unnecessary calls). In the second one, the same array but with buttons that call the API to DELETE or POST an object of that array on the server. My question is: is it better to create 2 actions in my second component, for making the api updates, and then filter or push that object in my first component's reducer? Or just make the first component always call the API and update itself?
I don't know if it's better to update the store as much as you can without relying on an API call, or whether always calling the API for updates is smoother.
I would recommend always updating the first component with an API call. That way you can be sure the data has been updated properly in the DB via the service call, and you will always get up-to-date data from the DB.
Doing more and more updates in the UI and keeping them in the store will leave you with dirty data after a while.
It also makes your app buggy: you have to keep reverting/clearing the data in the store in failure scenarios, which is easy to miss at some point.

How to run multiple Firestore Functions Sequentially?

We have 20 functions that must run every day. Each of these functions does something different based on inputs from the previous function.
We tried calling all the functions in one function, but it hits the timeout error, as these 20 functions take more than 9 minutes to execute.
How can we trigger these multiple functions sequentially, or avoid the timeout error for one function that executes each of them?
There is no configuration or easy way to get this done. You will have to set up a fair amount of code and infrastructure.
The most straightforward solution involves chaining together calls using pubsub type functions. You can send a message to a pubsub topic that will trigger the next function to run. The payload of the message to send can be the parameters that the function should use to determine how it should operate. If the payload is too big, or some more complex sources of data are required to make that decision, you can use a database to store intermediate data that the next function can query and use.
Since we don't have any more specific details about how your functions actually work, nothing more specific can be said. If you run into problems with a specific detail of this scheme, please post again describing specifically what you're trying to do and what's not working the way you expect.
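A rough sketch of one link in such a chain (topic names and the doStep1Work helper are hypothetical):

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// Runs when the previous step publishes to 'step-1'; publishes to 'step-2'.
exports.step1 = functions.pubsub.topic('step-1').onPublish(async (message) => {
  const params = message.json;              // parameters from the previous step
  const result = await doStep1Work(params); // hypothetical work function
  // Hand the result to the next function by publishing it as the payload.
  await pubsub.topic('step-2').publishMessage({ json: result });
});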
There is a variant of Doug's solution. At the end of the function, instead of publishing a message to Pub/Sub, simply write a specific log line (for example " end").
Then go to Stackdriver Logging, search for this specific log trace (turn on advanced filters), and configure a sink for this log entry into a Pub/Sub topic. That way, every time the log is detected, a Pub/Sub message is published with the log content.
Finally, plug your next function into this Pub/Sub topic.
If you need to pass values from one function to another, you can simply add these values to the log trace at the end of the function and parse them at the beginning of the next one.
Chaining functions is not an easy thing to do. Things are coming; maybe Google Cloud Next will announce new products to help with this task.
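Sketched below, with the log format, topic name, and LogEntry parsing as assumptions:

const functions = require('firebase-functions');

// At the end of the first function, emit the marker log with any values:
console.log(`myFunc end ${JSON.stringify({ count: 42 })}`);

// The sink forwards the matching LogEntry to a Pub/Sub topic; the next
// function parses the values back out of the entry's textPayload.
exports.nextFunc = functions.pubsub.topic('myfunc-end-logs').onPublish((message) => {
  const entry = message.json;     // the forwarded LogEntry
  const text = entry.textPayload; // e.g. 'myFunc end {"count":42}'
  const values = JSON.parse(text.slice(text.indexOf('{')));
  // ...continue with values.count
});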
If you simply want the functions to execute in order, and you don't need to pass the result of one directly to the next, you could wrap them in a scheduled function (docs) that spaces them out with enough time for each to run.
Sketch below with 3 minute spacing:
exports.myScheduler = functions.pubsub
  .schedule('every 3 minutes from 22:00 to 23:00')
  .onRun((context) => {
    // Derive the current HH:mm from the event timestamp (RFC 3339, UTC).
    const time = new Date(context.timestamp).toISOString().slice(11, 16);
    if (time === '22:00') func1of20();
    else if (time === '22:03') func2of20();
    // etc. through func20of20()
  });
If you do need to pass the results of each function to the next, func1 could store its result in a DB entry, then func2 starts by reading that result, and ends by overwriting with its own so func3 can read when fired 3 minutes later, etc. — though perhaps in this case, the other solutions are more tailored to your needs.
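For that hand-off, a sketch (the document path and work functions are illustrative):

const admin = require('firebase-admin');
admin.initializeApp();

async function func1of20() {
  const result = await doStep1(); // hypothetical work
  // Persist the result so the next scheduled invocation can pick it up.
  await admin.firestore().doc('pipeline/latest').set({ step: 1, result });
}

async function func2of20() {
  // Read the previous step's output, do the work, then overwrite for func3.
  const { result } = (await admin.firestore().doc('pipeline/latest').get()).data();
  const next = await doStep2(result); // hypothetical work
  await admin.firestore().doc('pipeline/latest').set({ step: 2, result: next });
}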

firebase database equivalent of MySQL transaction

I'm looking for something where I can thread a single object through multiple updates to multiple firebase.database.References (before performing a commit), then commit it at the end, and if it is unsuccessful, no changes are made to any of my Firebase References.
Does this exist? I thought firebase.database.Transaction would be similar, since it is an atomic update and it does involve a callback which says whether it has been committed, but the update function, I believe, only works on a single object, and it doesn't seem to return a transaction ID or anything I could pass to other firebase.database.Transactions.
UPDATE
Firestore's Transaction.update seems to return a Transaction, which would perhaps lend itself to chaining: https://firebase.google.com/docs/reference/js/firebase.firestore.Transaction
However, this is different from the Realtime Database Transaction:
Firebase Database transactions perform an update to a single location based on the current value of that same location. They explicitly do not work across multiple locations, since that would limit their scalability. Sometimes developers work around this by performing a transaction higher up in their JSON tree (at the first common point of the locations). I'd recommend against that, as that would limit the scalability even further.
The only way to efficiently update multiple locations with one API call is with a multi-location update. However, this does not have reading of the current value built in.
So if you want to update multiple locations based on their current value, you'll have to perform the read operation in your application code, turn it into a multi-location update, and then use security rules to ensure all of those updates follow your application rules. This is quite a non-trivial approach, so I rarely see it done in practice. See my answer here for an example: Is the way the Firebase database quickstart handles counts secure?
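As a sketch of such a read-then-multi-location-update (the paths and the v8 namespaced Web SDK are assumptions):

// Assumes the namespaced (v8) Web SDK: firebase.database(), etc.
async function addPersonToCity(personId, cityId) {
  const db = firebase.database();
  // Perform the read in application code first...
  const snap = await db.ref(`cities/${cityId}/population`).once('value');
  // ...then write both locations atomically in a single call: either every
  // path in this object is committed, or none of them are.
  await db.ref().update({
    [`cities/${cityId}/population`]: snap.val() + 1,
    [`persons/${personId}/cityId`]: cityId,
  });
}

Note that the read itself is not part of the atomic write, which is why the answer leans on security rules to validate the result.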
