Google Cloud Functions minInstances Pricing - firebase

Is there any pricing information regarding deploying Cloud Functions with minInstances greater than 0? Also, when I deployed the function with runWith({ minInstances: 1 }), the edit-function page at console.cloud.google.com did not reflect the change.

You can find pricing info on the Firebase Pricing Page. Note this line, which is what you need to figure out the cost of minimum instances: "Kept idle" refers to idle billable time of instances kept warm using minimum instances.
You can also get a quote when you deploy:
Change your function to use minInstances, e.g. from:
export const example = functions.https.onCall( ...
to:
export const example = functions.runWith({ minInstances: 1 }).https.onCall( ...
Deploy via firebase deploy --only functions
The command line should prompt you with a quote and confirmation input such as:
functions: The following functions have reserved minimum instances. This will reduce the frequency of cold starts but increases the minimum cost. You will be charged for the memory allocation and a fraction of the CPU allocation of instances while they are idle.
With these options, your minimum bill will be $153.75 in a 30-day month
? Would you like to proceed with deployment? (y/N)
Also, a tip: I use Cloud Scheduler to keep my functions warm, as it's a small fraction of the cost and works just as well for my use case.
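A minimal sketch of that warm-up approach, assuming an HTTP function named example (the URL below is a placeholder for your own project):

const functions = require('firebase-functions');
const https = require('https');

// ping the target function every 5 minutes so an instance stays resident most of the time
exports.keepWarm = functions.pubsub.schedule('every 5 minutes').onRun(() => {
  https.get('https://us-central1-<your-project>.cloudfunctions.net/example', (res) => res.resume());
  return null;
});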

Related

How to get rid of Firebase functions max timeout

Currently, I'm developing an app with Firebase (Firestore, Cloud Functions, Storage, etc.).
I have one pub/sub job which runs monthly. (This job iterates over thousands of items, and each item takes 1-3 seconds to process.)
Therefore, the function hits the 540-second timeout (the max limit for Firebase Functions).
From my research, I need to move to App Engine (or something like that) or split the work into multiple jobs (but I'm not sure how to split it).
Could you share how you solved this problem?
Thank you.
The timeout cannot be increased beyond that. You can switch to Cloud Functions Gen 2, which has a 1-hour max timeout for HTTP functions instead of 9 minutes. If that is insufficient, it would be best to use something like Compute Engine.
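For example, a minimal Gen 2 sketch (the function name, memory setting, and body here are placeholders, not from the question):

const { onRequest } = require('firebase-functions/v2/https');

// 2nd-gen HTTP functions accept timeoutSeconds up to 3600 (1 hour)
exports.monthlyJob = onRequest({ timeoutSeconds: 3600, memory: '1GiB' }, async (req, res) => {
  // iterate over the items here
  res.send('done');
});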

Firebase Get All Auth Users taking forever in Cloud Run

We're trying to fetch 86k of our Firebase users. Locally and on Firebase Functions it takes 2 minutes for all of them, but in Cloud Run it takes on average 20 seconds per call (you can only request 1k users per call according to the Firebase docs).
Interestingly, fetching all users from the Firebase Realtime Database takes 15s elsewhere, but in Cloud Run it took 365s.
2022-06-17T00:03:04.986000061Z grabbed users data from db, total: 86442 in 364.015s
2022-06-17T00:03:05.732000112Z Progress 1000 0.746s
2022-06-17T00:03:15.131999969Z Progress 2000 9.847s
2022-06-17T00:03:39.332999944Z Progress 3000 34.347s
2022-06-17T00:04:03.832999944Z Progress 4000 58.846s
2022-06-17T00:04:28.433000087Z Progress 5000 83.447s
2022-06-17T00:04:51.733000040Z Progress 6000 106.747s
2022-06-17T00:05:58.332000017Z Progress 7000 172.947s
Any thoughts on how to solve this? There are no special network settings in place on Cloud Run.
Background Info:
The Cloud Run instance uses Node.js 14 with 2 GB of memory, which stays at 8% usage. CPU usage stays around 10%. The user object is relatively small, but across all these users it adds up to about 60-70 MB. In Firebase Functions, only 256 MB of memory is needed to do the fetching.
PS: I've yet to test whether region makes a difference, as Cloud Run is in us-east1 and the functions are in us-central1. Will be testing soon.
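For reference, a minimal sketch of the paginated fetch being timed above, using the Admin SDK's listUsers() (a page holds at most 1000 users; the helper name is a placeholder):

const admin = require('firebase-admin');
admin.initializeApp();

async function fetchAllUsers() {
  const users = [];
  let pageToken;
  do {
    // each call returns up to 1000 users plus a token for the next page
    const page = await admin.auth().listUsers(1000, pageToken);
    users.push(...page.users);
    pageToken = page.pageToken;
  } while (pageToken);
  return users;
}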

Firebase Functions returns error of Bandwidth Exhausted

We are using Firebase Functions with a few different HTTP functions.
One of the functions runs via a manual trigger from our website. It then pulls in a lot of data from an external resource and saves it into our Firestore database. Our function resources are Node.js 10, 1 GB of memory, and a 540s timeout.
However, when we have large datasets that we need to pull in, e.g. 5,000-10,000 records to write to the database, we start running into issues. On large datasets we receive this error:
8 RESOURCE_EXHAUSTED: Bandwidth exhausted
The full error on Firebase Functions Health Dashboard logs looks like this:
Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
at Http2CallStream.outputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:117:74)
at Http2CallStream.maybeOutputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:156:22)
at Http2CallStream.endCall (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:142:18)
at ClientHttp2Stream.stream.on (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:420:22)
at ClientHttp2Stream.emit (events.js:198:13)
at ClientHttp2Stream.EventEmitter.emit (domain.js:466:23)
Our Firebase project is on the blaze plan and also, on GCP connected to an active billing account.
Upon inspection on GCP, it seems like we are NOT exceeding our WRITES-per-minute quota, as previously thought; however, we are exceeding our Cloud Build limit. We are also using batched writes when we save data to Firestore from within the function, which also seems to reduce the number of db writes, e.g. along the lines of the sketch below.
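For illustration, a typical batched write (the records array and the items collection are placeholders):

const batch = admin.firestore().batch();
records.forEach((record) => {
  // each set() still counts as one document write; a batch holds at most 500 operations
  const ref = admin.firestore().collection('items').doc(record.id);
  batch.set(ref, record);
});
await batch.commit();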
We don't use Cloud Build directly, so I assume Firebase Functions uses Cloud Build in the back end to run the functions or something, but I can't find any documentation on the matter. We also have a few Firestore database functions that run when documents are created. Not sure whether those use Cloud Build in the back end or not.
Any idea why this would happen? Whenever it happens, our function gets terminated with that error, which causes us to import only half of our data. The data import works flawlessly with smaller amounts of data.
Cloud Build is used during the deployment of Cloud Functions. If you check this documentation you can see that:
Deployments work by uploading an archive containing your function's source code to a Google Cloud Storage bucket. Once the source code has been uploaded, Cloud Build automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions uses that image to create the container that executes your function.
This by itself is not enough to justify the charges you are seeing, but if you check the container image documentation it says:
Because the entire build process takes place within the context of your project, the project is subject to the pricing of the included resources:
For Cloud Build pricing, see the Pricing page. This process uses the default instance size of Cloud Build, as these instances are pre-warmed and are available more quickly. Cloud Build does provide a free tier: please review the pricing document for further details.
So with that information in mind, I would make an educated guess that your website is triggering the HTTP function enough times to make Cloud Functions scale this particular function up with new instances of it, which triggers a build process for the container that hosts the function and shows up as a Cloud Build charge. So to keep doing what you're doing, you are going to have to increase your Cloud Build quota to meet this demand from your website.
There was a Firestore trigger that was firing on new records of the same type I was importing.
So in short: I was creating thousands of records in a collection, and for every one of those the Firestore trigger (function) fired. What I did not know at the time is that this created a new build process in the background for each Firestore trigger that ran, which is not documented anywhere.

Google Cloud Scheduler Run at Set Times Every Minute

I am trying to call an API every minute for ski-lift status and check for changes. I am going to store whether the lift is open or closed in Firebase (Realtime Database), check whether the value from the API is different, and only update/write to that node when the value has changed. Then I can set up a Cloud Function that listens for database changes and sends push notifications to the list of FCM tokens for that channel. I am not sure if this is the most efficient way, but I was going to set up scheduled functions to call the third-party API.
I have been using these docs:
https://firebase.google.com/docs/functions/schedule-functions
I was planning to do something like this:
exports.scheduledFunction = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  // call my API in here and update the database if the snapshot back is different
  return null;
});
I was wondering how I would run it only between set times, say 8am-6pm EST. I am struggling to find anything about restricting run times. Should I just run the function every minute and then pause and resume by checking the time? In which case, how does it know to keep checking the time while it is paused?
Firebase scheduled functions use Cloud Scheduler to implement the schedule. It accepts cron style time specifiers to indicate when a job should be run. The full spec for that can be found here. You will have to use ranges of numbers to indicate the valid times and frequency of the schedule. For example, you might use "8-18" in the hour field to limit the hours of execution.
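For example, a cron-style schedule along these lines would fire every minute from 8:00 through 18:59 Eastern time (the function name and exact hour range are assumptions based on the question):

exports.liftStatusCheck = functions.pubsub
  .schedule('* 8-18 * * *')
  .timeZone('America/New_York')
  .onRun(async (context) => {
    // call the ski lift API here and update the database if the status changed
    return null;
  });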

How is Cloud Functions for Firebase memory usage calculated for static variables?

In my index.js file, before I declare the exported functions I have a preprocessing step where I read some files from the cloud and build a couple of large hashmaps. The exported cloud functions use these hashmaps.
Do these hashmaps persist in the memory of the machine even when the function isn't called, and count toward the gb-seconds limitations?
GB-seconds are only billed while a function is running on an allocated instance. When there is no function running, there is no billing, but you cannot be certain that any state from a previous run will be available.
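In practice that means module-level state like those hashmaps only occupies memory while an instance exists, and a fresh instance rebuilds it on its next cold start. A common pattern is to build the maps lazily once per instance and reuse them across warm invocations; here is a minimal sketch, where buildHashmapsFromCloudFiles() and the map/function names are hypothetical placeholders for your preprocessing step:

const functions = require('firebase-functions');

let hashmaps = null;

async function getHashmaps() {
  // built once per instance (on cold start), then reused by warm invocations
  if (!hashmaps) {
    hashmaps = await buildHashmapsFromCloudFiles();
  }
  return hashmaps;
}

exports.lookup = functions.https.onCall(async (data, context) => {
  const { mapA } = await getHashmaps();  // mapA is one of the preprocessed hashmaps
  return mapA[data.key];
});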
