Firebase Functions returns "Bandwidth Exhausted" error

We are using Firebase Functions with a few different HTTP functions.
One of the functions runs via a manual trigger from our website. It then pulls in a lot of data from an external resource and saves it into our Firestore database. Our function resources are Node.js 10, 1 GB of memory, and a 540s timeout.
However, when we have large datasets that we need to pull in, e.g. 5,000-10,000 records to write to the database, we start running into issues. On large datasets we receive the error:
8 RESOURCE_EXHAUSTED: Bandwidth exhausted
The full error on Firebase Functions Health Dashboard logs looks like this:
Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
    at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
    at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
    at Http2CallStream.outputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:117:74)
    at Http2CallStream.maybeOutputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:156:22)
    at Http2CallStream.endCall (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:142:18)
    at ClientHttp2Stream.stream.on (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:420:22)
    at ClientHttp2Stream.emit (events.js:198:13)
    at ClientHttp2Stream.EventEmitter.emit (domain.js:466:23)
Our Firebase project is on the Blaze plan and is connected to an active billing account on GCP.
Upon inspection in GCP, it seems we are NOT exceeding our writes-per-minute quota, as previously thought; however, we are exceeding our Cloud Build limit. We are also using batched writes when we save data to Firestore from within the function, which reduces the number of database write calls, e.g. by chunking records into batches.
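For illustration, the batching pattern might look like this sketch with the Admin SDK (not our exact code; the collection name and record shape are assumptions):

const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

async function saveRecords(records) {
  // Firestore allows at most 500 operations per batch, so chunk the input.
  const CHUNK = 500;
  for (let i = 0; i < records.length; i += CHUNK) {
    const batch = db.batch();
    for (const record of records.slice(i, i + CHUNK)) {
      const ref = db.collection('importedRecords').doc(); // hypothetical collection, auto ID
      batch.set(ref, record);
    }
    await batch.commit(); // one commit per chunk instead of one write per record
  }
}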
We don't use Cloud Build ourselves, so I assume Firebase Functions uses Cloud Build behind the scenes to run the functions, but I can't find any documentation on the matter. We also have a few Firestore database functions that run when documents are created; I'm not sure whether those use Cloud Build behind the scenes either.
Any idea why this would happen? Whenever it does, our function gets terminated with that error, which causes us to import only half of our data. The import works flawlessly with smaller amounts of data.
See our usage for this particular project: [screenshot of our Cloud Build usage]

Cloud Build is used during the deployment of Cloud Functions. If you check this documentation you can see that:
Deployments work by uploading an archive containing your function's source code to a Google Cloud Storage bucket. Once the source code has been uploaded, Cloud Build automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions uses that image to create the container that executes your function.
This by itself is not enough to justify the charges you are seeing, but if you check the container image documentation it says:
Because the entire build process takes place within the context of your project, the project is subject to the pricing of the included resources:
For Cloud Build pricing, see the Pricing page. This process uses the default instance size of Cloud Build, as these instances are pre-warmed and are available more quickly. Cloud Build does provide a free tier: please review the pricing document for further details.
So with that information in mind, I would make an educated guess that your website is triggering the HTTP function enough times for Cloud Functions to scale this particular function up with new instances, which triggers a build process for the container that hosts the function and shows up as a Cloud Build charge. So to keep doing what you're doing, you are going to have to increase your Cloud Build quota to meet the demand from your website.

There was a Firestore trigger that fired on new records of the same type I was importing.
So in short, I was creating thousands of records in a collection, and for every one of those the Firestore trigger (function) fired. What I did not know at the time is that this created a new build process in the background for each Firestore trigger that ran, which is not documented anywhere.

Related

Cosmos DB Emulator hangs when pumping continuation token, segmented query

I have just added a new feature to an app I'm building. It uses the same working Cosmos/Table storage code that other features use to query and pump result segments from the Cosmos DB Emulator via the Tables API.
The emulator is running with:
/EnableTableEndpoint /PartitionCount=50
This is because I read that the emulator defaults to 5 unlimited containers and/or 25 limited ones, and since this is a Tables API app, the table containers are created as unlimited.
The table being queried is the 6th to be created and contains just 1 document.
It either takes around 30 seconds to run a simple query, "tripping" my Too Many Requests error handling/retry in the process, or it hangs seemingly forever with no results returned, and the emulator has to be shut down.
My understanding is that with 50 partitions I can make 10 unlimited tables/collections, since each is "worth" 5. See the documentation.
I have tried with rate limiting on and off, and jacked the RU/s to 10,000 on the table. It always fails to query this one table. The data, including the files on disk, has been cleared many times.
It seems like a bug in the emulator. Note that the "Sorry..." error that I would expect to see upon creation of the 6th unlimited table, as per the docs, is never encountered.
After switching to a real Cosmos DB instance on Azure, this is looking like a problem with my dodgy code.
Confirmed: my dodgy code.
Stand down everyone. As you were.

How many clients are connected to my Firestore?

I am working on a Flutter app that fetches 341 documents from Firestore. After 2 days of analysis I found out that my read requests are increasing too much, so I made a chart in the Stackdriver Metrics Explorer, from which I learned that my app reads the 341 docs only a single time; it's the Firebase console that is increasing my reads.
Now, here are the questions that are bothering me:
1) How are reads counted when we view data in the console, and how can I reduce my read requests? Basically there are 341 docs, but it shows more than 600 reads whenever I refresh my console.
2) As you can see in the picture, there are two types of document reads, 'LOOKUP' and 'QUERY'; what's the exact difference between them?
3) I am getting data from Firestore with a single instance, and when I open my app the chart shows 1 active client, which is cool, but within the next 5 minutes the number of active clients starts to increase.
Can anybody please explain to me why this is happening?
For the last question, I tried disabling all the service accounts and then opened my app again, but got the same result.
Firestore.instance
    .collection("Lectures")
    .snapshots(includeMetadataChanges: true)
    .listen((d) {
  print(d.metadata.isFromCache);   // prints false every time
  print(d.documents.length);       // 341
  print(d.documentChanges.length); // 341
});
This is the snippet I am using. When the app starts it runs only once.
I will try to answer your questions:
How are reads counted when we view data in the console, and how can I reduce my read requests? Basically there are 341 docs, but it shows more than 600 reads whenever I refresh my console.
Reads are counted based on how you query your Firestore database, plus any access to the database from the console, so using the Firebase console itself incurs reads. Even if you leave the console open while doing other things, new changes to the database will automatically incur reads as well. Any document read from the server is billed; it doesn't matter where the read came from, and the console is included in that.
Check this official documentation; under the "Manage data" title you can see the note: "Note: Read, write, and delete operations performed in the console count towards your Cloud Firestore usage."
That said, if you think there is an issue with this, you can contact Firebase support directly to get a more detailed answer.
However, if you check the free plan of Firebase, you can see that you have 50K free reads per day.
A workaround that I found for this (thanks to Dependar Sethi):
Bookmark the Usage tab of the Firestore page, so you basically skip the Data tab.
Add a dummy collection in a way that ensures it is the first collection (alphabetically), which gets loaded by default on the Firestore page.
You can find his full solution here.
Also, you can optimise your queries to retrieve only the data you want, using the where() method and pagination with Firebase.
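For example, a sketch with the JavaScript SDK (db is an initialized Firestore instance; the "Lectures" collection comes from your snippet, while the filter field and page size are made up):

// First page: filter with where() and cap the result size with limit().
const first = await db.collection('Lectures')
  .where('semester', '==', 'current') // hypothetical filter field
  .orderBy('title')
  .limit(20)
  .get();

// Next page: continue after the last document of the previous page.
const last = first.docs[first.docs.length - 1];
const next = await db.collection('Lectures')
  .where('semester', '==', 'current')
  .orderBy('title')
  .startAfter(last)
  .limit(20)
  .get();

This way each screen of your app reads 20 documents instead of all 341 at once.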
As you can see in the picture, there are two types of document reads, 'LOOKUP' and 'QUERY'; what's the exact difference between them?
I guess there is no important difference between them, but "QUERY" is getting the actual data (when you call the data() method) while "LOOKUP" is getting a reference to that data (without calling the data() method).
I am getting data from Firestore with a single instance, and when I open my app the chart shows 1 active client, which is cool, but within the next 5 minutes the number of active clients starts to increase.
For this question, considering the metrics you chose in Stackdriver, I can see 3 connected clients, and as per the description of the "connected client" metric:
The number of active connections. Each mobile client will have one connection. Each listener in admin SDK will be one connection. Sampled every 60 seconds. After sampling, data is not visible for up to 240 seconds.
So please check how many mobile clients are connected to this instance and how many listeners you have in your app. The sum of all of them is the actual number of connected clients that you are seeing in Stackdriver.

Issue using Cloud Functions to denormalize Firestore data

I am using many Cloud Function triggers and the Admin SDK to do multi-path updates. I don't want to do too many multi-path updates on the client, because that would make my Firestore rules very complex, and Firestore rules also have document access call limits. So I decided to use Cloud Functions to do most of the denormalization work.
Here is how one of my functions works:
1. The Cloud Function is triggered at profiles/{userId}.
2. It uses .get() to load the paths needed for the multi-path update from profilesPaths/{userId}.
3. It sets writeBatch.update() on those paths.
4. writeBatch.commit()
And I think there's a problem: Cloud Functions are asynchronous, right? So suppose the function is at step 3, and at that same moment a client deletes one of the update paths from the document the function already loaded at profilesPaths/{userId} (loaded in step 2); the document the function loaded is now not the latest version. Can this happen? Or should I use transactions to lock those documents?
Yes, Cloud Functions run asynchronously, and possibly in parallel. You should be using transactions to make sure that updates stay consistent among all the clients that are trying to modify the same documents concurrently.
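A minimal sketch of the transactional version of your steps 1-4 with the Admin SDK (the paths document is assumed to hold an array field, here called paths, and the fan-out payload is hypothetical):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.fanOutProfile = functions.firestore
  .document('profiles/{userId}')
  .onWrite(async (change, context) => {
    if (!change.after.exists) return; // deletion: skip fan-out in this sketch
    const userId = context.params.userId;
    await db.runTransaction(async (tx) => {
      // The read happens inside the transaction, so if profilesPaths/{userId}
      // changes before the commit, the transaction retries with fresh data.
      const pathsSnap = await tx.get(db.doc(`profilesPaths/${userId}`));
      const paths = pathsSnap.get('paths') || []; // hypothetical field name
      for (const path of paths) {
        // update() throws if the target no longer exists;
        // tx.set(ref, data, { merge: true }) is a softer option.
        tx.update(db.doc(path), { profile: change.after.data() }); // hypothetical payload
      }
    });
  });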

Firebase error: TOO_MANY_TRIGGERS

In our Firebase application there is a list with lots of items in the Realtime Database. Every create, update, and delete operation on a single item is processed by a Firebase Cloud Function with an onWrite trigger (in the simplest case this function just counts items). But sometimes there is a need for a bulk operation on items that does not require individual processing. Let's say we want to remove all items and reset the counters in a single transaction.
Earlier this worked just fine. Due to the limit of 1000 on the number of Cloud Functions triggered by a single write (https://firebase.google.com/docs/database/usage/limits), no functions were triggered at all, and that was the desired outcome.
Now, without any change to the application code, we get this error:
Error: TOO_MANY_TRIGGERS: This request would cause too many functions to be triggered.
The same error appears in the client application, the Admin API, and even when importing JSON using the web interface. The only option that works for us is processing the items in batches, but that is not transactional and takes up to tens of minutes instead of milliseconds as before.
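Our batch processing looks roughly like this (a simplified sketch; the items path and chunk size are stand-ins):

const admin = require('firebase-admin');
admin.initializeApp();

async function deleteAllItems() {
  const ref = admin.database().ref('items'); // hypothetical path
  for (;;) {
    // Stay well under the 1000-trigger limit per write.
    const snap = await ref.limitToFirst(500).once('value');
    if (!snap.exists()) break;
    const updates = {};
    snap.forEach((child) => { updates[child.key] = null; }); // null removes a key
    await ref.update(updates); // one multi-path write per chunk
  }
}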
What options do we have to bypass this error? Ideally there would be some switch to skip function triggering when the limit is exceeded.
For anybody reading this question post-2018, there is now an option to disable strict enforcement for trigger limits.
Strict validation is enabled by default for write operations that trigger events. Any write operations that trigger more than 1000 Cloud Functions or a single event greater than 1 MB in size will fail and return an error reporting the limit that was hit. This might mean that some Cloud Functions aren't triggered at all if they fail the pre-validation.
If you're performing a larger write operation (for example, deleting your entire database), you might want to disable this validation, as the errors themselves might block the operation.
To turn off strictTriggerValidation, follow these steps:
Get your Database secret from the Service accounts tab of your Project settings in the Firebase console.
Run the following CURL request from your command line:
curl -X PUT -d "false" https://NAMESPACE.firebaseio.com/.settings/strictTriggerValidation/.json?auth\=SECRET
See here for the docs: https://firebase.google.com/docs/database/usage/limits
There is currently no way to prevent triggers from running in special circumstances. The only way around this is to undeploy all your triggers, perform your updates, then deploy all your triggers again.
I would encourage you to file a feature request for this.
I just got this error message in an older Flutter project that I hadn't touched in quite some time.
[firebase_database/unknown] TOO_MANY_TRIGGERS: This request would cause too many functions to be triggered.
It turned out that here it was caused by the fact that my Cloud Functions were still set to use Node v8, which was retired in early 2021.
Upgrading the Cloud Functions to use Node v12 (no other changes needed) removed the error message for me.
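If it helps, the runtime is declared via the engines field in the functions directory's package.json (assuming the standard Firebase CLI project layout):

{
  "engines": {
    "node": "12"
  }
}

After changing it, redeploy with firebase deploy --only functions.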
Turning off strictTriggerValidation solved my issue.
If you are using the Firebase CLI tools, you can follow these steps.
To turn off strictTriggerValidation for the entire project (macOS):
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id*
If you need to turn it off for a particular instance (macOS):
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id* --instance *my_instance_name*
To check instances:
sudo firebase database:instances:list --project *my_project_id*
Note: Windows users, please try without sudo.
FYR:
Limitations: https://firebase.google.com/docs/database/usage/limits
Firebase CLI Commands: https://firebaseopensource.com/projects/firebase/firebase-tools/

Google Cloud Functions with ECONNRESET errors until I redeploy

I'm using Google Cloud Functions to:
1. Watch for a new Firebase entry.
2. Download the file that's referenced in the Firebase entry.
3. Generate a thumbnail based on that file.
4. Upload the thumbnail to the cloud bucket.
Unfortunately I'm getting ECONNRESET errors repeatedly on step 4, and the only way to fix it seems to be to redeploy the function. Any ideas how to further debug this?
Edit:
It seems that often when this happens and I try to deploy the function again, the deploy errors out and I have to run it twice. Is something hanging?
Update May 9 2017
According to this thread, the Google Cloud Node.js API developers have made some changes to the defaults used at initialization that should solve these ECONNRESET socket issues.
From @stephen++ on GitHub (GoogleCloudPlatform/google-cloud-node issue 2254):
We have disabled the forever agent by default in Cloud Function environments. If you un- and re-install @google-cloud/storage, you will pick up the new behavior automatically. Thanks for all of the helpful debugging, everyone!
Older Post Follows:
The solution for me for similar ECONNRESET issues using Storage on the Cloud Functions platform was npm:promise-retry, but set up your own retry strategy, because the default of 10 retries is too many.
I reported an ECONNRESET issue with Cloud Functions to Google Support (which you might star if you are also getting ECONNRESET in this context but not in others) and they replied with a "won't fix": the behavior is expected. Google support said the socket that the API client library uses to connect times out after a few minutes, and then when your Cloud Function tries to use it again you get ECONNRESET. They recommended adding autoRetry: true when initializing the storage API, but that did not help.
The ECONNRESETs happen on the read side too. In both the read and write cases promise-retry helps, and most of the time only 1 retry is needed to reset the bad socket.
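A minimal sketch of that pattern, assuming the current @google-cloud/storage API (the bucket and file names are placeholders):

const promiseRetry = require('promise-retry');
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

// Wrap an upload in a small number of retries; the promise-retry default
// of 10 is more than needed, since one retry usually resets the bad socket.
function uploadWithRetry(localPath, bucketName, destination) {
  return promiseRetry(
    (retry) => storage.bucket(bucketName)
      .upload(localPath, { destination })
      .catch(retry), // any failure (e.g. ECONNRESET) triggers a retry
    { retries: 2, minTimeout: 500 }
  );
}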
So I wrote npm:pipe-to-storage to return a promise that does the retries, checks md5, etc., but I haven't tested it with binary data, only text, so I don't know if you can use it with image files. The calls would look like this:
const fs = require('fs');
const storage = require('@google-cloud/storage')(); // pre-2.x constructor style
const pipeToStorage = require('pipe-to-storage')(storage);

// source is a factory so a fresh read stream is created for each retry
const source = () => fs.createReadStream('/path/to/your/file/to/upload');

pipeToStorage(source, bucketName, fileNameInBucket).then(() => {
  // do next step
});
See also How do I read the contents of a new cloud storage file of type .json from within a cloud function?
You can report a bug directly to the Firebase support team, or open a support ticket with Firebase to troubleshoot a specific issue.
You may also report a Cloud Functions specific issue in the Google Issue Tracker, which is similar to Stack Overflow in that it is publicly accessible (but is specifically used for filing issue reports).