We're trying to fetch 86k of our Firebase users. Locally and on Firebase Functions it takes about 2 minutes for all of them, but on Cloud Run it averages 20 seconds per call (you can only request 1k users per call according to the Firebase docs).
Interestingly, getting all users from the Firebase Realtime Database takes 15s, but on Cloud Run it took 365s.
2022-06-17T00:03:04.986000061Z grabbed users data from db, total: 86442 in 364.015s
2022-06-17T00:03:05.732000112Z Progress 1000 0.746s
2022-06-17T00:03:15.131999969Z Progress 2000 9.847s
2022-06-17T00:03:39.332999944Z Progress 3000 34.347s
2022-06-17T00:04:03.832999944Z Progress 4000 58.846s
2022-06-17T00:04:28.433000087Z Progress 5000 83.447s
2022-06-17T00:04:51.733000040Z Progress 6000 106.747s
2022-06-17T00:05:58.332000017Z Progress 7000 172.947s
Any thoughts on how to solve this? There are no special network settings in place on Cloud Run.
Background Info:
The Cloud Run instance is using Node.js 14 with 2 GB of memory, which stays at 8% usage. CPU usage stays around 10%. Each user object is relatively small, but across all these users it adds up to about 60-70 MB. On Firebase Functions, only 256 MB of memory is needed to do the fetching.
PS: I've yet to test whether region makes a difference, as Cloud Run is in us-east1 and the functions are in us-central1. Will be testing soon.
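For reference, the fetch loop is essentially the standard Admin SDK pagination. A minimal sketch of what runs on Cloud Run (assumed, since our exact code isn't shown here):

```ts
import { initializeApp } from "firebase-admin/app";
import { getAuth, UserRecord } from "firebase-admin/auth";

initializeApp();

// Page through all users, 1,000 at a time (the listUsers maximum).
async function fetchAllUsers(): Promise<UserRecord[]> {
  const users: UserRecord[] = [];
  const started = Date.now();
  let pageToken: string | undefined;
  do {
    const page = await getAuth().listUsers(1000, pageToken);
    users.push(...page.users);
    pageToken = page.pageToken;
    console.log(`Progress ${users.length} ${((Date.now() - started) / 1000).toFixed(3)}s`);
  } while (pageToken);
  return users;
}
```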
Related
Currently, I'm developing an app with Firebase (Firestore, Firebase Functions, Storage, etc.).
I have one Pub/Sub job which runs monthly. (This job iterates over thousands of items, and each item takes 1-3 seconds to process.)
Because of that, the function hits the 540-second timeout (the max limit for Firebase Functions).
From my research, I need to move to App Engine (or something like that) or split the work into multiple jobs (but I'm not sure how to split it).
Could you share how you solved this problem?
Thank you.
The timeout cannot be increased. You can switch to Cloud Functions 2nd gen, which has a 1-hour max timeout for HTTP functions instead of 9 minutes. If that is insufficient, it would be best to use something like Compute Engine.
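A minimal sketch of a 2nd-gen HTTP function with the extended timeout (the 60-minute cap applies to HTTP-triggered functions; event-driven 2nd-gen functions are still limited to 9 minutes). The name and options here are illustrative, and you would invoke it monthly via Cloud Scheduler:

```ts
import { onRequest } from "firebase-functions/v2/https";

// Hypothetical monthly job exposed as a 2nd-gen HTTP function.
export const monthlyJob = onRequest(
  { timeoutSeconds: 3600, memory: "1GiB" },
  async (req, res) => {
    // ...iterate over the items here, 1-3 seconds each...
    res.status(200).send("done");
  }
);
```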
I am trying to set up the Azure Cosmos DB Emulator to work locally for integration tests, but I found that it is very slow.
I am reading a ~1 KB JSON document with the container.ReadItemAsync<T> method and awaiting the result. I am calling this method in a loop, 100 times.
The execution time is consistently around 9.5-10 seconds, so one request takes around 100 milliseconds, which is very slow given that the service is running locally.
Why is this so slow and how can I make it faster?
I expect at most 1 ms per request, considering it is all local disk I/O.
I tried the following, but they didn't help:
Turning Rate Limiting on/off
creating the database/collection with various provisioning settings, it has zero effect on performance (even 100k RU)
creating the db and collection manually vs with the client SDK
"Reset Data" menu in the emulator tray menu
Further information:
The emulator version is 2.14.6.0 (68d4ca59)
I start the emulator from the Start menu, but starting it from the command line doesn't change anything
I am using the Microsoft.Azure.Cosmos NuGet package, version 3.22.1
my CPU is an i7-8565U, but it isn't even fully utilized while the test is running
my system has 16 GB RAM
my system runs on a fast enough SSD ("NVMe SK hynix BC501 H"), but while running the test the SSD usage is between 0 and 2%
the performance is the same if I increase the document size to 100 KB or even 1 MB.
Creating your CosmosClientOptions with the AllowBulkExecution = true setting can cause this.
the SDK will construct batches and group operations, when the batch is full, it will get dispatched, but if the batch doesn’t fill up, there is a timer that will dispatch it to make sure they complete. This timer currently is 100 milliseconds. So if the batch does not get filled up (for example, you are just sending 50 concurrent operations), then the overall latency might be affected.
Source: Introducing Bulk support in the .NET SDK
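If you want to confirm that the latency comes from the SDK's bulk dispatch timer rather than the emulator itself, one cross-check is to time plain point reads with no bulk mode involved. A hedged sketch using the JavaScript SDK, where point reads have no bulk dispatch timer (endpoint, key, and database/container names are placeholders):

```ts
import { CosmosClient } from "@azure/cosmos";

// Local emulator only: tolerate its self-signed certificate.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

const client = new CosmosClient({
  endpoint: "https://localhost:8081",
  key: "<emulator key>", // the emulator's well-known key
});

// Time 100 point reads of the same ~1 KB document.
async function benchmark(): Promise<void> {
  const container = client.database("testdb").container("items");
  const started = Date.now();
  for (let i = 0; i < 100; i++) {
    await container.item("item-1", "item-1").read();
  }
  console.log(`100 reads in ${Date.now() - started} ms`);
}

benchmark().catch(console.error);
```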
I have just added a new feature to an app I'm building. It uses the same working Cosmos/Table storage code that other features use to query and pump result segments from the Cosmos DB Emulator via the Tables API.
The emulator is running with:
/EnableTableEndpoint /PartitionCount=50
This is because I read that the emulator defaults to 5 unlimited containers and/or 25 limited, and since this is a Tables API app, the table containers are created as unlimited.
The table being queried is the 6th to be created and contains just 1 document.
It either takes around 30 seconds to run a simple query, tripping my Too Many Requests error handling/retry in the process, or it hangs seemingly forever with no results returned and the emulator has to be shut down.
My understanding is that with 50 partitions I can make 10 unlimited tables/collections, since each is "worth" 5. See the documentation.
I have tried with rate limiting on and off, and jacked the RU/s up to 10,000 on the table. It always fails to query this one table. The data, including the files on disk, has been cleared many times.
It seems like a bug in the emulator. Note that the "Sorry..." error that I would expect to see upon creating the 6th unlimited table, per the docs, is never encountered.
After switching to a real Cosmos DB instance on Azure, this is looking like a problem with my dodgy code.
Confirmed: my dodgy code.
Stand down everyone. As you were.
We are using Firebase Functions with a few different HTTP functions.
One of the functions runs via a manual trigger from our website. It then pulls in a lot of data from an external resource and saves it to our Firestore database. The function's resources are Node.js 10, 1 GB of memory, and a 540 s timeout.
However, when we have large datasets to pull in, e.g. 5,000-10,000 records to write to the database, we start running into issues. On large datasets we receive this error:
8 RESOURCE_EXHAUSTED: Bandwidth exhausted
The full error on Firebase Functions Health Dashboard logs looks like this:
Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
at Object.callErrorFromStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client.js:176:52)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:342:141)
at Object.onReceiveStatus (/workspace/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:305:181)
at Http2CallStream.outputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:117:74)
at Http2CallStream.maybeOutputStatus (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:156:22)
at Http2CallStream.endCall (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:142:18)
at ClientHttp2Stream.stream.on (/workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:420:22)
at ClientHttp2Stream.emit (events.js:198:13)
at ClientHttp2Stream.EventEmitter.emit (domain.js:466:23)
Our Firebase project is on the Blaze plan and is connected to an active billing account on GCP.
Upon inspection in GCP, it seems we are NOT exceeding our writes-per-minute quota, as previously thought; however, we are exceeding our Cloud Build limit. We are also using batched writes when saving data to Firestore from within the function, which reduces the number of separate write calls, e.g. something like the sketch below.
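(A minimal sketch of the batching involved; the collection name and record shape are illustrative, not our real code.)

```ts
import * as admin from "firebase-admin";

const db = admin.firestore();

// Write records in chunks of up to 500 operations,
// the maximum allowed per batched write.
async function saveRecords(records: Array<{ id: string }>): Promise<void> {
  for (let i = 0; i < records.length; i += 500) {
    const batch = db.batch();
    for (const record of records.slice(i, i + 500)) {
      batch.set(db.collection("records").doc(record.id), record);
    }
    await batch.commit();
  }
}
```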
We don't use Cloud Build directly, so I assume Firebase Functions uses Cloud Build behind the scenes to run the functions, but I can't find any documentation on the matter. We also have a few Firestore database triggers that run when documents are created. I'm not sure whether those use Cloud Build behind the scenes or not.
Any idea why this happens? Whenever it does, our function gets terminated with that error, which means we only import half of our data. The import works flawlessly with smaller amounts of data.
See our usage here for this particular project:
Cloud Build is used during the deployment of Cloud Functions. If you check this documentation you can see that:
Deployments work by uploading an archive containing your function's source code to a Google Cloud Storage bucket. Once the source code has been uploaded, Cloud Build automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions uses that image to create the container that executes your function.
This by itself is not enough to justify the charges you are seeing, but if you check the container image documentation it says:
Because the entire build process takes place within the context of your project, the project is subject to the pricing of the included resources:
For Cloud Build pricing, see the Pricing page. This process uses the default instance size of Cloud Build, as these instances are pre-warmed and are available more quickly. Cloud Build does provide a free tier: please review the pricing document for further details.
So with that information in mind, I would make an educated guess that your website is triggering the HTTP function enough times to make Cloud Functions scale this particular function up with new instances of it, which triggers a build process for the container that hosts the function and shows up as a Cloud Build charge. So to keep doing what you are doing, you are going to have to increase your Cloud Build quota to meet the demand from your website.
It turned out there was a Firestore trigger firing on new records of the same type I was importing.
So in short: I was creating thousands of records in a collection, and for every one of those the Firestore trigger (function) fired. What I did not know at the time is that this created a new build process in the background for each trigger that ran, which is not documented anywhere.
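For context, the trigger in question was of this general shape (names hypothetical):

```ts
import * as functions from "firebase-functions";

// One execution per document created in the imported collection --
// importing thousands of records fires this thousands of times.
export const onRecordCreated = functions.firestore
  .document("records/{recordId}")
  .onCreate(async (snapshot, context) => {
    // ...per-record post-processing...
  });
```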
I am using Firebase Hosting and Functions together to convert PDF to image.
For that I wrote a function which takes the file, converts the PDF, uploads it to Storage, generates a signed URL, and returns it to the user.
In Hosting I wrote a rewrite which redirects requests from /ecc to the ecc function.
The setup worked well. The website is up and running.
But when I asked a small group of 100 people to use it, Firebase Functions started throwing error 429, which is "too many requests".
Is there any limit on executing functions per 100 seconds?
This is the log:
Error: quota exceeded (Quota exceeded for quota group 'CPUMilliSecondsNonbillable' and limit 'CPU allocation in function invocations per 100 seconds' of service 'cloudfunctions.googleapis.com' for consumer 'project_number:992192007775'.); to increase quotas, enable billing in your project at https://console.cloud.google.com/billing?project=myProject. Function cannot be executed.
{"serviceContext":{"service":"ecc"},"context":{"reportLocation":{"functionName":"ecc","lineNumber":0,"filePath":"file"}}}
And the Firebase pricing page says that 125K invocations per month are free, but with only 40K GB-seconds and 40K CPU-seconds.
So if I have a function with an execution time of 10 seconds, can I only call 4,000 functions per month? And with that, a 16-calls-per-100-seconds limit? Where are the other 121K invocations?
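(Rough math, assuming the default 256 MB / 400 MHz allocation: a 10-second invocation uses 10 × 0.4 = 4 GHz-seconds of CPU and 10 × 0.25 = 2.5 GB-seconds of memory, so the 40K CPU-seconds allowance would cover about 40,000 / 4 = 10,000 such calls per month rather than 4,000. Either way, it's far below 125K.)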
Updated Information from Quotas
It's really head-banging trying to understand the limits here; this is from the main quota page.
FUNCTION EXECUTION TIME IS 10 SECONDS, AND I'LL CALCULATE THINGS BASED ON THAT.
1, 2. CPU allocation in function invocations per day: 150K. (The first two entries are the same for some reason.)
3. CPU allocation in function invocations per 100 seconds: 100K (0.1% current usage). That means per 100 seconds I can "invoke" 10k functions.
4. Function invocations per 100 seconds: 50. Okay, leave the 3rd point aside.
5. Read requests per day: 50. Maybe the same as the 4th?
6. Function invocations per day: 5,000. I was having a dream until the 4th point opened my eyes.
7. Write requests per day: 1,000. Function deployments, maybe?
8. Write requests per 100 seconds: 20. That's okay.
Now with all that in mind, I can say I can execute 50 functions per 100 seconds.
My error was QUOTA EXCEEDED for 'CPU allocation in function invocations per 100 seconds',
but as per the quota page, CPU allocation in function invocations per 100 seconds is 100K!?
How did it even exceed the limit?
As I said, my function execution time is about 5-10 seconds, and with the 50-invocations limit the maximum would be 500 seconds of execution per 100 seconds.
What is going on? Why is the quota being exceeded?
And what's the difference between reading and invoking a function?
From the error it seems that you have exceeded the quota for Firebase Functions CPU, and as you are using the free tier it's only 40K CPU-seconds per month.
Using Firebase Functions on the free (Spark) tier is very limited, and it only supports Node 8.
To resolve this you need to upgrade to the Blaze plan (pay as you go), which also has a free tier:
CPU --> free up to 200K GHz-seconds/month
Please read more about this here.