I am using Firebase Hosting and Functions together to convert a PDF to an image.
For that I wrote a function which takes the file, converts the PDF, uploads it to Storage, generates a signed URL, and returns it to the user.
In Hosting I wrote a rewrite which redirects requests from /ecc to the ecc function.
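For reference, the Hosting rewrite in question looks roughly like this in firebase.json (a sketch; the path and function name are taken from the description above):

```json
{
  "hosting": {
    "rewrites": [
      { "source": "/ecc", "function": "ecc" }
    ]
  }
}
```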
The setup worked well and the website is up and running.
But when I asked a small group of 100 people to use it, Firebase Functions started throwing error 429, which is Too Many Requests.
Is there some limit on executing functions per 100 seconds?
This is the log:
Error: quota exceeded (Quota exceeded for quota group 'CPUMilliSecondsNonbillable' and limit 'CPU allocation in function invocations per 100 seconds' of service 'cloudfunctions.googleapis.com' for consumer 'project_number:992192007775'.); to increase quotas, enable billing in your project at https://console.cloud.google.com/billing?project=myProject. Function cannot be executed.
{"serviceContext":{"service":"ecc"},"context":{"reportLocation":{"functionName":"ecc","lineNumber":0,"filePath":"file"}}}
And the Firebase pricing page says that 125K invocations per month are free, but with 40K GB-seconds and CPU-seconds.
So if I have a function with an execution time of 10 seconds, can I only call 4,000 functions per month? And, with the 16-calls-per-100-seconds limit that implies, where are the other 121K invocations?
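To make the arithmetic concrete, here is the back-of-the-envelope version (the 40K and 125K figures are as quoted from the pricing page; treat them as assumptions):

```javascript
// Spark-tier numbers as quoted above (assumptions, not authoritative):
const freeInvocationsPerMonth = 125000; // free invocations per month
const freeCpuSecondsPerMonth = 40000;   // free CPU-seconds per month
const executionSeconds = 10;            // this function's execution time

// The CPU-seconds allowance runs out long before the invocation allowance:
const cpuBoundInvocations = freeCpuSecondsPerMonth / executionSeconds;
console.log(cpuBoundInvocations);                           // 4000
console.log(freeInvocationsPerMonth - cpuBoundInvocations); // 121000
```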
Updated Information from Quotas
It's really head-banging trying to understand the limits here. This is the main quota page.
My function execution time is about 10 seconds, and I'll calculate things based on that.
1, 2. CPU allocation in function invocations per day: 150K. The first two entries are the same for some reason.
3. CPU allocation in function invocations per 100 seconds: 100K (0.1% current usage). That means per 100 seconds I can "invoke" 10K functions.
4. Function invocations per 100 seconds: 50. Okay, leave the 3rd point aside.
5. Read requests per day: 50. Maybe the same as the 4th?
6. Function invocations per day: 5,000. I was having a dream until the 4th point opened my eyes.
7. Write requests per day: 1,000. Function deployments, maybe?
8. Write requests per 100 seconds: 20. That's okay.
Now, with all that in mind, I can say I can execute 50 functions per 100 seconds.
My error was QUOTA EXCEEDED for 'CPU allocation in function invocations per 100 seconds',
but as per the quota page, CPU allocation in function invocations per 100 seconds is 100K!
How did it even exceed the limit?
As I said, my function execution time is about 5-10 seconds, and with the invocation limit the maximum would be 500 CPU-seconds per 100 seconds.
What is going on? Why is the quota being exceeded?
And what's the difference between reading and invoking a function?
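One possible reading: the quota group in the error is literally named 'CPUMilliSecondsNonbillable', so if the 100K limit is counted in CPU-milliseconds rather than invocations, the numbers above do overflow it. A sketch of that interpretation (the unit is my assumption, not something the quota page states):

```javascript
// Quoted limits; the unit of the 100K figure is assumed to be
// CPU-milliseconds, going by the quota group name in the error
// ('CPUMilliSecondsNonbillable') -- my reading, not documented fact.
const cpuLimitPer100s = 100000;  // assumed: CPU-ms allowed per 100 seconds
const invocationsPer100s = 50;   // quoted invocation limit
const execSeconds = 10;          // worst-case execution time

const cpuMsUsed = invocationsPer100s * execSeconds * 1000;
console.log(cpuMsUsed);                   // 500000
console.log(cpuMsUsed > cpuLimitPer100s); // true -- the limit is exceeded
```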
From the error it seems that you have exceeded the Firebase Functions CPU quota for the month; on the free tier it's only 40K CPU-seconds per month.
Firebase Functions on the free (Spark) tier is very limited, and it only supports Node 8.
To resolve this you need to upgrade to the Blaze plan (pay as you go), which also has a free tier:
CPU --> free up to 200K CPU-seconds/month
Please read more about this here.
Related
We're trying to fetch 86K of our Firebase users. Locally and in Firebase Functions it takes 2 minutes for all of them, but in Cloud Run it takes on average 20 seconds per call (you can only request 1K users per call according to the Firebase docs).
Interestingly, fetching all Firebase Realtime Database users takes 15s, but in Cloud Run it took 365s.
2022-06-17T00:03:04.986000061Z grabbed users data from db, total: 86442 in 364.015s
2022-06-17T00:03:05.732000112Z Progress 1000 0.746s
2022-06-17T00:03:15.131999969Z Progress 2000 9.847s
2022-06-17T00:03:39.332999944Z Progress 3000 34.347s
2022-06-17T00:04:03.832999944Z Progress 4000 58.846s
2022-06-17T00:04:28.433000087Z Progress 5000 83.447s
2022-06-17T00:04:51.733000040Z Progress 6000 106.747s
2022-06-17T00:05:58.332000017Z Progress 7000 172.947s
Any thoughts on how to solve this? There are no special network settings in place on Cloud Run.
Background Info:
The Cloud Run instance is using Node.js 14 with 2 GB of memory, which stays at 8% usage. CPU usage stays around 10%. The user object is relatively small, but across all these users it's about 60-70 MB. In Firebase Functions, only 256 MB of memory is needed to do the fetching.
PS: I've yet to test whether region makes a difference, as Cloud Run is in us-east1 and the functions are in us-central1. Will be testing soon.
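For what it's worth, the 1K-per-call pagination loop itself can be written so it is independent of where it runs. A minimal sketch (`listPage` here is a stand-in for the Admin SDK's `admin.auth().listUsers(maxResults, pageToken)` so the loop can run without credentials):

```javascript
// Paginate through all users 1000 at a time, mirroring the shape of the
// Admin SDK's auth().listUsers(maxResults, pageToken), which resolves to
// { users, pageToken }. `listPage` is a stand-in for admin.auth().listUsers.
async function fetchAllUsers(listPage, pageSize = 1000) {
  const all = [];
  let pageToken; // undefined requests the first page
  do {
    const page = await listPage(pageSize, pageToken);
    all.push(...page.users);
    pageToken = page.pageToken; // undefined once the last page is reached
  } while (pageToken);
  return all;
}
```

If region turns out to matter, the loop is unchanged; only the latency of each `listPage` call differs.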
Is there any pricing information for deploying Cloud Functions with minInstances greater than 0? Also, when I deployed the function with runWith({ minInstances: 1 }), the edit-function page at console.cloud.google.com does not reflect the change.
You can find pricing info on the Firebase Pricing Page. Note this line in particular to figure out the cost of minimum instances:
"kept idle" refers to the idle billable time of instances kept warm using minimum instances.
You can also get a quote when you deploy:
Change your function to use minInstances:
export const example = functions.https.onCall( ...
becomes
export const example = functions.runWith({minInstances: 1}).https.onCall( ...
Then deploy via firebase deploy --only functions
The command line should prompt you with a quote and confirmation input such as:
functions: The following functions have reserved minimum instances. This will reduce the frequency of cold starts but increases the minimum cost. You will be charged for the memory allocation and a fraction of the CPU allocation of instances while they are idle.
With these options, your minimum bill will be $153.75 in a 30-day month
? Would you like to proceed with deployment? (y/N)
Also a tip: I use cloud scheduler to keep my functions warm as it's a small fraction of the cost and works just as well for my use case.
Error: quota exceeded (Quota exceeded for quota group 'CPUMilliSecondsDailyNonbillable' and limit 'CPU allocation in function invocations per day' of service 'cloudfunctions.googleapis.com' for consumer 'project_number:221474907579'., Quota exceeded for quota group 'CPUMilliSecondsNonbillable' and limit 'CPU allocation in function invocations per day' of service 'cloudfunctions.googleapis.com' for consumer 'project_number:221474907579'.); to increase quotas, enable billing in your project at https://console.cloud.google.com/billing?project=samsungmap-xyz. Function cannot be executed.
My project still runs fine if I use firebase serve --only functions,
but on deployment it shows the above error.
It mentions a "per day" limit somewhere.
Does that mean I can deploy after 24 hours and the quota will reset, or something?
You should be able to deploy whenever you want. The functions just won't run until the daily quota resets.
In short, we are sometimes seeing that a small number of Cloud Bigtable queries fail repeatedly (tens or even hundreds of times in a row) with the error rpc error: code = 13 desc = "server closed the stream without sending trailers" until (usually) the query finally works.
In detail, our setup is as follows:
We are running a collection (< 10) of Go services on Google Compute Engine. Each service leases tasks from a pair of PULL task queues. Each task contains an ID of a bigtable row. The task handler executes the following query:
row, err := tbl.ReadRow(ctx, <my-row-id>,
bigtable.RowFilter(bigtable.ChainFilters(
bigtable.FamilyFilter(<my-column-family>),
bigtable.LatestNFilter(1))))
If the query fails then the task handler simply returns. Since we lease tasks with a lease time of between 10 and 15 minutes, a little while later the lease on that task will expire, it will be leased again, and we'll retry. The tasks have a max retry count of 1000, so they can be retried many times over a long period. In a small number of cases, a particular task will fail with the gRPC error above. The task will typically fail with this same error every time it runs, for hours or days on end, before (seemingly out of the blue) eventually succeeding (or the task runs out of retries and dies).
Since this often takes so long, it seems unrelated to server load. For example right now on a Sunday morning, these servers are very lightly loaded, and yet I see plenty of these errors when I tail the logs. From this answer, I had originally thought that this might be due to trying to query for a large amount of data, perhaps near the max limit that cloud bigtable will support. However I now see that this is not the case; I can find many examples where tasks that have failed many times finally succeed and report only a small amount of data (e.g. <1 MB) was retrieved.
What else should I be looking at here?
edit: From further testing I now know that this is completely machine (client) independent. If I tail the log on one of the task leasing machines, wait for a "server closed the stream without sending trailers" error, and then try a one-off ReadRow query to the same rowId from another, unrelated, totally unused machine, I get the same error repeatedly.
This error is typically caused by having more than 256MB of data in your reply.
However, there is currently a bug in our server side error handling code that allows some invalid characters in HTTP/2 trailers which is not allowed by the spec. This means that some error messages that have invalid characters will be seen as this kind of error. This should be fixed early next year.
I have created the quota below, which should allow the API to be consumed 6 times per hour. It uses the Verify API Key authentication type.
The URL is http://damuorgn-test.apigee.net/weatherforecastforlongandlat?apikey=dJAXoH8y6GfVNJSjlDhpVIB4XCVyJZ1R
But the Quota exception occurs only after the 8th call (it should actually occur on the 7th). Also, when I try to change the quota limit and re-deploy the API proxies, I see a Quota exception on the very first call. Please advise.
I am using a free organization.
Quota 1:
<Quota name="Quota-1">
    <Interval>1</Interval>
    <Distributed>false</Distributed>
    <Synchronous>false</Synchronous>
    <TimeUnit>hour</TimeUnit>
    <StartTime>2014-6-11 19:00:00</StartTime>
    <AsynchronousConfiguration>
        <SyncIntervalInSeconds>20</SyncIntervalInSeconds>
        <SyncMessageCount>5</SyncMessageCount>
    </AsynchronousConfiguration>
</Quota>
Okay, two things...
1) Your Quota is set to <Distributed>false</Distributed>.
By default your Apigee instance runs on two separate Message Processors (the servers that do the heavy lifting). This means that each MP will count separately, so with a quota of 6 you effectively have 6 * 2 servers = 12.
2) Your Quota is Distributed but Asynchronous in the second example.
When <Distributed> is true, Apigee shares quota counts by checking in with the central data server, and when <Synchronous> is false there is always some lag in that sharing. You have set your AsynchronousConfiguration to check in with the central server every 20 seconds or every 5 messages, so each MP could count up to 5 before checking in with the other servers.
Keep in mind that in a distributed processing model like Apigee you will never get an absolutely precise count, because even with <Distributed> and <Synchronous> both set to true there will always be some lag while the servers talk to each other.
Also, you might want to strip out the request.header.quota_count and other request.header variables -- if I passed a number (say 100000) as a header like
quota_count: 100000
Apigee would use the 100000 rather than your value of 1 (it uses the referenced variable and falls back to the default value only if the reference is NULL).
And... you probably want to add an <Identifier ref="client_ip"/> or something, otherwise the quota is global to all users. See the Apigee variables reference at http://apigee.com/docs/api-services/api/variables-reference for the variables that are available in every flow.
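Putting those points together, a quota policy with distributed, synchronous counting and a per-client identifier might look like the sketch below (tag names are from the standard Apigee Quota policy; the client_ip identifier is just one option):

```xml
<Quota name="Quota-1">
    <!-- Count per client rather than globally -->
    <Identifier ref="client_ip"/>
    <!-- 6 calls per 1 hour -->
    <Allow count="6"/>
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
    <!-- Share the count across Message Processors, synchronously -->
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
</Quota>
```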