After reading the docs on both Cloud Run and Firebase Cloud Functions, I have a few questions I want to clear up:
Does Cloud Run basically act as a container image storage/deploying mechanism? If I have 2 websites and have them as separate containerized images, does Cloud Run just deploy the specified one given the trigger?
Integrating Cloud Run with FCF as the trigger, will there be an additional layer of latency? Exact latency times are never knowable in advance, but FCFs inherently have warm-up times due to cold starts; will there be added latency on top of that from Cloud Run cold-starting its images?
Do the Cloud Run responses travel through the FCF to get to the user, or does the FCF merely redirect the user directly to the Cloud Run service?
Basically, is it like
Client -> FCF -> Image -> FCF -> Client
or
Client -> FCF -> Image -> Client
Typically on Stack Overflow one is supposed to limit to one question per post (to avoid being closed as "too broad"), but I'll try here. Please address followup questions as new posts.
Yes, it is just a containerized way of serving HTTP requests.
Cloud Run is not directly related to Cloud Functions except where you write code to connect them. If you write a Cloud Functions trigger that proxies to Cloud Run, it will incur the latency costs of both products as applicable (not every invocation requires a cold start). All "serverless" compute options have a cold start time, since they all scale down to zero based on current load; in exchange, you don't pay to keep virtual server instances allocated and immediately available all the time.
Again, they are not related. You can use either one without the other.
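In fact, if the aim is just to serve two containerized sites, Firebase Hosting can route requests straight to a Cloud Run service without any Cloud Function in between, so the response does not travel through a function at all. A minimal firebase.json sketch, where the service names and region are hypothetical:

```json
{
  "hosting": {
    "rewrites": [
      { "source": "/site-a/**", "run": { "serviceId": "site-a", "region": "us-central1" } },
      { "source": "/site-b/**", "run": { "serviceId": "site-b", "region": "us-central1" } }
    ]
  }
}
```

With a setup like this, the request path is simply Client -> Hosting CDN -> Cloud Run -> Client.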
I am currently learning Firebase Cloud Functions and wanted to understand whether the Express app is always running. My assumption is that it must always be running on the Firebase platform; otherwise, how would it handle a request?
Your Express app on Cloud Functions runs in a container, and the default behavior is to spin down that container after there have not been any requests for a while (the exact period is undocumented). You only pay for the time a container is actively processing requests; without spinning idle containers down, Cloud Functions would end up hosting lots of containers that nobody is paying for.
When a new request comes in when there is no container instance, or while all the existing containers are busy, Cloud Functions spins up a new container to process that request, which it then will also shut down when it's been inactive for a while.
Since spinning up the container takes some time, you can configure Cloud Functions to keep a certain minimum number of containers active. But I'd usually recommend only looking at that when you're actively experiencing performance problems due to the startup time of new containers, as you'll pay (a lower amount) for each container that is kept active while not processing a request.
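For reference, recent versions of the Firebase SDK for Cloud Functions expose this as a deploy-time option; a configuration sketch (the function name and the value 1 are just examples):

```javascript
const functions = require("firebase-functions");

// Keep at least one container warm for this function.
// Note: you pay a (reduced) rate for the idle instance.
exports.api = functions
  .runWith({ minInstances: 1 })
  .https.onRequest((req, res) => {
    res.send("ok");
  });
```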
I have my app hosted on Firebase and use a Cloud Function to fetch data from a third-party API. This Cloud Function is HTTP-triggered and runs whenever the client side requests data. I want to reduce Cloud Functions calls (the project is currently on the Blaze plan), so I'm thinking of adding caching to my app.
1. What caching strategies can I use on the client (web browser) as well as server-side (Node.js)?
2. Overall I want to reduce Cloud Functions calls to minimize costs, so that the Cloud Function doesn't need to be called on every client request; instead, the client can fetch data from a cache.
3. Should I use the Firebase Realtime Database to store the data from the third-party API and refresh it periodically so the data stays up to date? The data on the third-party side doesn't change frequently.
4. Would fetching data from the Realtime Database (as mentioned in point 3 above) instead of calling the Cloud Function eventually reduce costs?
If you host your Cloud Function behind Firebase Hosting, you can write a cache header in every response and manage the cache behavior of Firebase Hosting that way. Allowing the content to be served from Firebase Hosting's CDN may significantly reduce the number of times your Function gets executed, and thus its cost.
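As a sketch of what that could look like inside the function (the header values are just examples; behind Firebase Hosting, s-maxage controls how long the CDN may serve the cached response, while max-age controls the browser cache):

```javascript
// Helper that stamps CDN-friendly cache headers on an Express-style response.
function setCacheHeaders(res, { browserSeconds = 300, cdnSeconds = 600 } = {}) {
  res.set("Cache-Control", `public, max-age=${browserSeconds}, s-maxage=${cdnSeconds}`);
  return res;
}

// Minimal stand-in for an Express response object, for illustration only.
const res = {
  headers: {},
  set(name, value) { this.headers[name] = value; },
};

setCacheHeaders(res);
console.log(res.headers["Cache-Control"]); // public, max-age=300, s-maxage=600
```

While the cached copy is fresh on the CDN, repeat requests never reach the function at all, which is what reduces invocation cost.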
I was deploying a very small Cloud Function for testing. There were errors multiple times because I was making egress network requests, which are not supported on the Spark plan, along with some other code-related errors.
I upgraded to the Blaze plan and was able to deploy the function successfully. Now my bandwidth usage under Cloud Storage has suddenly shot up to 412 MB (41% of the 1 GB free tier). Going through other similar questions (will Cloud Function affect Firebase Storage bandwidth usage?), I expected it to increase, but is such a high increase expected, and how do I reduce it?
I also found here that Cloud Storage usage increases for some people: Cloud Storage increasing on deploying Cloud Function. In my case the bytes stored are still 0; can someone explain this to me?
What I am doing is a payment gateway integration in my app, and they have asked for some code to be run server-side (currently I am doing it in the app code itself for testing). I am moving this code to Cloud Functions. I am just worried: if there was this much of an increase for this one small function, how much will I end up using by the time this goes live?
The storage bandwidth usage is just from the deployment process of the function. The reason you are seeing 0 bytes stored is that the bandwidth is tied to deployment; as long as the function itself does not use the Storage API, your stored bytes will not increase.
You can keep an eye on spending for any GCP product by creating a budget in GCP and setting an alert on it (note that a budget alerts you; it does not hard-cap usage by itself). Please refer to this link: in this way you can set up a budget and monitor it as well.
For Cloud Functions specifically, you can use the above link to set a budget, or you can refer to this link to estimate the cost of your Cloud Functions.
For Firebase, please have a look here at the Firebase pricing page. Hope this works for you!
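As an illustration, a budget with an alert threshold can also be created from the command line (the billing account ID, amount, and display name below are placeholders):

```shell
# Sketch: create a $25/month budget that alerts at 90% of spend.
# Budgets notify you; they do not stop usage on their own.
gcloud billing budgets create \
  --billing-account=000000-000000-000000 \
  --display-name="firebase-project-budget" \
  --budget-amount=25USD \
  --threshold-rule=percent=0.9
```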
I am trying to figure out why my Firebase Storage usage is far above my expectation.
I only have a few photo files in my Firebase Storage (around 75 photos, about 100 KB each), but my bytes stored and object count are way above my expectation, as you can see in the image above. In this case, maybe I found the answer in the documentation here:
When you deploy your function's source code to Cloud Functions, that source is stored in a Cloud Storage bucket. Cloud Build then automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions accesses this image when it needs to run the container to execute your function. The process of building the image is entirely automatic and requires no direct input from you.
It is probably because I create a lot of Cloud Functions; that's why the bytes stored and object count in my Firebase Storage are big.
Now I need to know why storage bandwidth is up to 20.2 GB in a month. I am still developing my app, and the only user is me. I don't think I would hit 20.2 GB in a month, because my Android app uses a cache when showing the images.
I suspect that the reason my storage bandwidth usage is so high is Cloud Functions. In August I performed a lot of firebase deploy runs for my Cloud Functions. Will Cloud Functions affect Firebase Storage bandwidth usage?
I am in Indonesia; my Firebase Storage and Firestore are located in asia-southeast2, but my Cloud Function is located in asia-east2. My Cloud Function performs some operations on my Firestore data and on images in Storage, but I still don't think that would hit 20.2 GB per month.
As you can see from the image above, the bandwidth usage is separated into 3 different parts:
asia.artifacts.projectID.appspot.com
gcf-sources-5900904-asia-east2
projectID.appspot.com
The asia.artifacts.projectID.appspot.com bucket is way above the others, at up to 4.3 GB.
That is some information about my problem. So I need to know: will Cloud Function deployment/operation affect my Firebase Storage bandwidth usage?
I need to understand why this is happening, because I am worried that I will face unexpected costs once a lot of users use my app.
I was surprised by the bandwidth usage too.
First, Firebase changed its pricing policy:
After August 17, 2020, each deployment operation will incur small-scale charges for the storage space used for the function's container. For example, if your functions consume 1GB of storage via Container Registry, you'll be billed $0.026 per month. If your development process depends on deploying functions for testing, you can further minimize costs by using the Firebase Local Emulator Suite during development.
So the most reasonable explanation is that the Cloud Function build generates bandwidth usage when npm install downloads node_modules, and the resulting artifacts also take up storage.
I would suggest using the local emulator during development as much as possible, but in many cases testing in production is inevitable, so sad.
The part of the documentation you quoted that's relevant here is:
Cloud Functions accesses this image when it needs to run the container to execute your function.
The built images are being accessed by Cloud Functions in order to run your code that's built into the image. Although the documentation doesn't specify, I would guess that every cold start of a function requires a download of the image from the artifacts bucket. This would explain the usage on that bucket.
I am currently working on a project that syncs third-party data. I decided to implement a queue to ensure each message is received by the worker, but the worker needs to do only one task at a time, which means the previous task should be finished and acknowledged before the next one executes.
So the question is: how can I configure the Firebase Pub/Sub trigger to execute tasks one by one?
If I have misunderstood the concept of Google Pub/Sub, feel free to tell me :)
What you're trying to do isn't really compatible with the way pubsub functions work. There are other Google Cloud products that can help you limit the rate of invocation of some code, but it won't be implemented with a pubsub type function.
GCP offers a product called Cloud Tasks, which lets you set up a queue that can be configured with a rate limit. The queue can dispatch a task invocation to App Engine, or to an HTTP type Cloud Function (beta). So, if you've deployed an HTTP function, you could use its endpoint URL to configure a Cloud Tasks queue that invokes that function only once at a time.
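As a sketch, such a queue could be created with the gcloud CLI (the queue name is made up; the rate limits are what enforce one-at-a-time dispatch):

```shell
# Queue that dispatches at most one task at a time, at most one per second.
gcloud tasks queues create sync-worker-queue \
  --max-concurrent-dispatches=1 \
  --max-dispatches-per-second=1
```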
You can control the number of instances of a Cloud Function with the --max-instances flag of the gcloud command-line tool. See the documentation on Using max instances for full details.
The Firebase SDK for Cloud Functions doesn't wrap this option (yet). But you can deploy the function using the Firebase CLI, and then change the setting in the Cloud console, by following the steps shown in Clearing max instances limits.
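Alternatively, the limit can be applied when (re)deploying through gcloud directly; a sketch with a hypothetical function name and region:

```shell
# Cap this function at a single instance so invocations are serialized.
gcloud functions deploy syncWorker \
  --region=us-central1 \
  --max-instances=1
```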