Firebase Cloud Functions generated page flagged with reduce server response time - firebase

I have a Firebase website project. Right now everything looks good except that the pages generated by Firebase Cloud Functions are flagged with "Reduce server response time" on PageSpeed Insights, which reports 0.5 - 1.5 seconds.
Static pages get a score of 99 on mobile and 86 on desktop. For the Cloud Functions generated pages, however, the highest I get on mobile is 75 and the lowest drops to as low as 30.
Here is some background on what is happening:
The requested page is identified to prepare the values needed for the Firestore queries.
4 Firestore queries are run (3 collections with <100 documents and 1 with <5000 documents, all of which are very small).
Data from the queries is organized and prepped for render.
The page renders with res.render().
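For reference, the four-query flow above can run the queries concurrently with Promise.all instead of one after another, which often shaves a noticeable amount off the server response time. A minimal sketch, where the four fetch functions are hypothetical stand-ins for the real Firestore collection queries:

```javascript
// Sketch: run the page's four Firestore queries concurrently.
// fetchSettings/fetchCategories/fetchTags/fetchPosts are hypothetical
// stand-ins for the real collection queries described above.
async function loadPageData(queries) {
  const [settings, categories, tags, posts] = await Promise.all([
    queries.fetchSettings(),
    queries.fetchCategories(),
    queries.fetchTags(),
    queries.fetchPosts(),
  ]);
  // Organize and prep the data for res.render()
  return { settings, categories, tags, posts };
}
```

With sequential awaits the response time is roughly the sum of the four query latencies; with Promise.all it is roughly the latency of the slowest one.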
Original Build:
Cloud Functions
Express
Cors
Handlebars (as the template engine)
This build would usually get me a score of 60-75. Server response time is still high, but it is consistently within the 0.5 - 1.5 second range.
Alternative Build:
For this build I use the same code for the queries; only the template-engine rendering step differs.
Cloud Functions
Express
Cors
PUG (as the template engine)
This build gives both the best and the worst scores. I don't get a middle score of 60-75 on it: the highest I got is a perfect 100 and the lowest is a very low 30. The bad part is that it is very inconsistent and I can't determine why.
Is it a Cold Start problem?
I have read about cold start so I tried experimenting if it was the problem.
First Attempt
I made a function that would ping the functions used by my Cloud Functions, started it up, and ran it in a loop every 10 seconds on my desktop. After a few minutes I checked PageSpeed to see if it had any effect: no improvement.
Second Attempt
I made a function that would fetch the page I am testing on PageSpeed and ran it in a loop on my desktop as well. I checked PageSpeed and it also had no effect.
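For anyone wanting to reproduce the keep-warm experiment, here is a dependency-free sketch of such a ping loop. The URL is a placeholder, and the fetch function is injectable so the loop can be exercised without network access:

```javascript
// Sketch of a keep-warm ping loop: pings once immediately, then on an
// interval. Returns the timer so the loop can be stopped with clearInterval.
function startKeepWarm(url, intervalMs, fetchFn = globalThis.fetch) {
  const ping = () => fetchFn(url).catch(() => { /* ignore failures */ });
  ping();
  return setInterval(ping, intervalMs);
}

// const timer = startKeepWarm('https://example.com/myFunction', 10_000);
// clearInterval(timer); // stop pinging
```

Note that keeping one instance warm this way does not prevent cold starts on any additional containers that get spun up under load.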
So should I cross out the cold start issue? Or am I handling it wrong? Or maybe it's a templating thing, since the two template engines give different results?
Anyway, this is what I have done so far, any idea on how to reduce the server response time of Firebase Cloud Functions generated pages?

Cold start might be your problem.
If you keep one Cloud Function warm, it will keep one container with that function warm, so that the next user can come along and use the function in its warm state.
However, if you have multiple users attempting to use the cloud function, it will allocate the one warm container to one connection and then a couple of cold containers to the other users. That's how they scale: by building out more containers.
I'm assuming what's happening is that your tests sometimes connect to the warmed container, and when it is in use the tests have to wait for a cold start on the other containers.
Read up on Cold Starts
A new function instance is started in two cases:
When you deploy your function.
When a new function instance is automatically created to scale up to the load, or occasionally to replace an existing instance.

Related

Using a single firebase http function for all request types?

Like many others, I'm struggling with cold starts for most of my cloud functions. I did all the optimization steps that the documentation recommended, but because almost all of my functions are using the admin SDK, and Firestore, a typical cold start takes around 5 seconds, meaning that the user has to wait at least 5 seconds for the data.
I'm thinking of replacing all my HTTP-trigger cloud functions with a single one, like /api/*, and configuring it with some number of minInstances.
What are the cons of this approach? What is the point of having multiple http functions in the first place?
TL;DR: Why shouldn't I use a single HTTP cloud function for my REST-like API to handle all the resources in order to eliminate cold starts?
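For concreteness, the single-function approach usually means putting one router in front of all the resources and exporting it as a single HTTPS function (typically an Express app). A dependency-free sketch of the routing idea, where the handlers map is a stand-in for real route handlers:

```javascript
// Sketch: one dispatcher serving every /api/* route, so only a single
// function has to be kept warm. In a real deployment an Express app
// exported with https.onRequest would play this role; `handlers` is a
// stand-in map of route handlers.
function makeApiDispatcher(handlers) {
  return function dispatch(method, path) {
    const handler = handlers[`${method} ${path}`];
    if (!handler) return { status: 404, body: 'Not Found' };
    return { status: 200, body: handler() };
  };
}
```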
Why shouldn't I use a single HTTP cloud function for my REST-like API to handle all the resources in order to eliminate cold starts?
Because this won't eliminate cold starts. In fact, you will now be loading a bigger function every time a new instance of your /api/* root function is created.
As in the example from this video, the point of having multiple functions is to instantiate them independently as needed, and to keep them isolated from each other.
To reduce cold starts, you could set
a minimum number of instances that Cloud Functions must keep ready to serve requests. Setting a minimum number of instances reduces cold starts of your application.
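In the Firebase SDK (firebase-functions v3.15+), the minimum is set per function with runWith. A configuration sketch, where expressApp is a stand-in for an existing Express app:

```javascript
const functions = require('firebase-functions');

// Keep one container warm for this HTTPS function. `expressApp` is a
// stand-in for an existing Express app handling the routes.
exports.api = functions
  .runWith({ minInstances: 1 })
  .https.onRequest(expressApp);
```

Note that minimum instances are billed for idle time, so this trades some cost for latency.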
See also:
Cloud Functions Cold Boot Time (Cloud Performance Atlas)

Firebase Cloud Functions min-instances setting seems to be ignored

Firebase announced in September 2021 that it is now possible to configure its cloud function autoscaling so that a certain number of instances will always be running (https://firebase.google.com/docs/functions/manage-functions#min-max-instances).
I have tried to set this up, but I can not get it to work:
First I set the minimum number of instances in the Google Cloud Console: Cloud Console screenshot
After doing this I expected that one instance of that cloud function would be running at all times. The metrics of that function indicate that its instances were still scaled down to 0: Cloud Functions "Active Instances" metric
So it looks to me as if my setting is being ignored here. Am I missing anything? Google Cloud Console shows me that the minimum number of instances has been set to 1, so it seems to know about the setting but ignore it. Is this feature only available in certain regions?
I have also tried to set the minimum number of instances using the Firebase SDK for Cloud Functions (https://www.npmjs.com/package/firebase-functions). This gave me the same result; my setting is still ignored.
According to the documentation, the Active Instances metric shows the number of instances that are currently handling requests.
As stated in the Documentation :
Cloud Functions scales by creating new instances of your function. Each of these instances can handle only one request at a time, so large spikes in request volume often cause longer wait times as new instances are created to handle the demand.
Because functions are stateless, your function sometimes initializes the execution environment from scratch, which is called a cold start. Cold starts can take significant amounts of time to complete, so we recommend setting a minimum number of Cloud Functions instances if your application is latency-sensitive.
You can also refer to the Stackoverflow thread where it has been mentioned that
Setting minInstances does not mean that there will always be that many Active Instances. Minimum instances are kept running idle (without CPU allocated), so they are not counted as Active Instances.

Firestore emulator performance vs actual

I have set up the Firebase Local Emulator Suite and created a project with Cloud Functions and Firestore. I also exported my production data, which I import into the emulator. (The collection in question is about 5000 documents ranging from 5 KB to 200 KB in size.)
My goal was to benchmark query performance, so I wrote a query and ran it a number of times to get an average execution time of 130 ms. I then wrote a different query and got an average execution time of 20 ms. I did not import any indexes (the Admin SDK doesn't seem to require them when querying the emulator like it does when querying production).
I also observed the first query always takes significantly longer.
My question is basically: how does this difference in execution time translate to the production environment, if at all? Assuming the same queries are run against the same data, and ignoring network latency to/from the client, will the second query run about ~110 ms faster? Or will the difference be less/more?
Also why does the first query take longer, and is there any way to use that fact to improve performance in real world usage?
how does this difference in execution time translate to the production environment, if at all?
The observed performance of the emulator has little to nothing to do with the performance of the actual cloud hosted product. It's not the same code, and it's not running on the same set of computing resources.
Firestore is massively scalable and shards your data across many computing resources, all of which work together to service a query and ensure that it performs at any scale. As you can imagine, an emulator running on your one local machine is nowhere near that. They are simply not comparable.
The emulator is meant to ease local development without requiring the use of paid cloud resources to get your job done. It's not meant for any other purpose.

Firebase Functions, Cold Starts and Slow Response

I am developing a new React web site using Firebase hosting and firebase functions.
I am using a MySQL database (SQL required for heavy data reporting) in GCP Cloud Sql and GCP Secret Manager to house the database username/password.
The Firebase functions are used to pull data from the database and send the results back to the React app.
In my local emulator everything works correctly and it's responsive.
When it's deployed to Firebase, I'm noticing that the 1st and sometimes the 2nd request to a function takes about 6 seconds to respond. After that, they respond in less than 1 second. For the slow responses I can see in the logs that the database pool is being initialized.
So the slow responses are the first hit to the instance. I'm assuming in my case two instances are being created.
Note that the functions that do not require a database respond quickly regardless of it being the 1st or 2nd call.
After about 15 minutes of not using a service I have the same problem. I'm assuming the instances are being reclaimed and a new instance is created.
The problem is that each function has its own independent db pool, so each function will initially give a slow response (maybe twice, counting the second call).
The site will see low traffic, meaning most users will experience this slow response.
By removing the reference to Secret Manager and hard-coding the username/password, the response time dropped to less than 3 seconds. But this is still not acceptable.
Is there a way to:
Increase the time before a function is reclaimed when not used?
Tag an instance so that it is not reclaimed?
Is there a way to create a global db pool that does not get shut down between recycles?
Is there an approach to working with db connections in Firebase Functions that avoids re-initializing the db pool?
Is this the nature of functions, and am I limited to this behavior?
Since I am in early development, would moving to App Engine/Node.js (the flexible environment) resolve the recycling issues?
First of all, the issues you have been experiencing, with the 1st and 2nd requests taking the longest time, are called cold starts.
This makes sense because new instances are being spun up. You may hit a cold start when:
Your function has been deployed but not yet triggered.
Your function has been idle (not processing requests) long enough that it has been recycled for resources.
Your function is auto-scaling to handle capacity and creating new instances.
I understand that your five questions are intended to work around the issue of Cloud Functions recycling the instances.
The straight answer to questions 1 through 4 is no, because Cloud Functions implements the serverless paradigm.
This means that one function invocation should not rely on in-memory state (such as a database pool) set by a previous invocation.
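That said, objects stored in global scope are reused by invocations that happen to land on the same warm instance, as long as the code does not depend on them surviving. A common pattern that works within this constraint is lazy global initialization. A sketch, where createPool is a stand-in for the real, expensive setup call (e.g. mysql.createPool plus the Secret Manager lookup):

```javascript
// Lazy global initialization: the pool is created on the first invocation
// an instance serves, and reused by later invocations on that instance.
// `createPool` is a stand-in for the real, expensive setup call.
function makeLazyPool(createPool) {
  let poolPromise = null;
  return function getPool() {
    if (!poolPromise) poolPromise = Promise.resolve(createPool());
    return poolPromise;
  };
}

// Usage inside a function handler (sketch):
// const getPool = makeLazyPool(() => mysql.createPool(config));
// exports.api = async (req, res) => {
//   const pool = await getPool(); // cold: creates; warm: reuses
//   ...
// };
```

This doesn't prevent cold starts, but it ensures the pool (and secret lookup) is paid for at most once per instance rather than once per request.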
Now this does not mean that you cannot improve the cold start boot time.
Generally, the number one contributor to cold start boot time is the number of dependencies.
This video from the Google Cloud Tech channel exactly goes over the issue you have been experiencing and describes in more detail the practices implemented to tune up Cloud Functions.
If, after going through the best practices from the video, your cold start still shows unacceptable values, then, as you have already suggested, you would need to use a product that lets you keep a minimum set of instances spun up, such as App Engine standard.
You can further improve the readiness of App Engine standard instances by implementing warmup requests.
Warmup requests load your app's code into a new instance before any live requests reach that instance. The linked document also discusses loading requests, which are similar to cold starts: the time spent loading your app's code onto a newly created instance.
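On App Engine standard, warmup requests are enabled via `inbound_services: [warmup]` in app.yaml, after which the platform sends GET /_ah/warmup to each new instance. A minimal, framework-agnostic sketch of the handler, where initOnce is a stand-in for the real expensive setup (opening a db pool, fetching secrets, etc.):

```javascript
// Sketch of an App Engine warmup handler: run the expensive setup before
// any live request reaches the new instance, then return 200.
// `initOnce` is a stand-in for the real initialization work.
function makeWarmupHandler(initOnce) {
  return async function handleWarmup(req, res) {
    await initOnce();
    res.status(200).send('warmed');
  };
}

// With Express this would typically be wired up as:
// app.get('/_ah/warmup', makeWarmupHandler(initOnce));
```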
I hope you find this useful.

Slow Transactions - WebTransaction taking the hit. What does this mean?

Trying to work out, using New Relic, why some of my application servers have crept up to over 1 s response times. We're using WebApi 2.0 and MVC 5.
As you can see below, the bulk of the time is spent under 'WebTransaction'. The throughput figures aren't particularly high. What could be causing this, and what steps can I take to reduce it?
Thanks
EDIT I added transactional tracing to this function to get some further analysis - see below:
Over 1 second waiting in System.Web.HttpApplication.BeginRequest().
Any insight into this would be appreciated.
Ok - I have now solved the issue.
Cause
One of my logging handlers, which syncs its data to cloud storage, was initializing every time it was instantiated, which also involved a call to Azure Table Storage. As it was injected into the controller in question, every call to the API triggered this initialization.
It was a blocking call, so it added ~1 s to every call. Once I configured this initialization to happen once per server lifecycle, the problem went away.
Observations
As the blocking call was made at the time the controller was built (due to Unity resolving the dependencies at that point), New Relic reports this as
System.Web.HttpApplication.BeginRequest()
Although I would love to see this at a more granular level, as we can see from the transactional trace above it was in fact the 7 calls to Table Storage (still not quite sure why it was 7) that led me down this path.
Nice tool; my New Relic subscription is starting to pay for itself.
It appears that the bulk of time is being spent in Account.NewSession. But it is difficult to say without drilling down into your data. If you need some more insight into a block of code, you may want to consider adding Custom Instrumentation
If you would like us to investigate this in more depth, please reach out to us at support.newrelic.com, where we will have your account information on hand.

Resources