Specifying RAM in a Firebase Cloud Function

I'm trying to increase the memory of an existing cloud function using the GCP Functions console. I've been able to do this before via Edit Function / Variables, Networking and Advanced Settings, but now, after specifying my preferred memory limit, I'm taken to a secondary screen asking me to upload my code. This seems redundant, as I'd just like to redeploy with more RAM.

Memory, CPU, region, runtime, and so on are all options you choose at the time of deployment. They are not dynamically configurable settings that you can simply tweak over time without redeploying. You're being asked to upload your code because you need to go through the deployment process again; the system doesn't assume that you want to use exactly what you had previously.
Since you tagged this with "firebase", I'll assume that you're using the Firebase CLI for deployment. In that case, you will have to change the function builders in your code to use the new settings you want to apply on the next deployment.
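With the Node.js SDK, those per-function settings live on the function builder, via runWith(). A minimal sketch, assuming the v1 firebase-functions API; the function name and trigger here are placeholders:

    // functions/index.js
    const functions = require("firebase-functions");

    // Placeholder name and trigger; the point is the runWith() options,
    // which only take effect on the next deployment.
    exports.resizeImage = functions
      .runWith({ memory: "1GB", timeoutSeconds: 300 })
      .https.onRequest((req, res) => {
        res.send("ok");
      });

Redeploying with firebase deploy --only functions:resizeImage then applies the new memory setting.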

Related

Firebase Cloud Functions -- package all in a single VS Code project, or create multiple VS Code projects?

I am new to cloud functions and a little unclear about the way they are "containerized" after they are written and deployed to my project.
I have two quite different sets of functions. One set deals with image storage and Firebase, the other deals with some time-consuming computations. The two sets (let's call them A and B) use different Node modules and have no dependencies on each other, except that they both use Firestore.
My question is whether it matters if I put all the functions in a single VS Code project, or if I should split them up into separate projects. One part of the question is about deployment: it seems like you deploy all the functions in the project when you run firebase deploy, even if some of them haven't changed. Probably more important, though, is whether functions which don't need sharp or other image-manipulation packages are "containerized" together with functions which need stats and math-related packages, and whether it makes any difference how they are organized into projects.
I realize this question is high level and not about specific code, but it's not clear to me from the various resources what the appropriate way is to bundle these two sets of unrelated cloud functions so they don't waste a lot of unnecessary loading once they are deployed.
A Visual Studio Code project is simply a way to package your code. You can create 2 folders in your project, one for each set of functions, each with its own Firebase configuration.
Only the source repository can be a constraint here, especially if 2 different teams work on the code base and neither needs to see the other team's set of functions.
In addition, if you open a VS Code project with both sets of functions, it will take more time to load and lint them.
On the Google Cloud side, each function is deployed in its own container. Of course, because the packaging engine (Buildpacks) doesn't know which code a function actually uses, the whole code base is added to the container. When the app starts, the whole code base is loaded: the more code you have, the longer the init takes.
If you segregate each set of functions into a different folder in your project, only the code for set A will be embedded in the containers of the A functions, and the same for B.
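As a sketch of that layout: newer versions of the Firebase CLI let firebase.json point at more than one functions folder (the codebase field below assumes a CLI version that supports multiple codebases, so verify against your setup):

    {
      "functions": [
        { "source": "functions-a", "codebase": "set-a" },
        { "source": "functions-b", "codebase": "set-b" }
      ]
    }

With a split like this, a deploy can target one codebase at a time (e.g. firebase deploy --only functions:set-a), so only the code under functions-a is built and shipped.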
Now, of course, if you put all the functions at the same level even though they don't use the same data, the same code and so on, you get:
A mess when trying to understand which function does what
A mess in the container, which loads too many things
So it's not a great code base design, but that goes beyond the "Google Cloud" topic; it's an engineering choice.
Initially I was really confused about GCP projects vs. VS Code IDE projects...
On the question of how cloud functions are "grouped" into containers during deployment: I strongly believe that each cloud function "image" is deployed into its own dedicated, separate container in GCP. I think Guillaume described it absolutely correctly. At the same time, the source code packed into an image might have a lot of redundancy, and there may be plenty of resources which are not used by the given cloud function. It may be a good idea to minimize that.
I would also suggest that neither the development nor the deployment process should depend on the client-side IDE, and ideally the deployment should not happen from the client machine at all, to eliminate any local configuration/version variability between different developers. If we work together, I may use vi, you VS Code, and Guillaume GoLand, for example. There should not be any difference in deployment, as the deployment process should take all code from the (origin/remote) git repository rather than from the local machine.
In terms of "packaging", for every cloud function it may be useful to logically consolidate all required code (and other files), so that all required files are archived together on deployment and pushed into a dedicated GCS bucket, excluding any unused (not required) files from such archives. In that case we might have many archives, one per cloud function. The deployment process should redeploy only the modified archives and not touch unmodified cloud functions.
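As one hedged sketch of deploying from the repository rather than from a laptop, a Cloud Build trigger on the main branch could run something like the following (the Node image, the functions folder name, and the assumption that the build service account already has the permissions firebase-tools needs are all things you would have to adapt):

    # cloudbuild.yaml -- minimal sketch, authentication setup omitted
    steps:
      - name: "node:18"
        entrypoint: "bash"
        args:
          - "-c"
          - |
            npm ci --prefix functions
            npx firebase-tools deploy --only functions --project "$PROJECT_ID" --non-interactive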

How to delete outdated Firebase Cloud function containers from GC Storage?

So recently Firebase started charging for Cloud Functions container storage: https://firebase.google.com/pricing
No free usage: $0.026/GB
I have deployed 2 functions several times (no more than 10 times; I can't remember the exact count, but this is still pretty low, IMO). I am now already being billed a small amount (fractions of a cent for now). So it seems that if I deploy the functions another few dozen times, I'll get close to a dollar, because old (and unused) containers are not deleted from the storage bucket.
Is there a way to safely delete outdated, not used containers to free some space? Well, it may seem that a few cents are not worth the time, but still, that's not what a free tier should be like.
I found that the only robust solution to this ongoing issue (for now) is to periodically remove all of the artifact files (following Doug's instructions). As noted by others, removing some of the files can cause subsequent deploy errors (I experienced these).
IMPORTANT: Only delete the artifact files, NOT the folders as this can also cause issues.
You can do partial or full deploys as normal without any issues (it seems that the artifact files are only referenced during the build/deploy process).
Not ideal by a long shot, but at least reduces the storage usage to the minimum (until it starts accumulating again).
Edit: I have experimented with lifecycle rules in the artifacts bucket to try to automate clearing out the containers, but the parameters provided cannot guarantee that ALL of them will be cleared in one hit (which is what you need).
For convenience, you can see the artifacts bucket from within the Firebase Storage UI by selecting the "Add Bucket" option and importing the buckets from GCP.
Go to the Cloud console
Select "Cloud Storage -> Browser" from the products in the hamburger menu.
You will see multiple storage buckets there. Simply dig into the buckets that start with "artifacts" or end with "cloudbuild" and delete the old files (by date) that you don't want.
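If you prefer the command line, the same cleanup can be scripted with gsutil. The bucket and path below are assumptions (Container Registry is usually backed by a bucket named artifacts.<PROJECT_ID>.appspot.com), so list the buckets first and check what is actually there:

    # Inspect the artifact buckets and their contents before deleting anything
    gsutil ls
    gsutil ls -r gs://artifacts.YOUR_PROJECT_ID.appspot.com

    # Remove the old image objects (objects only, not the bucket itself);
    # the path is hypothetical -- adjust it to what the listing shows.
    gsutil -m rm "gs://artifacts.YOUR_PROJECT_ID.appspot.com/containers/images/**"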
In the case of Firebase Cloud Functions, you can see from the documentation (the "lifecycle of a background function" section):
When you update the function by deploying updated code, instances for older versions are cleaned up along with build artifacts in Cloud Storage and Container Registry, and replaced by new instances.
When you delete the function, all instances and zip archives are cleaned up, along with related build artifacts in Cloud Storage and Container Registry. The connection between the function and the event provider is removed.
This means that there is no need to clean up manually; the firebase deploy scripts do it automatically.
You should not remove build artifacts, since Cloud Functions scale automatically and new instances are built from these artifacts.
I don't really think the cost is much of a problem, since it's $0.026/GB, so you would need very roughly 76 functions to pay $1 for their artifact storage (taking the approximate size of one function's artifacts as 500 MB). Also, the artifact size should not grow with every function, since it's basically the size of the dependencies, which is more or less independent of the number of deployed functions.

Firebase storage artifacts is huge and keeps increasing

I've just noticed that my app's storage started to increase significantly.
After having a closer look, it appeared that this was caused by the "artifacts" bucket.
I can see that the "artifacts" storage keeps increasing by about ~800Mb every week which worries me to say the least.
I assume this is related to Firebase Functions deploys (or not?), but is this really expected? Can I clean up these artifacts safely?
I'd appreciate any suggestions on how to safely handle the storage size in this case and keep its consumption to a minimum.
Figured out a solution: it turns out there is a way to set up an auto-deletion rule in the Google Cloud console for the images that clutter the storage.
Go to the Google Cloud console, select your project -> Storage -> Browser https://console.cloud.google.com/storage/browser
Select the "artifacts" bucket
Under the "Lifecycle" tab, add a rule to auto-delete old images (in my case I chose "delete after 1 day since update", which works fine for me)
Storage is safe now!
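The same rule can also be applied from the command line with gsutil. A minimal sketch, assuming your artifacts bucket name; note that the age condition counts days since creation, which only approximates the console's "days since update" option:

    # Hypothetical bucket name -- replace with your actual artifacts bucket.
    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        { "action": { "type": "Delete" }, "condition": { "age": 1 } }
      ]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://artifacts.YOUR_PROJECT_ID.appspot.com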
NOTE: if you face any deployment issues later (e.g. if you deploy several days in a row and get an error on deploy), just delete the whole "container" folder manually in the artifacts bucket, which should solve it, and then redeploy. (Make sure not to delete the artifacts bucket itself!)
I hope the Firebase team will improve this; the current behavior is confusing, as it easily leads to an unexpected bill unless you take extra steps to prevent it. And you won't know it will happen until it does.
I assume this is related to Firebase Functions deploys (or not?), but is this really expected?
Yes, it's expected. Every time you deploy functions, Cloud Build will use a dedicated Cloud Storage space for the built docker image, and retain it until you delete it.
Can I clean up these artifacts safely?
Yes, but then you won't be able to easily revert to a prior image. You would have to deploy again from your own source code.
On top of GCP's lifecycle settings for artifact images, you can also consider the following for further optimization and cost reduction of your Firebase Functions deployments:
Clean up your functions folder and don't put unnecessary files in it, as we do not know whether Google uploads only the files required by your dependencies or the whole functions folder. Feel free to refine this item if anyone can confirm this.
Remove unnecessary dependencies from functions/package.json and functions/node_modules, and unnecessary require statements from your JS files, e.g. functions/index.js.
Compact and compress your functions' JS files by removing unnecessary comments, console logging, etc. You can achieve this with the help of the grunt and uglify NPM packages. Again, we're not sure whether Cloud Build (or any other part of Google's functions deployment system) auto-compresses the function images before storing them in Container Registry or Cloud Storage (please refine this item if you have a better answer).
Organize your functions properly by creating relevant function groups, so that you can deploy only certain groups of functions rather than simply running firebase deploy --only functions (see the sketch after this list).
If necessary, write code that automatically detects and resolves environmental differences, e.g. environment variables, between the local emulators and production/staging, because the Firebase emulators and production environments may not be 100% consistent. If you don't do that, you may end up needing to deploy several times per day because of small oversights, which will drive up your deployment costs.
If necessary, change your deployment cadence from daily to weekly, or even from weekly to monthly, depending on your monthly budget, criticality, and urgency.
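As a sketch of the function groups mentioned above (the group and file names are placeholders):

    // functions/index.js
    // Each group lives in its own file and is re-exported under a group name.
    const images = require("./images");   // e.g. exports.resize = functions.https.onRequest(...)
    const reports = require("./reports"); // e.g. exports.weekly = functions.https.onRequest(...)

    exports.images = images;
    exports.reports = reports;

You can then deploy a single group with firebase deploy --only functions:images, leaving the other group's functions untouched.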
Lastly, I hope the community can also help to add more recommended cost reduction plans and strategies to this post, in order to help small businesses and individuals survive better on Firebase and the Google Cloud Platform as a whole. Even just some links to good articles would help. Thanks!

Google Cloud Functions - Custom library used in many function

I have 2 functions on Google Cloud Functions, using python, that use the same library.
The file organization I have is:
/libs/libCommon.py
/funcA/main.py
/funcB/main.py
Both function A and function B use libCommon.
Through the docs I only see ways for including subdirectories.
There is no clear way to include a parent directory.
What's the best way to organize the code?
Thanks
You can't share code between functions. However, you have several solutions to achieve this:
Create a package, publish it on PyPI, and add it as a dependency in your requirements.txt file. Issue: PyPI is public, unless you pay for a private package repository.
Create a deployment script which copies the sources to where they should be, and then runs the gcloud command (a sketch follows below). I'm not a fan of scripting, especially if your project becomes complex.
Use Cloud Run instead of Cloud Functions. You can create 2 different containers, or only one with 2 entry points. Cloud Run has many advantages:
If your requests can be processed in parallel on the same instance, you can save money.
If not, set the concurrency parameter to 1 (same behavior as a function).
Your code can be shared between several endpoints.
Your code is portable
Your service always has 1 vCPU for processing, and memory is customizable. You can also save money compared to functions.
Of course, I'm a Cloud Run fan, but I think it's the best solution. As for Storage events, they're not an issue: set up a notification to publish Storage events to Pub/Sub and then set up a push subscription to your service.
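For the second option above (the deployment script), a minimal sketch could look like this; the function names, runtime, trigger, and entry point are assumptions to adapt:

    #!/usr/bin/env bash
    # Hypothetical deploy script: copy the shared library next to each function's
    # main.py so it is included in that function's upload, then deploy with gcloud.
    set -euo pipefail

    for fn in funcA funcB; do
      cp libs/libCommon.py "$fn/"
      gcloud functions deploy "$fn" \
        --runtime=python310 \
        --trigger-http \
        --source="$fn" \
        --entry-point=main
      rm "$fn/libCommon.py"   # keep the working tree clean afterwards
    done

Inside funcA/main.py you can then simply import libCommon, since the copy sits next to it at deploy time.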

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress on EC2. I created a Load Balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the Load Balancer. But I'm still getting database errors and have to restart the instances. If I'd like to put 4 instances behind the Load Balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched the AMI (the default value is 1), and I'm not sure if I should enter 3 or 4 to create multiple instances in one click.
Also, if I make updates on the Wordpress1 instance, will those updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances, a database, and so on, you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and autoscaling groups (example wordpress template).
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file, or in your case upgrade WordPress, it will only happen on one server, and that server could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build, in your case the frontend web servers and the database, should be allowed to fail without interrupting your service.
Another point: you should avoid doing things by hand; automation is the key.
An environment where you need to link your servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted, and exchanged.
For your web servers you can use autoscaling groups to get this behavior.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the load balancer as soon as it is considered healthy.
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
Applying upgrades in the cloud can be tricky, and there are different ways to do this: for example, using a shared NFS mount for the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud is maybe not the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common requirement is the database, which will need to exist on its own instance outside of the autoscaled instances. You could use RDS for this.
WordPress also stores file uploads, plugins, and themes within the wp-content folder inside the WordPress install. By default, if you upload a file, it will be stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
