Google Cloud Functions - Custom library used in many functions - firebase

I have 2 functions on Google Cloud Functions, using python, that use the same library.
The file organization I have is:
/libs/libCommon.py
/funcA/main.py
/funcB/main.py
Both function A and function B use libCommon.
Through the docs I only see ways of including subdirectories.
There is no clear way to include a parent directory.
What's the best way to organize the code?
Thanks

You can't share code between functions. However, you have several solutions to achieve this:
Create a package, deploy it on PyPI and add it as a dependency in your requirements.txt file. Issue -> PyPI is public unless you pay for a private repository
Create a deployment script which copies the sources where they should be, and then runs the gcloud command (see the sketch after this list) -> I'm not a fan of scripting, especially if your project becomes complex
Use Cloud Run instead of Cloud Functions. You can create 2 different containers, or only one with 2 entry points. Cloud Run has many advantages:
If your requests can be processed in parallel on the same instance, you can save money.
If not, set the concurrency parameter to 1 (same behavior as a function).
Your code can be shared between several endpoints.
Your code is portable
Your service always has 1 vCPU for processing, and the memory is customizable. You can also save money compared to Functions.
Of course, I'm a Cloud Run fan, but I think it's the best solution. As for Storage events, they're not an issue: set up a notification to publish storage events to Pub/Sub, and then set up a push subscription to your service.
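To illustrate the second option, here is a minimal sketch of such a copy-then-deploy script using the layout from the question; the runtime and trigger flags are assumptions you would adapt to your functions:

    # deploy.py - hypothetical helper run from the repository root. It copies
    # the shared /libs folder next to each function's main.py (gcloud only
    # uploads a single directory), then deploys that directory.
    import shutil
    import subprocess
    from pathlib import Path

    FUNCTIONS = ["funcA", "funcB"]  # one directory per Cloud Function

    for name in FUNCTIONS:
        func_dir = Path(name)
        # Ship a copy of the shared library inside the function's source tree.
        shutil.copytree("libs", func_dir / "libs", dirs_exist_ok=True)
        subprocess.run(
            ["gcloud", "functions", "deploy", name,
             "--runtime", "python310",   # assumption: adapt to your runtime
             "--trigger-http",           # assumption: adapt to your trigger
             "--source", str(func_dir)],
            check=True,
        )

Each function's main.py can then do a plain from libs import libCommon, because the copied folder sits inside its own deployment directory.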

Related

Firebase Cloud Functions -- package all in a single VS Code project, or create multiple VS Code projects?

I am new to cloud functions and a little unclear about the way they are "containerized" after they are written and deployed to my project.
I have two quite different sets of functions. One set deals with image storage and Firebase, the other deals with some time-consuming computations. The two sets (let's call them A and B) of functions use different node modules and have no dependencies on each other, except that they both use Firestore.
My question is whether it matters if I put all the functions in a single VS Code project, or if I should split them up into separate projects. One part of the question is on the deployment side (it seems like you deploy all the functions in the project when you run firebase deploy, even if some of the functions haven't changed), but probably more important is whether functions which don't need sharp or other image-manipulation packages are "containerized" together with functions which need stats and math-related packages, and whether it makes any difference how they are organized into projects.
I realize this question is high level and not about specific code, but it's not so clear to me from the various resources what the appropriate way is to bundle these two sets of unrelated cloud functions so as not to waste a lot of unnecessary loading once they are deployed to Firebase.
A Visual Studio Code project is simply a way to package your code. You can create 2 folders in your project, one for each set of functions, each with its own Firebase configuration (see the firebase.json sketch at the end of this answer).
Only the source repository can be a constraint here, especially if 2 different teams work on the code base and each one doesn't need to see the code of the other set of functions.
In addition, if you open a VS Code project with the 2 sets of functions, it will take more time to load and lint them.
On the Google Cloud side, each function is deployed in its own container. Of course, because the packaging engine (Buildpacks) can't know which files each function needs, the whole code base is added inside the container. When the app starts, the whole code is loaded. The more code you have, the longer the init takes.
If you have segregated your sets of functions into different folders in your project, only the code for set A will be embedded in the container of the A functions, and the same for B.
Now, of course, if you put all the functions at the same level even though they don't use the same data, the same code and so on, it's:
A mess to understand which function does what
A mess in the container, which loads too many things
So it's not a great code base design, but that's beyond the "Google Cloud" topic; it's an engineering choice.
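As a sketch of that layout: recent versions of the Firebase CLI let a single firebase.json declare one codebase per folder, so each set of functions deploys independently (the folder and codebase names below are made up for illustration):

    {
      "functions": [
        { "source": "functions-images",  "codebase": "images" },
        { "source": "functions-compute", "codebase": "compute" }
      ]
    }

With that in place, firebase deploy --only functions:images touches only the image set and leaves the other one alone.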
Initially I was really confused about GCP projects vs VS Code IDE projects...
On the question of how cloud functions are "grouped" into containers during deployment: I strongly believe that each cloud function "image" is "deployed" into its own dedicated and separate container in GCP. I think Guillaume described it absolutely correctly. At the same time, the "source" code packed into an "image" might have a lot of redundancy, and there may be plenty of resources which are not used by the given cloud function. It may be a good idea to minimize that.
I would also like to suggest that neither the development nor the deployment process should depend on the client-side IDE, and ideally the deployment should not happen from the client machine at all, to eliminate any local configuration/version variability between different developers. If we work together, I may use vi, you VS Code, and Guillaume GoLand, for example. There should not be any difference in deployment, as the deployment process should take all code from the (origin/remote) git repository rather than from the local machine.
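For instance, a repository-driven deployment could be a single Cloud Build step; the function name, runtime and trigger below are purely illustrative and would need to be adapted:

    # cloudbuild.yaml - hypothetical CI step. Cloud Build checks the code out
    # of the repository and deploys from there, so no developer machine is involved.
    steps:
    - name: gcr.io/google.com/cloudsdktool/cloud-sdk
      entrypoint: gcloud
      args: ["functions", "deploy", "funcA",
             "--runtime=nodejs18", "--trigger-http", "--source=funcA"]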
In terms of "packaging", for every cloud function it may be useful to "logically" consolidate all required code (and other files), so that all required files are archived together on deployment and pushed into a dedicated GCS bucket, while excluding from such "archives" any unused (not required) files. In that case we might have many "archives", one per cloud function. The deployment process should then redeploy only modified "archives" and leave unmodified cloud functions untouched.
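One simple way to keep unused files out of a function's uploaded archive is a .gcloudignore file next to the source (this applies to gcloud-based deploys; the Firebase CLI has its own ignore list in firebase.json). The patterns here are just examples:

    # .gcloudignore - files matching these patterns are not uploaded on deploy
    .git
    node_modules/
    tests/
    *.md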

Specifying RAM in firebase cloud function

Trying to boost the memory of an existing cloud function using the GCP Functions console. I've been able to perform this operation before using Edit Function (under Variables, Networking and Advanced Settings), but now, after specifying my preferred memory limit, I am taken to a secondary screen asking me to upload my code. This seems redundant, as I'd just like to redeploy with more RAM.
Memory, CPU, region, runtime, and so on are all options you choose at the time of deployment. They are not dynamically configurable options that you can simply tweak over time without redeploying. You're being asked to upload your code because you need to go through the process of deployment again. The system doesn't assume that you want to use exactly what you had previously.
Since you tagged this with "firebase", I'll assume that you're using the Firebase CLI for deployment. In that case, you will have to change the function builders in your code (e.g. the runWith() builder in the Node.js firebase-functions SDK) to use the new settings you want to apply to the next deployment.
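A minimal sketch of that flow, assuming the Node.js SDK and a placeholder function name:

    # 1. Raise the memory on the function builder in your source, e.g.:
    #      functions.runWith({ memory: '512MB' }).https.onRequest(...)
    # 2. Redeploy just that function:
    firebase deploy --only functions:myFunction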

Firebase cloud function inline code editor

I'm trying to reduce the cold start time of my Firebase Cloud Functions. I have around 30 functions that use different imports.
As mentioned in an info video, it's better to load only the imports that your cloud function needs.
In the Google Cloud console, you can view your code.
But if I scroll down, lib/index.js contains all my functions.
There's an option to edit the code.
Would it be harmful to delete all the other functions and the imports that aren't used by that specific function in lib/index.js with the inline code editor? (Even though I wrote my functions in TypeScript in Visual Studio Code.)
Thanks!
No, it would not be harmful to the other functions. If you edit the code of a Cloud Function in the console, it will not modify the code used by other Cloud Functions. Each function is fully isolated from the others, even if they share the same deployment; the code is copied into each function separately.
That said, editing functions deployed by the Firebase CLI should only be done as an experiment. When it comes time to actually deploy code, you should again use the CLI to finalize everything.
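Once you are done experimenting, a normal CLI deploy from your local TypeScript source will overwrite any console edits, for example:

    # Rebuilds and redeploys all functions from your source tree; any inline
    # console edits are replaced by what is in the source.
    firebase deploy --only functions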

How to use aria2c on cloud function?

In the same way that a cloud function can run ffmpeg, is it possible to download and run aria2c? If yes, how?
PS. Cloud Run isn't an option right now.
Edit: Something like this https://blog.qbatch.com/aws-lambda-custom-binaries-support-available-for-rescue-239aab820d60
Executing custom binaries like aria2c in the runtime is not supported in Cloud Functions.
You can find a hacky solution here: Can you call out to FFMPEG in a Firebase Cloud Function. This involves having a statically linked binary (so you might need to recompile aria2c, as I'm assuming it won't be statically linked by default and will rely on more system packages like libc, libxxxx...) and bundling this binary with your function deployment package.
You should really consider using Cloud Run for this use case. Cloud Run gives you the flexibility of creating your own container image that can include the binaries and libraries you want.
You can find a tutorial that bundles custom binaries on Cloud Run here: https://cloud.google.com/run/docs/tutorials/system-packages
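A minimal sketch of such an image, assuming a Python app and Debian's aria2 package (which provides the aria2c binary); the app-specific lines are placeholders:

    # Dockerfile - hypothetical Cloud Run image bundling the aria2c binary
    FROM python:3.10-slim
    RUN apt-get update && apt-get install -y aria2 \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY . .
    RUN pip install -r requirements.txt
    # The app can now invoke /usr/bin/aria2c at runtime.
    CMD ["python", "main.py"]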

Creating a one off cron job using firebase and app engine

I'm creating an app where a user can create a piece of data that will be presented in the UI at a later date. I'm trying to find a way to create cron entries dynamically using either Java code (for Android devices) or Node.js code (a Firebase Cloud Function that generates a cron job). I haven't been able to find a way to do it dynamically, and based on what I've read it may not be possible. Does anyone out there know a way?
Presently the only way to create GAE cron jobs is via deployment of a cron configuration file (by itself or, in some cases, together with the app code), which today can only be done via CLI tools (from the GAE or Cloud SDKs).
I'm unsure if you'd consider programmatically invoking such CLI tools as qualifying to 'create cron entries dynamically'. If you do, generating the cron config file and scripting the desired CLI-based deployment would be an approach (see the cron.yaml sketch below).
But creating jobs via an API is still a feature request; see How to schedule repeated jobs or tasks from user parameters in Google App Engine?. That question also contains another potentially acceptable approach (along the same lines as the one mentioned in #ceejayoz's comment).
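For reference, the cron configuration file in question looks like this; the description, URL and schedule are illustrative, and the file is deployed with gcloud app deploy cron.yaml:

    # cron.yaml - GAE cron is inherently recurring, so for a one-off job the
    # handler itself must check whether the user-created item is due yet.
    cron:
    - description: "check for user-scheduled items"
      url: /tasks/check-due
      schedule: every 24 hours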
