We have a couple of functions that require a lot of dependencies to work. We have so-called jar/npm/lib hell going on, and we'd like to limit dependencies per function rather than at the project level. Is this possible?
Edit: trying to rephrase the question as Doug instructed: We're using Firebase Functions and we'd like to isolate each function's dependencies from the other functions' dependencies. We need version x.y.z of dependency A for function 1 to work, but function 2 needs version f.y.z of the same dependency A to work.
I suspect that the only way around this is to deploy to another project, but I wanted to ask here before committing to that.
So, can we have multiple versions of the same dependency in one Firebase Functions deployment?
Edit 2: Split the other part of the question into a separate post: Firebase Functions: is it OK to divide functions to multiple projects
The answer is no: dependencies can't be controlled per function, only per deployment. This is more a limitation of npm itself than of Firebase / Google Cloud Functions.
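To illustrate: each Firebase Functions deployment has a single functions/package.json, and every function exported from that deployment resolves the same installed version. A minimal sketch (the dependency name is a placeholder):

{
  "dependencies": {
    "dependency-a": "1.2.3"
  }
}

Both function 1 and function 2 would get 1.2.3 here; there is no per-function override.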
Related
I am new to cloud functions and a little unclear about the way they are "containerized" after they are written and deployed to my project.
I have two quite different sets of functions. One set deals with image storage and Firebase, the other with some time-consuming computations. The two sets (let's call them A and B) use different Node modules and have no dependencies on each other, except that both use Firestore.
My question is whether it matters if I put all the functions in a single VS Code project, or if I should split them up into separate projects. One part of the question is about deployment: it seems like you deploy all the functions in the project when you run firebase deploy, even if some of them haven't changed. Probably more important, though: are functions which don't need sharp or other image-manipulation packages "containerized" together with functions which need stats and math-related packages, and does it make any difference how they are organized into projects?
I realize this question is high level and not about specific code, but it's not clear to me from the various resources what the appropriate way is to bundle these two sets of unrelated cloud functions so as not to waste a lot of unnecessary loading once they are deployed.
A Visual Studio Code project is simply a way to package your code. You can create 2 folders in your project, one for each set of functions, each with its own Firebase configuration.
Only the source repository can be a constraint here, especially if 2 different teams work on the code base and neither needs to see the other's set of functions.
In addition, if you open a VS Code project with both sets of functions, it will take more time to load and lint them.
On the Google Cloud side, each function is deployed in its own container. However, because the packaging engine (Buildpacks) doesn't know which code each function actually uses, the whole codebase is added to each container. When the app starts, the whole codebase is loaded: the more code you have, the longer the init takes.
If you segregate your sets of functions into different folders in your project, only the code for set A will be embedded in the containers of set A's functions, and the same for B.
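As a side note, newer versions of the Firebase CLI formalize this folder split with multiple codebases in firebase.json; roughly (folder and codebase names are placeholders):

{
  "functions": [
    { "source": "functions-a", "codebase": "set-a" },
    { "source": "functions-b", "codebase": "set-b" }
  ]
}

Each source folder then carries its own package.json and index.js, so each set is built and deployed with only its own dependencies.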
Now, of course, if you put all the functions at the same level even though they don't use the same data, the same code and so on, it's:
a mess to understand which function does what
a mess in the container, which loads too many things
So it's not great codebase design, but that's beyond the "Google Cloud" topic; it's an engineering choice.
Initially I was really confused about GCP projects vs. VS Code IDE projects...
On the question of how cloud functions are "grouped" into containers during deployment: I strongly believe that each cloud function "image" is deployed into its own dedicated, separate container in GCP. I think Guillaume described it absolutely correctly. At the same time, the "source" code packed into an "image" might have a lot of redundancy, and there may be plenty of resources which are not used by the given cloud function. It may be a good idea to minimize that.
I would also suggest that neither the development nor the deployment process should depend on the client-side IDE, and ideally deployment should not happen from the client machine at all, to eliminate any local configuration/version variability between different developers. If we work together, I may use vi, you VS Code, and Guillaume GoLand, for example. There should not be any difference in deployment, as the deployment process should take all code from the (origin/remote) git repository rather than from a local machine.
In terms of "packaging", for every cloud function it may be useful to "logically" consolidate all required code (and other files), so that all required files are archived together on deployment and pushed into a dedicated GCS bucket, excluding any unused (not required) files from the archive. In that case we might have many archives, one per cloud function, and the deployment process should redeploy only modified archives and leave unmodified cloud functions untouched.
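A hedged sketch of that flow with the gcloud CLI, which can deploy a function straight from a zipped source archive in a GCS bucket (bucket, archive, and function names are placeholders):

gcloud functions deploy functionA \
  --runtime=nodejs18 \
  --trigger-http \
  --source=gs://my-deploy-bucket/functionA.zip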
In the guide for Firestore on deleting a whole collection, there is an example using firebase_tools.firestore.delete. Where can I find documentation about this function? I only found it mentioned in the CLI reference but with little detail.
In most cases the firebase-tools library is only used as the CLI for Firebase projects. So the documentation for that command is in the reference docs for the CLI, but it's admittedly a bit lightweight for the Firestore delete command.
Luckily the firebase-tools library is open-source, so you can check exactly what it does by looking at the code. The firestore:delete command is implemented here and here.
If you run into specific problems while implementing similar functionality, or have specific questions about it, I recommend posting those specific problems/questions. For example: the command indeed deletes subcollections.
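For reference, the guide's pattern boils down to calling the CLI library programmatically from your own code; a minimal sketch, assuming firebase-tools is listed as a dependency (older versions may also require an auth token option):

const firebase_tools = require('firebase-tools');

// Recursively delete the collection at `path`, including its subcollections.
// `yes: true` skips the CLI's interactive confirmation prompt.
function deleteCollection(path) {
  return firebase_tools.firestore.delete(path, {
    project: process.env.GCLOUD_PROJECT,
    recursive: true,
    yes: true,
  });
}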
I've been using the Firebase ecosystem for over 2 years, but since Google lacks decent documentation I often come here to ask very basic things that we should learn right after "hello world".
When using Firebase Functions I try to modularize my code to keep it readable and easy to maintain. The way I managed to do it was by having an "index" file and multiple subfiles which contain the logic for complex functions...
Although it works very well, my index file is getting super long since I keep adding more and more functions, and it also needs to deal with some configuration for each of those specific functions...
I was messing around in the Firebase dashboard https://console.cloud.google.com/functions/list? and found that it is possible to create a new function via this online form... When doing that, the Firebase backend somehow creates a new "runtime" for the function: each function created through this form has its own "index.js" and "package.json".
How can I do this without needing to create every function through this form?
How can I simply code a new function ecosystem, deploy it using the Firebase CLI, and get this separated structure for it?
All Cloud Functions are logically isolated from each other at runtime. While they might share some common code at deployment, they don't share anything else.
The Firebase CLI requires that all your functions be defined in a single entrypoint, which is your index.js. That is just how it works. If you don't like that, you can deploy functions individually using gcloud, but you will not be able to use the firebase-functions module to declare and implement your function. gcloud uses different conventions.
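For comparison, a minimal sketch of a gcloud-style HTTP function (names are placeholders): the handler is a bare Express-like (req, res) function exported directly, with no firebase-functions wrapper.

// index.js: exported directly instead of being wrapped
// in functions.https.onRequest().
exports.fn = (req, res) => {
  res.send('ok');
};

It would be deployed with something like gcloud functions deploy fn --runtime=nodejs18 --trigger-http.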
If you want to continue to deploy with the Firebase CLI, you can add the new function to your index.js. It can be deployed separately from your other functions using the --only argument. For example, if your new function is called "fn":
firebase deploy --only functions:fn
This will deploy just fn and none of the other functions defined in your index. You can read about this and more options in the Firebase CLI documentation for deploying functions.
If you absolutely do not want to have all your functions in a single index.js, you can split the definitions among multiple files and require or import them into the main index.js. It's up to you how you want to organize your source files, using the facilities provided by Node.js and JavaScript.
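A minimal sketch of that split (file and function names are placeholders):

// src/images.js: one file per logical group of functions
const functions = require('firebase-functions');

exports.resizeImage = functions.storage.object().onFinalize(async (object) => {
  // ... image-handling logic ...
});

// index.js: the single entrypoint just re-exports everything
const images = require('./src/images');

exports.resizeImage = images.resizeImage;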
In the same way that a cloud function can run ffmpeg, is it possible to download and run aria2c? If so, how?
PS. Cloud Run isn't an option right now.
Edit: Something like this https://blog.qbatch.com/aws-lambda-custom-binaries-support-available-for-rescue-239aab820d60
Executing custom binaries like aria2c in the runtime is not supported in Cloud Functions.
You can find a hacky solution here: Can you call out to FFMPEG in a Firebase Cloud Function. This involves having a statically linked binary (so you might need to recompile aria2c, as I'm assuming it won't be statically linked by default and will rely on system packages like libc, libxxxx...) and bundling that binary with your function deployment package.
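If you go that route, the sketch below shows the general shape: commit the statically linked binary next to your function source (the path is a placeholder), mark it executable, and spawn it at runtime.

const { execFile } = require('child_process');
const path = require('path');

// Assumes a statically linked aria2c binary committed at functions/bin/aria2c
// and marked executable before deployment.
const ARIA2C = path.join(__dirname, 'bin', 'aria2c');

// /tmp is the only writable directory in the Cloud Functions filesystem.
function download(url) {
  return new Promise((resolve, reject) => {
    execFile(ARIA2C, ['--dir=/tmp', url], (err, stdout) => {
      if (err) return reject(err);
      resolve(stdout);
    });
  });
}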
You should really consider using Cloud Run for this use case. Cloud Run gives you the flexibility of creating your own container image that can include the binaries and libraries you want.
You can find a tutorial that bundles custom binaries on Cloud Run here: https://cloud.google.com/run/docs/tutorials/system-packages
I'm using Travis to automatically deploy my Firebase hosted website and cloud functions as I push to GitHub, as detailed here. However, even for my small website with a limited amount of cloud functions, deploying all of the functions takes quite a long time. Were I deploying manually, I would be able to use --only to specify precisely those functions that I actually changed. Is there a way to make this information available to Travis, so that only the necessary functions are rebuilt?
https://m.youtube.com/watch?v=iyGHW4UQ_Ts (min 30 and following)
This guy solves the problem by copying all functions to a cloud bucket and then diffing every file. That works well if all your logic is in one file, but it's not what you want for larger projects. For my own project I used webpack to create one file per function that includes its imports. Then I generate an md5 hash for each file and save it to a functions-lock.json. On the next run I can easily check against the old hash values and deploy only the changed functions. The CI should manage the state of the lock file by uploading it to the cloud or doing some git magic.
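A condensed sketch of that hashing step (the dist layout and lock-file name are assumptions from the description above):

const crypto = require('crypto');
const fs = require('fs');

// Assumes webpack emitted one bundle per function into ./dist,
// e.g. dist/myFunction.js for the cloud function "myFunction".
const lock = fs.existsSync('functions-lock.json')
  ? JSON.parse(fs.readFileSync('functions-lock.json', 'utf8'))
  : {};
const changed = [];

for (const file of fs.readdirSync('dist')) {
  const hash = crypto
    .createHash('md5')
    .update(fs.readFileSync(`dist/${file}`))
    .digest('hex');
  const name = file.replace(/\.js$/, '');
  if (lock[name] !== hash) changed.push(name);
  lock[name] = hash;
}

fs.writeFileSync('functions-lock.json', JSON.stringify(lock, null, 2));
// e.g. pass to: firebase deploy --only functions:a,functions:b
console.log(changed.map((n) => `functions:${n}`).join(','));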
Unfortunately this isn't going to be simple to do -- the Firebase CLI deploys all of your functions because it's next-to-impossible to just analyze the code and figure out which functions are impacted (since you can require other files, you might have updated dependencies but no files changed, etc.).
One hack I can think of would be to have named branches for functions or groups of functions. Then you could git push to the branch of the specific function you want to deploy, and have a script that uses the branch name as a signal to pass --only functions:<fnName> to the firebase deploy command. That's not the most glamorous solution, but, depending on how much this bugs you, it might help.
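A sketch of the CI step under that convention, assuming branches named like deploy/<fnName>:

BRANCH=$(git rev-parse --abbrev-ref HEAD)
firebase deploy --only "functions:${BRANCH#deploy/}"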
So this is a bit late but the long deployment times have bothered us for a while now.
Our solution is based on CircleCI but it should be possible to adapt.
First we get all changed files in the last merged PR for our branch with
git log -m -1 --name-only --pretty="format:" ${process.env.CIRCLE_SHA1}
CIRCLE_SHA1 is the SHA of the last merge commit, i.e. featurebranch -> master.
Then we get all the function filenames from our /functions/ directory and use madge to generate an array of all the dependencies those functions have.
Next we go through all changed files that we got from git and check if their filename is part of the dependency array for a specific cloud function; if so, we add that cloud function to another array.
Once this is done we pretty much have an array of all cloud functions affected by the changed files, which we can then map to their actual cloud function names for deployment.
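A condensed sketch of that matching step, assuming madge is installed and each cloud function lives in its own file under functions/src (the layout is an assumption):

const madge = require('madge');

// changedFiles: the list of filenames from the git log command above.
async function affectedFunctions(changedFiles) {
  const result = await madge('functions/src');
  const tree = result.obj(); // { 'fnFile.js': ['dep1.js', ...], ... }
  // Note: git paths and madge paths must be normalized to the same base.
  return Object.keys(tree).filter((fnFile) =>
    changedFiles.some((f) => f === fnFile || tree[fnFile].includes(f))
  );
}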
Now, instead of always deploying 75 cloud functions, which takes 45 minutes, we deploy maybe only 20.