Amplify not cleaning up after app deletion - aws-amplify

When I create an app (using the console), I noticed that it creates some Lambda functions with names starting with 'amplify-'. When I delete the app, these Lambda functions remain. Is this normal behaviour? I've created and deleted several apps now, and I no longer know which Lambda belongs to which app, so I'm not sure which ones to clean up.

Related

Firebase Cloud Functions -- package all in a single VS Code project, or create multiple VS Code projects?

I am new to cloud functions and a little unclear about the way they are "containerized" after they are written and deployed to my project.
I have two quite different sets of functions. One set deals with image storage and Firebase, the other with some time-consuming computations. The two sets (let's call them A and B) use different node modules and have no dependencies on each other, except that both use Firestore.
My question is whether it matters if I put all the functions in a single VS Code project, or whether I should split them into separate projects. One aspect is deployment: it seems like you deploy all the functions in the project when you run firebase deploy, even if some of them haven't changed. Probably more important, though, is whether functions which don't need sharp or other image-manipulation packages are "containerized" together with functions which need stats and math-related packages, and whether it makes any difference how they are organized into projects.
I realize this question is high level and not about specific code, but it's not clear to me from the various resources what the appropriate way is to bundle these two sets of unrelated cloud functions so they don't waste a lot of unnecessary loading once they are deployed.
A Visual Studio Code project is simply a way to package your code. You can create two folders in your project, one for each set of functions, each with its own Firebase configuration.
Only the source repository can be a constraint here, especially if two different teams work on the code base and neither needs to see the other set's code.
In addition, if you open a VS Code project containing both sets of functions, it will take more time to load and lint them.
On the Google Cloud side, each function is deployed in its own container. However, because the packaging engine (Buildpacks) doesn't know which files a function needs, the whole code base is added to each container. When the instance starts, all of that code is loaded: the more code you have, the longer the initialization takes.
If you segregate your sets of functions into different folders in your project, only the code for set A will be embedded in the containers of A's functions, and the same for B.
Now, of course, if you put all the functions at the same level even though they don't use the same data, the same code, and so on, it's:
a mess to understand which function does what;
a mess in the container, which loads far too much.
So it's not a great code-base design, but that's beyond the "Google Cloud" topic; it's an engineering choice.
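If the two sets live in separate folders, one way to wire this up (on recent Firebase CLI versions that support multiple codebases) is to point firebase.json at each folder; the folder and codebase names here are purely illustrative:

{
  "functions": [
    { "source": "functions-a", "codebase": "set-a" },
    { "source": "functions-b", "codebase": "set-b" }
  ]
}

Each folder then keeps its own package.json, so set A's image-manipulation libraries never end up in the containers of set B's functions.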
Initially I was really confused about GCP projects vs VS Code IDE projects...
On the question of how cloud functions are "grouped" into containers during deployment: I strongly believe that each cloud function "image" is deployed into its own dedicated, separate container in GCP. I think Guillaume described it absolutely correctly. At the same time, the "source" code packed into an "image" might contain a lot of redundancy, and there may be plenty of resources which are not used by the given cloud function; it may be a good idea to minimize that.
I would also like to suggest that neither the development nor the deployment process should depend on the client-side IDE, and ideally deployment should not happen from the client machine at all, to eliminate any local configuration/version variability between developers. If we work together, I may use vi, you VS Code, and Guillaume GoLand, for example. There should be no difference in deployment, as the deployment process should take all code from the (origin/remote) git repository rather than from a local machine.
In terms of "packaging" - for every cloud function it may be useful to "logically" consolidate all required code (and other files), so that all required files are archived together on deployment, and pushed into a dedicated GCS bucket. And exclude from such "archives" any not used (not required) files. In that case we might have many "archives" - one per cloud function. The deployment process should redeploy only modified "archives", and don't touch unmodified cloud functions.

Is there a way to add/modify functions to a running firebase emulator?

I am using the Firebase emulator to develop and test cloud functions. Every time I modify an existing function or want to add a new one, I essentially shut down the emulator, deploy the functions, and restart the emulator. In this process I lose all the data in the local Firestore database (which is part of the emulator). Is there a way to deploy modifications to existing functions, as well as new functions, without shutting down the emulator?
It seems to depend on what you are deploying. There is a note in the Firebase documentation:
Note: Code changes you make during an active session are automatically reloaded by the emulator. If your code needs to be transpiled (TypeScript, React) make sure to do so before running the emulator.
So generally you can run the emulator and change the code without stopping it, with the exception of the languages mentioned in the note, where you need to re-run the build step before the emulator picks up the new output.
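If losing the local Firestore data on restart is the bigger problem, note that the emulator can also persist data between runs; the directory name below is just an example:

firebase emulators:start --import=./emulator-data --export-on-exit

With --export-on-exit the emulator writes its state to the directory on shutdown, and --import loads it back on the next start, so function changes no longer cost you your test data.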

Firebase functions in same project but multiple apps

In the same Firebase project I have a Node app with several functions and another app with only scheduled functions (because, for some reason, I encountered side effects when they were deployed together in the same app).
Each time I deploy the app with only the scheduled functions, it tells me that the other functions are not present in the source code (obviously) and asks whether I want to delete them.
Is there a way to tag functions as permanent and avoid having to choose not to delete them every time?
When you deploy Cloud Functions through the Firebase CLI, it expects you to pass it an index.js/index.ts that contains all the functions for the entire project.
There is no way to tag certain Cloud Functions as permanent. In situations such as yours I usually tell the CLI explicitly which functions I'm deploying, with firebase deploy --only functions:function1,function2.
For more on this option, see the reference documentation on deploying specific functions. The option to group the functions sounds especially useful for your scenario, as you could group them by app.
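As a sketch of that grouping (the file and group names here are invented for illustration), the top-level index.js can namespace each app's functions:

// index.js
exports.app = require('./app'); // the regular functions
exports.scheduled = require('./scheduled'); // the scheduled-only functions

Grouped functions deploy under prefixed names such as scheduled-myJob, and deploying one group leaves the other group's functions untouched:

firebase deploy --only functions:scheduled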

Is it possible to fetch and use a file from cloud storage when deploying a cloud function

I have a firebase function that makes use of a SQLite database (read-only) which is currently uploaded along with the function.
The problem is that the db file is quite large and gets uploaded every time the function is changed. Is there a way to fetch this file from Cloud Storage during the installation process (during firebase deploy), without hard-coding the URL in the source files?
What you're trying to do is problematic because your code in Cloud Functions may actually be running on any number of server instances, determined by the load on your project. As such, downloading a file once at deployment time isn't going to naturally reach all the instances that may be created or destroyed at any given moment.
It's far better to keep doing what you're doing, and include your extra data during deployment. When a new instance is spun up to handle events for your function, the file will be immediately ready to help service requests.
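For completeness: if the file really had to live in Cloud Storage, the download would have to happen once per instance rather than once per deploy, e.g. lazily at cold start. A minimal sketch, assuming a purely illustrative bucket and object name:

const functions = require('firebase-functions');
const { Storage } = require('@google-cloud/storage');
const path = require('path');

const DB_PATH = path.join('/tmp', 'data.sqlite'); // /tmp is the instance's writable disk
let dbReady; // cached per instance, so the download runs once per cold start

function ensureDb() {
  if (!dbReady) {
    dbReady = new Storage()
      .bucket('my-assets-bucket') // hypothetical bucket
      .file('data.sqlite') // hypothetical object
      .download({ destination: DB_PATH });
  }
  return dbReady;
}

exports.query = functions.https.onRequest(async (req, res) => {
  await ensureDb();
  // ...open DB_PATH read-only and answer the request...
  res.send('ok');
});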

Travis and Firebase: deploy only changed functions

I'm using Travis to automatically deploy my Firebase hosted website and cloud functions as I push to GitHub, as detailed here. However, even for my small website with a limited amount of cloud functions, deploying all of the functions takes quite a long time. Were I deploying manually, I would be able to use --only to specify precisely those functions that I actually changed. Is there a way to make this information available to Travis, so that only the necessary functions are rebuilt?
https://m.youtube.com/watch?v=iyGHW4UQ_Ts (minute 30 and following)
The speaker solves the problem by copying all functions to a cloud bucket and then diffing every file. This works well if all your logic is in one file, but that is not what you want for larger projects. For my own project I used webpack to create one file per function, including its imports. Then I generate an MD5 hash for that file and save it to a functions-lock.json. On the next run I can easily check against the old hash value and deploy only the changed functions. The CI should manage the state of the lock file by uploading it to the cloud or doing some git magic. A sketch of the hashing step follows.
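A minimal sketch of that step (the dist/ layout and file names are assumptions, not the exact setup):

const crypto = require('crypto');
const fs = require('fs');

// One webpack bundle per function, e.g. dist/sendEmail.js
const LOCK = 'functions-lock.json';
const lock = fs.existsSync(LOCK) ? JSON.parse(fs.readFileSync(LOCK, 'utf8')) : {};

const changed = [];
for (const file of fs.readdirSync('dist')) {
  const hash = crypto.createHash('md5').update(fs.readFileSync(`dist/${file}`)).digest('hex');
  const name = file.replace(/\.js$/, '');
  if (lock[name] !== hash) changed.push(name);
  lock[name] = hash; // record the new hash for the next run
}

fs.writeFileSync(LOCK, JSON.stringify(lock, null, 2));
if (changed.length) {
  console.log(`firebase deploy --only functions:${changed.join(',')}`);
}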
Unfortunately this isn't going to be simple to do: the Firebase CLI deploys all of your functions because it's next to impossible to analyze the code and figure out which functions are impacted (you can require other files, you might have updated dependencies with no source files changed, etc.).
One hack I can think of would be to have named branches for functions or groups of functions. Then you could git push to the branch of the specific function you want to deploy, and have a script that uses the branch name as a signal to pass --only functions:<fnName> to the firebase deploy command. That's not the most glamorous solution, but, depending on how much this bugs you, it might help.
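On Travis that script could be as small as a single line, assuming branch names match function or group names exactly (TRAVIS_BRANCH is set by Travis; authentication is up to your setup):

firebase deploy --only functions:$TRAVIS_BRANCH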
This is a bit late, but the long deployment times have bothered us for a while now. Our solution is based on CircleCI, but it should be possible to adapt it to other CI systems.
First we get all the files changed in the last merged PR for our branch with:
git log -m -1 --name-only --pretty="format:" ${process.env.CIRCLE_SHA1}
CIRCLE_SHA1 is the SHA of the last merge commit, i.e. featurebranch -> master.
Then we get all the function filenames from our /functions/ directory and use madge to generate an array of all the dependencies those functions have.
Next we go through all the changed files we got from git and check whether their filename is part of the dependency array for a specific cloud function; if so, we add that cloud function to another array.
Once this is done we pretty much have an array of all cloud functions affected by the changed files, which we can then map to their actual cloud function names for deployment.
Now, instead of always deploying 75 cloud functions, which takes 45 minutes, we deploy maybe only 20.
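A condensed sketch of that pipeline (the paths and entry-point layout are illustrative, not our exact setup):

const madge = require('madge');
const { execSync } = require('child_process');

async function affected() {
  // Files touched by the last merge commit (same git command as above)
  const changed = execSync(
    `git log -m -1 --name-only --pretty=format: ${process.env.CIRCLE_SHA1}`
  ).toString().split('\n').filter(Boolean);

  // Dependency graph of every module under functions/src
  const graph = (await madge('functions/src')).obj();

  // A module is affected when its own file or any dependency changed;
  // in reality you would keep only the modules that are function entry points
  return Object.keys(graph).filter((fn) =>
    changed.some((f) => f.endsWith(fn) || graph[fn].some((dep) => f.endsWith(dep)))
  );
}

affected().then((files) => {
  const names = files.map((f) => f.replace(/\.(js|ts)$/, '').split('/').pop());
  if (names.length) {
    execSync(`firebase deploy --only functions:${names.join(',')}`, { stdio: 'inherit' });
  }
});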
