This question already has answers here:
Can you call out to FFMPEG in a Firebase Cloud Function
(7 answers)
Closed 2 years ago.
I want to use ffmpeg as described here:
https://github.com/firebase/functions-samples/blob/master/ffmpeg-convert-audio/functions/index.js
ffmpeg-static contains binaries. I am using Windows 10 and want to upload the code using firebase deploy.
However, I do not understand what I need to do to get this to work. The binaries that will be installed on my PC are of course different from those needed by the Cloud Functions environment (https://www.npmjs.com/package/ffmpeg-static).
How can I do this?
When you deploy Cloud Functions, the deployment process doesn't include the contents of node_modules from your local machine. Google infrastructure will effectively run npm install for you and rebuild the entire thing in an environment that matches your target Cloud Functions runtime. As such, you will get the correct binaries with your function. It doesn't matter that you developed on Windows, and there is (in theory) nothing you have to do to get the function to work, as long as you wrote platform-independent code.
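As a rough illustration (the function name ffmpegVersion is just a placeholder, not part of the linked sample), the function code can simply require ffmpeg-static and use whatever path it resolves to at runtime; the Linux binary is fetched when npm install runs on Google's infrastructure, not copied from your Windows machine:

const functions = require('firebase-functions');
// Recent versions of ffmpeg-static export the binary path directly;
// older versions exposed it as require('ffmpeg-static').path.
const ffmpegPath = require('ffmpeg-static');
const { execFile } = require('child_process');

// Hypothetical HTTP function that just reports the ffmpeg version,
// to verify the bundled binary works in the deployed environment.
exports.ffmpegVersion = functions.https.onRequest((req, res) => {
  execFile(ffmpegPath, ['-version'], (err, stdout, stderr) => {
    if (err) {
      res.status(500).send(stderr || err.message);
      return;
    }
    res.send(stdout);
  });
});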
If you have a specific problem, your question should show what that problem is, including any code and error messages one would use for debugging.
This question already has an answer here:
Karate summary reports not showing all tested features after upgrade to 1.0.0
(1 answer)
Closed 1 year ago.
I've recently discovered the Karate framework. Great work! I've extended the standalone jar with custom Java helpers to be able to access DB and SSH servers.
I transfer logs and result files from the SSH server to the server on which I run Karate.
I'd like to store these files alongside the HTML report. But as long as the test runs, the report folder has a temporary name; it is renamed at the end of the run.
Is there a way to get this temporary name (or path) to be able to copy files into it?
Best regards,
Patrice from France
This is a bit of a surprise to me, because as far as I know the folder is target/karate-reports. Maybe this is some weird thing that happens only on Windows. I'd request you to see if you can debug it and contribute code to the project - that would be really great!
Since you are writing Java code and adding it to the classpath (I guess), you should probably use the Runner API, which gives you more control and also the option to disable Karate's attempt to "back up" the existing reports folder. This is explained here: https://stackoverflow.com/a/66685944/143475
There is also an option to customize the name of the reports folder: reportDir().
For this kind of control, we recommend using Karate as a Java / Maven project, but you can decide what works best for you.
This question already has answers here:
Get code from firebase console which I deployed earlier
(6 answers)
Closed 1 year ago.
A while ago I created functions (an API) on my Firebase project. I would like to add more, but I don't have the files locally, nor on GitHub.
How can I get them from the firebase server?
You can download the source of the scripts from Google Cloud Functions:
https://console.cloud.google.com/functions/list
You can then access each function independently from that list.
This will take you to a window with several tabs.
Click the SOURCE tab and then Download Zip.
*Depending on your build, it may be missing some components and modules, but it is a way to recover them in a practical manner.
In the same way that a Cloud Function can run ffmpeg, is it possible to download and run aria2c? If so, how?
PS. Cloud Run isn't an option right now.
Edit: Something like this https://blog.qbatch.com/aws-lambda-custom-binaries-support-available-for-rescue-239aab820d60
Executing custom binaries like aria2c in the runtime is not supported in Cloud Functions.
You can find a hacky solution here: Can you call out to FFMPEG in a Firebase Cloud Function. This involves having a statically linked binary (so you might need to recompile aria2c, as I'm assuming it won't be statically linked by default and will rely on other system packages like libc, libxxxx...) and bundling this binary with your function deployment package.
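As a sketch of that approach, assuming you have managed to produce a statically linked aria2c binary and committed it to your functions source at bin/aria2c (a hypothetical layout), you could spawn it much like the ffmpeg sample does:

const functions = require('firebase-functions');
const { execFile } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

// Assumes a statically linked aria2c binary was deployed with the function
// at functions/bin/aria2c (hypothetical layout).
const bundledBinary = path.join(__dirname, 'bin', 'aria2c');

exports.download = functions.https.onRequest((req, res) => {
  // Copy to /tmp (the writable part of the filesystem) and make sure it is
  // executable, in case the execute bit was not preserved when the source
  // was packaged for deployment.
  const runnableBinary = path.join(os.tmpdir(), 'aria2c');
  if (!fs.existsSync(runnableBinary)) {
    fs.copyFileSync(bundledBinary, runnableBinary);
    fs.chmodSync(runnableBinary, 0o755);
  }

  execFile(runnableBinary, ['--dir', os.tmpdir(), req.query.url], (err, stdout, stderr) => {
    if (err) {
      res.status(500).send(stderr || err.message);
      return;
    }
    res.send(stdout);
  });
});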
You should really consider using Cloud Run for this use case. Cloud Run gives you the flexibility of creating your own container image that can include the binaries and libraries you want.
You can find a tutorial that bundles custom binaries on Cloud Run here: https://cloud.google.com/run/docs/tutorials/system-packages
I have noticed a strange behaviour in Firebase Cloud Functions: if I try to break my code up into separate files, I start to get this error:
info: Worker for app closed due to file changes.
I just created a simple Express server, hosted it in a Cloud Function, and was emulating it locally as described here:
https://www.youtube.com/watch?v=LOeioOKUKI8&t=244s
I even wrote tests for the same. Everything was working fine until I split the source code of my express app into individual routes (contained in separate .js files).
The only thing that message means is that the emulator noticed when a code file changed, and performed a hot reload of that code. Note that it's just an "info" level message, not an error and not even a warning.
If your project is not working the way you expect, then edit your question with the symptoms you're observing, along with the code.
I'm using Travis to automatically deploy my Firebase hosted website and cloud functions as I push to GitHub, as detailed here. However, even for my small website with a limited amount of cloud functions, deploying all of the functions takes quite a long time. Were I deploying manually, I would be able to use --only to specify precisely those functions that I actually changed. Is there a way to make this information available to Travis, so that only the necessary functions are rebuilt?
https://m.youtube.com/watch?v=iyGHW4UQ_Ts
(see minute 30 and following)
This guy solves the problem by copying all functions to a cloud bucket and then making a diff for every file. This works well if all your logic is in one file, but that is not what you want for larger projects. For my own project I used webpack to create one file for each function that includes its imports. Then I generate an MD5 hash for that file and save it to a functions-lock.json. On the next run I can easily check against the old hash value and only deploy the changed functions, as sketched below. The CI should manage the state of the lock file by uploading it to the cloud or doing some git magic.
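Roughly, the hashing step looks like this (the dist/ layout, file names, and lock-file format are specific to my setup, not any official tooling); it prints the --only argument for the functions whose bundles changed:

// detect-changed-functions.js - hypothetical helper, run in CI after webpack bundling.
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

const bundleDir = 'dist';                 // one webpack bundle per function, e.g. dist/resizeImage.js
const lockFile = 'functions-lock.json';   // previous hashes, persisted by the CI between runs

const previous = fs.existsSync(lockFile)
  ? JSON.parse(fs.readFileSync(lockFile, 'utf8'))
  : {};

const current = {};
const changed = [];

for (const file of fs.readdirSync(bundleDir).filter((f) => f.endsWith('.js'))) {
  const name = path.basename(file, '.js');          // function name derived from the bundle name
  const contents = fs.readFileSync(path.join(bundleDir, file));
  const hash = crypto.createHash('md5').update(contents).digest('hex');
  current[name] = hash;
  if (previous[name] !== hash) changed.push(name);
}

// Write the new lock file so the CI can store it for the next run.
fs.writeFileSync(lockFile, JSON.stringify(current, null, 2));

if (changed.length) {
  // e.g. --only functions:resizeImage,functions:sendMail
  console.log('--only ' + changed.map((n) => 'functions:' + n).join(','));
} else {
  console.log('No function bundles changed, nothing to deploy.');
}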
Unfortunately this isn't going to be simple to do -- the Firebase CLI deploys all of your functions because it's next-to-impossible to just analyze the code and figure out which functions are impacted (since you can require other files, you might have updated dependencies but no files changed, etc.).
One thing I can think of that might be a hack would be to have named branches for functions or groups of functions. Then you could git push to the branch of the specific function you want to deploy, and have a script that uses the branch name as a signal to pass the --only functions:<fnName> to the firebase deploy command. That's not the most glamorous solution, but, depending on how much this bugs you, it might help.
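A rough sketch of that idea (the fn-<functionName> branch naming convention and the script itself are purely illustrative; TRAVIS_BRANCH is set by Travis, and a FIREBASE_TOKEN secret is assumed to be configured for the build):

// deploy-by-branch.js - hypothetical deploy script: branches named fn-<functionName>
// deploy only that function; anything else deploys all functions.
const { execSync } = require('child_process');

const branch = process.env.TRAVIS_BRANCH || '';
const match = branch.match(/^fn-(.+)$/);

const target = match ? `--only functions:${match[1]}` : '--only functions';

execSync(`firebase deploy ${target} --token "${process.env.FIREBASE_TOKEN}"`, {
  stdio: 'inherit',
});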
So this is a bit late but the long deployment times have bothered us for a while now.
Our solution is based on CircleCI but it should be possible to adapt.
First we get all changed files in the last merged PR for our branch with
git log -m -1 --name-only --pretty="format:" ${process.env.CIRCLE_SHA1}
CIRCLE_SHA1 is the SHA of the last merge commit, i.e. featurebranch -> master.
Then we get all the function filenames from our /functions/ directory and use madge to generate an array of all the dependencies those functions have.
Next we go through all changed files that we got from git and check if their filename is part of the dependency array for a specific cloud function; if so, we add that cloud function to another array.
Once this is done, we pretty much have an array of all cloud functions that have been affected by the change of a specific file, which we can then map to their actual cloud function names for deployment.
Now, instead of always deploying 75 cloud functions, which takes 45 minutes, we only deploy maybe 20.
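A condensed sketch of that pipeline (the functions/ layout, one entry file per function, and the loose path matching are assumptions specific to our setup):

// hypothetical CI step: figure out which cloud functions to deploy
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');
const madge = require('madge');

(async () => {
  // 1. Files touched by the last merged PR (CIRCLE_SHA1 is the merge commit).
  const changedFiles = execSync(
    `git log -m -1 --name-only --pretty="format:" ${process.env.CIRCLE_SHA1}`
  )
    .toString()
    .split('\n')
    .filter(Boolean);

  // 2. For every function entry file in functions/, build its dependency list with madge.
  const functionFiles = fs
    .readdirSync('functions')
    .filter((f) => f.endsWith('.js'));

  const toDeploy = [];
  for (const file of functionFiles) {
    const graph = await madge(path.join('functions', file));
    const dependencies = Object.keys(graph.obj()); // entry file plus everything it requires

    // 3. If any changed file is part of this function's dependency tree, deploy it.
    //    Paths are compared loosely here; a real script would normalise them.
    const affected = changedFiles.some((changed) =>
      dependencies.some((dep) => changed.endsWith(dep))
    );
    if (affected) toDeploy.push(path.basename(file, '.js'));
  }

  if (toDeploy.length) {
    console.log('--only ' + toDeploy.map((n) => `functions:${n}`).join(','));
  }
})();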