Create multiple onFinalize triggers for the buckets in the same project - firebase

I've created a project in Firebase Storage with multiple buckets, something like this:
Project Storage:
  Bucket1
    File
    File
  Bucket2
    File
  Bucket3
    File
    File
I want to have something like this:
exports.fun1 = functions.storage.object().onFinalize(...) // for Bucket1
exports.fun2 = functions.storage.object().onFinalize(...) // for Bucket3
Is this possible?
And how can I achieve it?

The documentation is pretty clear:
Use functions.storage to create a function that handles Cloud Storage
events. Depending on whether you want to scope your function to a
specific Cloud Storage bucket or use the default bucket, use one of
the following:
functions.storage.object() to listen for object changes on the default storage bucket.
functions.storage.bucket('bucketName').object() to listen for object changes on a specific bucket.
You can use the second form to specify the bucket.
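For example, a minimal sketch for the question above, assuming placeholder bucket names bucket1-name and bucket3-name (substitute your real bucket names):

const functions = require('firebase-functions');

// Fires only for objects finalized in the first bucket (name is a placeholder).
exports.handleBucket1Upload = functions.storage.bucket('bucket1-name').object()
  .onFinalize(async (object) => {
    console.log('Finalized in Bucket1:', object.name);
  });

// A separate trigger, scoped to the third bucket.
exports.handleBucket3Upload = functions.storage.bucket('bucket3-name').object()
  .onFinalize(async (object) => {
    console.log('Finalized in Bucket3:', object.name);
  });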

Related

Different Google Cloud Function Call For Different Storage?

I'm currently working on a project where there are two different storage buckets (one in US central and another in S. Korea).
It would be more efficient, both cost-wise and speed-wise, to locate the functions that update storage usage in the same location as the storage.
But now I want to manage the function in two different locations. The function is meant to update Firestore whenever a new image is uploaded to a bucket.
This is the code that I thought would work (but actually doesn't):
exports.incrementUsedSize = functions.region('us-central1').storage.object().onFinalize
exports.incrementUsedSizeKr = functions.region('asia-northeast3').storage.object().onFinalize
But both of these are called whenever a bucket in the US or in S. Korea is updated.
Is there a way to make these functions work in two locations without interfering with each other?
I mean, is there a way to restrict the scope of each function to the storage at a specific location?
As per the documentation:
To set the region where a function runs, set the region parameter in the function
definition as shown:
exports.incrementUsedSize = functions
  .region('us-central1')
  .storage
  .object()
  .onFinalize((object) => {
    // ...
  });
The .region() sets the location for your cloud function and has nothing to do with the bucket.
However, since the buckets are (as you say) two different buckets rather than a single "dual-region" bucket, you should be able to deploy this way:
gcloud functions deploy incrementUsedSize \
  --runtime nodejs10 \
  --trigger-resource YOUR_TRIGGER_BUCKET_NAME \
  --trigger-event google.storage.object.finalize
By specifying the trigger bucket, you can restrict which bucket invokes the function. More details in the documentation here.
If your question refers to Firebase Cloud Function then you can specify the bucket name as mentioned here:
exports.incrementUsedSize = functions.storage.bucket('bucketName').object().onFinalize(async (object) => {
// ...
});
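Combining the two, a sketch of what the two functions from the question could look like (the bucket names are placeholders):

exports.incrementUsedSize = functions
  .region('us-central1')
  .storage.bucket('my-us-bucket')
  .object()
  .onFinalize(async (object) => {
    // update Firestore usage counters for the US bucket here
  });

exports.incrementUsedSizeKr = functions
  .region('asia-northeast3')
  .storage.bucket('my-kr-bucket')
  .object()
  .onFinalize(async (object) => {
    // update Firestore usage counters for the Korean bucket here
  });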

Setting deploy region in firebase configuration file

Is there a way of defining which region to use when deploying a function to firebase using either the firebase.json or the .firebaserc files? The documentation around this doesn't seem to be clear.
I'm deploying the Firebase functions using GitHub Actions and want to avoid adding the region in the code itself if possible.
Suggestions?
It's not possible using the configurations you mention. The region must be defined synchronously in code using the provided API. You could perhaps pull in an external JSON file using fs.readFileSync() at the global scope of index.js, parse its contents, and apply them to the function builder. (Please note that you have to do this synchronously - you can't use a method that returns a promise.)
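A minimal sketch of that idea, assuming a hypothetical deploy-config.json next to index.js containing something like {"region": "europe-west1"}:

const fs = require('fs');
const path = require('path');
const functions = require('firebase-functions');

// Read the region synchronously at module load time (deploy-config.json is hypothetical).
const { region } = JSON.parse(
  fs.readFileSync(path.join(__dirname, 'deploy-config.json'), 'utf8')
);

exports.myFunction = functions.region(region).https.onRequest((req, res) => {
  res.send('ok');
});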
I've targeted this problem using the native functions config.
Example:
firebase functions:config:set functions.region=southamerica-east1
Then in the functions declaration I'd do as follows:
const { region } = functions.config().functions;
exports.myFunction = functions.region(region).https.onCall((data) => {
// do something
});
That way, each time I need a different region, I only need to set a new config value.

Is there a way to get bucket name in Firebase functions?

I have been looking around for ways to retrieve the bucket name in Firebase functions.
The documentation says you can do something like:
functions.storage.bucket("bucket_name").object()...
However, in all examples I have seen, the "bucket name" is hard-coded. In my project, images are stored in buckets named after user IDs. So when a write event is triggered, I want to retrieve that user ID. Is there a way to do it? Something like this (below)?
exports.optimizeImages = functions.storage.bucket("{uid}").object().onFinalize(async (object) => {
  const uid = ???
  ...
})
When you declare a storage trigger, you are only attaching it to a single bucket. If you want to trigger on multiple buckets, you have to declare multiple triggers. As such, each trigger function always knows which bucket it was fired for - you can simply hard-code it in the function implementation (it will be the same as what you specified in the function builder - just reuse that value).
If you must share the exact same function implementation across triggers on multiple buckets, you can certainly read the object.bucket property. That seems like a decent way to go.
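For that second case, a sketch of a shared handler attached to several per-user buckets (the bucket names and export names here are placeholders):

const functions = require('firebase-functions');

// Shared implementation: the bucket name (a user ID in this setup) comes from the event object.
const optimizeImages = async (object) => {
  const uid = object.bucket;
  console.log('Optimizing images for user', uid, 'file', object.name);
};

// One trigger per bucket, all reusing the same handler.
exports.optimizeImagesUserA = functions.storage.bucket('user-a-uid').object().onFinalize(optimizeImages);
exports.optimizeImagesUserB = functions.storage.bucket('user-b-uid').object().onFinalize(optimizeImages);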

Cloud function trigger for specific filename or pattern in firebase? [duplicate]

I have tested Firebase functions for storage successfully. However, I haven't seen any hint on how to invoke the function only when a file is added to a folder inside my bucket. The only hint I have seen about scoping the function is for different buckets here.
Is it possible to scope the function to a folder inside my bucket, and if yes, how?
Or would I need to have multiple buckets instead of folders to separate different tasks?
firebaser here
There is currently no way to trigger Cloud Functions only for writes in a specific folder in Cloud Storage. If you want to limit the triggering to a subset of the files in your project, putting them in a separate bucket is currently the only way to accomplish that.
As a workaround, you can write metadata about the image to a supported database (Realtime Database or Cloud Firestore), and use that to trigger a Cloud Function that transforms the file. This is what I usually do, as it also allows me to capture the metadata in a format that can be queried.
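A sketch of that workaround, assuming a hypothetical Firestore collection named uploads to which the client writes the bucket and path of each finished upload:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Trigger on the metadata document instead of on the Storage event.
exports.onUploadMetadata = functions.firestore
  .document('uploads/{uploadId}')
  .onCreate(async (snap) => {
    const { bucket, path } = snap.data(); // fields the client is assumed to write
    const file = admin.storage().bucket(bucket).file(path);
    // ... transform or inspect the file here
    console.log('Processing', path, 'from bucket', bucket);
  });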
You may check inside the function: get the file path or directory from the event payload and check whether it is in the folder you want.
const path = require('path');

// With the 1.0+ SDK, `object` is the argument passed to onFinalize;
// with the older SDK the same value was available as event.data.name.
const filePath = object.name;
const fileDir = path.dirname(filePath);
// If you only want to handle the "posts" folder:
if (fileDir !== 'posts') {
  console.log('This is not in the posts folder');
  return null;
}
Please note that Google Cloud Storage uses a flat namespace, so practically there are no directories. If you're storing a file like /users/profile_pictures/photo.jpg, that whole path is simply part of the file name. In reality there are no directories, just files, which is why there cannot be a trigger on a directory per se. Of course you can work around that by checking the name of the file itself and seeing whether it starts with a particular prefix.
export const generateThumbnailTrigger = functions.storage.object().onFinalize(async (object) => {
  const filePath = object.name;
  if (filePath?.startsWith('temp/')) {
    console.log('start doing something with the file');
  } else {
    return false;
  }
});

