I'm currently working on a project with two different storage buckets (one in US Central and another in South Korea).
It would be more efficient, both cost-wise and speed-wise, to locate the functions that update storage usage in the same location as the storage.
So now I want to manage the function in two different locations. This function is meant to update Firestore whenever a new image is uploaded to a bucket.
This is the code that I thought would work (but actually doesn't):
exports.incrementUsedSize = functions.region('us-central1').storage.object().onFinalize((object) => { /* ... */ })
exports.incrementUsedSizeKr = functions.region('asia-northeast3').storage.object().onFinalize((object) => { /* ... */ })
But both of these are called whenever storage in the US or in South Korea is updated.
Is there a way to make these functions work in two locations without interfering with each other?
I mean, is there a way to restrict the scope of a function to the storage bucket in a specific location?
As per the documentation:
To set the region where a function runs, set the region parameter in the function
definition as shown:
exports.incrementUsedSize = functions
.region('us-central1')
.storage
.object()
.onFinalize((object) => {
// ...
});
The .region() call sets the location where your Cloud Function runs and has nothing to do with the bucket.
However, since, as you say, the buckets are two different buckets and not a single "dual-region" bucket, you should be able to deploy this way:
gcloud functions deploy incrementUsedSize \
--runtime nodejs10 \
--trigger-resource YOUR_TRIGGER_BUCKET_NAME \
--trigger-event google.storage.object.finalize
By specifying the trigger bucket, you can restrict which bucket invokes the function. There are more details in the documentation here.
If your question refers to Firebase Cloud Functions, then you can specify the bucket name as mentioned here:
exports.incrementUsedSize = functions.storage.bucket('bucketName').object().onFinalize(async (object) => {
// ...
});
Related
In my app, I'm using multiple RTDB instances, together with the RTDB management APIs, to try to dynamically balance the load.
My point is, because I don't know the future load, I will start with one RTDB instance and create additional instances if a specified usage threshold is exceeded.
In my case, this requires the following:
1. Create a new RTDB instance through the management API
2. Apply rules to that RTDB instance
3. Apply cloud functions to that RTDB instance
Steps 1 and 2 can be done programmatically inside a cloud function, but I'm having trouble with step 3.
Is this possible?
Are there any workarounds or future plans to support step 3?
I'm thinking about two options: deploying a function from a function, or allowing RTDB triggers to apply to every instance.
As you can check in the management API documentation, the response body of the create method contains a DatabaseInstance object with the following structure:
{
"name": string,
"project": string,
"databaseUrl": string,
"type": enum (DatabaseInstanceType),
"state": enum (State)
}
So, you can get the databaseUrl value, store it somewhere, and send it to your cloud function as a parameter later, assuming it is an HTTP function. In the function, all you have to do is use the following code to access it:
let app = admin.app();
let ref = app.database('YOUR_SECOND_INSTANCE_URL_HERE').ref();
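For example, a minimal HTTPS function could accept the instance URL as a request parameter. The parameter name and the `status` path below are assumptions for illustration:

```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.writeToInstance = functions.https.onRequest(async (req, res) => {
  // The caller passes the databaseUrl obtained from the create() response.
  const databaseUrl = req.query.databaseUrl;
  if (!databaseUrl) {
    res.status(400).send('Missing databaseUrl query parameter');
    return;
  }
  // Target the secondary instance instead of the default one.
  const ref = admin.app().database(databaseUrl).ref('status');
  await ref.set({ lastPing: Date.now() });
  res.send('ok');
});
```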
EDIT
For triggered functions this is not possible, since you would need to know the instance names or URLs when deploying the function in order to apply the trigger to them. If you already have the instance names, you could try something like this community answer, although I am not sure it will suit your app's needs.
My Firebase Cloud Function looks like:
exports.sendRequest = functions.firestore
.document('Links/{p_id}/Accepted/{uid}')
.onCreate((snap, context) => {
console.log('----------------start function--------------------')
return null;
})
It is being deployed to us-central1.
How can I change its location to asia-east2?
This is covered in the documentation for best practices for changing a region:
If you are changing the specified regions for a function that's
handling production traffic, you can prevent event loss by performing
these steps in order:
Rename the function, and change its region or regions as desired.
Deploy the renamed function, which results in temporarily running the same code in both sets of regions.
Delete the previous function.
This procedure is pretty straightforward.
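As a sketch of that procedure applied to the function above (the new name `sendRequestAsia` is an assumption; pick any name that differs from the old one):

```javascript
const functions = require('firebase-functions');

// Steps 1-2: deploy a renamed copy in the new region; for a while the
// same code runs in both regions, so no events are lost.
exports.sendRequestAsia = functions
  .region('asia-east2')
  .firestore.document('Links/{p_id}/Accepted/{uid}')
  .onCreate((snap, context) => {
    console.log('----------------start function--------------------');
    return null;
  });

// Step 3: once the renamed function is live, delete the old one, e.g.:
//   firebase functions:delete sendRequest --region us-central1
```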
Is there a way of defining which region to use when deploying a function to Firebase using either the firebase.json or the .firebaserc file? The documentation around this doesn't seem to be clear.
I'm deploying the Firebase functions using GitHub Actions and want to avoid adding the region in the code itself if possible.
Suggestions?
It's not possible using the configurations you mention. The region must be defined synchronously in code using the provided API. You could perhaps pull in an external JSON file using fs.readFileSync() at the global scope of index.js, parse its contents, and apply them to the function builder. (Please note that you have to do this synchronously - you can't use a method that returns a promise.)
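A minimal sketch of that approach, assuming a region.json file sitting next to index.js with contents like {"region": "asia-east2"} (your CI workflow could write this file before deploying):

```javascript
const fs = require('fs');
const path = require('path');
const functions = require('firebase-functions');

// Read the region synchronously at the global scope, before any
// function builders run. No promises allowed here.
const { region } = JSON.parse(
  fs.readFileSync(path.join(__dirname, 'region.json'), 'utf8')
);

exports.myFunction = functions.region(region).https.onCall((data) => {
  // ...
});
```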
I've targeted this problem using the native functions config.
Example:
firebase functions:config:set functions.region=southamerica-east1
Then in the functions declaration I'd do the following:
const { region } = functions.config().functions;
exports.myFunction = functions.region(region).https.onCall((data) => {
// do something
});
That way, each time I need a different region, all it takes is a new config set.
I have been looking around for the ways to retrieve the bucket name in Firebase functions.
The documentation says you can do something like:
functions.storage.bucket("bucket_name").object()...
However, in all the examples I have seen, the bucket name is hard-coded. In my project, images are stored in buckets named after user IDs. So when a write event is triggered, I want to retrieve this user ID. Is there a way to do it? Something like this (below)?
exports.optimizeImages = functions.storage.bucket("{uid}").object().onFinalize(async (object) => {
const uid = ???
...
})
When you declare a storage trigger, you are only attaching it to a single bucket. If you want to trigger on multiple buckets, you have to declare multiple triggers. As such, each trigger function should always know which bucket it was fired for; you can simply hard-code it in the function implementation (it will be the same value you specified in the function builder, so just reuse that value).
If you must share the exact same function implementation across triggers on multiple buckets, you can certainly read the object.bucket property. That seems like a decent way to go.
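For instance, a shared handler could pull the user ID out of the event payload, assuming per-user buckets where the bucket name is the user ID (the function and trigger names below are hypothetical):

```javascript
// Shared handler: with per-user buckets, the bucket name doubles as the uid.
function handleImage(object) {
  const uid = object.bucket;    // bucket name is the user id by convention
  const filePath = object.name; // path of the uploaded file inside the bucket
  // ...optimize the image for `uid` here...
  return { uid, filePath };
}

// Hypothetical per-bucket triggers reusing the same handler:
// exports.optimizeImagesAlice = functions.storage
//   .bucket('alice-uid').object().onFinalize(handleImage);
// exports.optimizeImagesBob = functions.storage
//   .bucket('bob-uid').object().onFinalize(handleImage);
```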
I've created a project in Firebase Storage with multiple buckets, something like this:
Project Storage:
Bucket1
File
File
Bucket2
File
Bucket3
File
File
I want to have something like this:
exports.fun1 = functions.storage.object().onFinalize(() => {}) // for Bucket1
exports.fun2 = functions.storage.object().onFinalize(() => {}) // for Bucket3
Is this possible?
And how can I achieve it?
The documentation is pretty clear:
Use functions.storage to create a function that handles Cloud Storage
events. Depending on whether you want to scope your function to a
specific Cloud Storage bucket or use the default bucket, use one of
the following:
functions.storage.object() to listen for object changes on the default storage bucket.
functions.storage.bucket('bucketName').object() to listen for object changes on a specific bucket.
You can use the second form to specify the bucket.
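For the layout above, that could look something like this sketch (the export names are assumptions, and 'Bucket1'/'Bucket3' must be replaced with the buckets' actual names):

```javascript
const functions = require('firebase-functions');

// One function per bucket: uploads to Bucket2 trigger neither of these.
exports.onBucket1Upload = functions.storage
  .bucket('Bucket1') // assumed to be the real bucket name
  .object()
  .onFinalize(async (object) => {
    console.log(`New file in Bucket1: ${object.name}`);
  });

exports.onBucket3Upload = functions.storage
  .bucket('Bucket3') // assumed to be the real bucket name
  .object()
  .onFinalize(async (object) => {
    console.log(`New file in Bucket3: ${object.name}`);
  });
```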