Setting deploy region in Firebase configuration file

Is there a way of defining which region to use when deploying a function to firebase using either the firebase.json or the .firebaserc files? The documentation around this doesn't seem to be clear.
Deploying the firebase functions using GitHub Actions and want to avoid adding the region in the code itself if possible.
Suggestions?

It's not possible using the configurations you mention. The region must be defined synchronously in code using the provided API. You could perhaps pull in an external JSON file using fs.readFileSync() at the global scope of index.js, parse its contents, and apply them to the function builder. (Please note that you have to do this synchronously - you can't use a method that returns a promise.)
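For illustration, a minimal sketch of that approach, assuming a region.json file (a hypothetical name) deployed alongside index.js:

const fs = require('fs');
const functions = require('firebase-functions');

// Read synchronously at the global scope; a promise-based load
// would not resolve in time for the function builder.
const { region } = JSON.parse(fs.readFileSync('./region.json', 'utf8'));

exports.myFunction = functions.region(region).https.onCall((data) => {
  // ...
});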

I've addressed this problem using the native functions config.
Example:
firebase functions:config:set functions.region=southamerica-east1
Then in the functions declaration I'd do as follows:
const { region } = functions.config().functions;

exports.myFunction = functions.region(region).https.onCall((data) => {
  // do something
});
That way, each time I need a different region, all it takes is a new config set.

How to not to expose some variables from script setup to the template?

As it mentions here, all the top-level bindings introduced in script setup are exposed to the template.
Question: how do I exclude some of them? Something like private variables that are only available inside script setup but not passed to the template.
There is no way to do that with script setup. For advanced use cases use a setup function, where you can choose what to expose:
https://vuejs.org/api/composition-api-setup.html
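As a rough sketch of that approach (variable names are illustrative), only what setup() returns reaches the template:

<script>
import { ref } from 'vue'

export default {
  setup() {
    const secret = ref('only visible inside setup') // not returned, so not exposed
    const visible = ref('usable in the template')   // returned, so exposed

    return { visible }
  }
}
</script>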
You can't hide things on the client side; you'll need a server (SSR with Nuxt, for example) or a middleware of some sort (backend proxy, serverless function, etc.).
Here are two answers related to this question:
How to make a private call when using SSR Nuxt?
CORS error and secret keys in network tab
If by private you mean OOP-style private variables, then that's another subject: https://www.sitepoint.com/javascript-private-class-fields/#privateclassfields
If you only want to avoid exposing the variables, then you'll need to use a regular setup() inside a plain script block and return only some of them. I don't think there is a hide() function of any sort in Vue 3.
Have you thought about using defineExpose to define only what you want to expose?
<script setup>
import { ref } from 'vue'

const a = 1
const b = ref(2)

defineExpose({
  a,
  b
})
</script>
ref. https://vuejs.org/api/sfc-script-setup.html#defineexpose

Different Google Cloud Function Call For Different Storage?

I'm currently working on a project where there are two different storage buckets (one in US central and another in S. Korea).
It would be more efficient, both cost-wise and speed-wise, to locate the functions for updating storage usage in the same location as the storage.
But now I want to manage the function at two different locations. This function is meant to update Firestore whenever a new image is uploaded to a storage bucket.
This is the code that I thought would work (but actually doesn't):
exports.incrementUsedSize = functions.region('us-central1').storage.object().onFinalize
exports.incrementUsedSizeKr = functions.region('asia-northeast3').storage.object().onFinalize
But both of these are called whenever the storage at US or S. Korea is updated.
Is there a way to make these functions work at the two locations without interfering with each other?
I mean, is there a way to restrict the scope of each function to the storage at a specific location?
As per the documentation:
To set the region where a function runs, set the region parameter in the function
definition as shown:
exports.incrementUsedSize = functions
  .region('us-central1')
  .storage
  .object()
  .onFinalize((object) => {
    // ...
  });
The .region() sets the location for your cloud function and has nothing to do with the bucket.
However, since, as you say, the buckets are different and not a single "dual-region" bucket, you should be able to deploy this way:
gcloud functions deploy incrementUsedSize \
  --runtime nodejs10 \
  --trigger-resource YOUR_TRIGGER_BUCKET_NAME \
  --trigger-event google.storage.object.finalize
By specifying the trigger bucket, you should be able to restrict the invocation trigger. More details in the documentation here.
If your question refers to Firebase Cloud Functions, then you can specify the bucket name as mentioned here:
exports.incrementUsedSize = functions.storage.bucket('bucketName').object().onFinalize(async (object) => {
  // ...
});

Firebase RTDB load balancing with cloud functions

In my app, I'm using multiple RTDB instances, together with the RTDB management APIs, to try to dynamically balance the load.
So my point is: because I don't know the future load, I will start with one RTDB instance and create more if a specified usage threshold is exceeded.
In my case, this requires the following:
1. create a new RTDB instance through the management API
2. apply rules to that RTDB instance
3. apply cloud functions to that RTDB instance
Steps 1 and 2 can be done programmatically inside a cloud function, but I'm having trouble with step 3.
Is this possible?
Are there any workarounds or future plans to support it?
I'm thinking about two options: deploying a function from a function, or allowing RTDB triggers to apply to every instance.
As you can check here in the management API documentation, the response body of the create method returns a DatabaseInstance object which has the following structure:
{
  "name": string,
  "project": string,
  "databaseUrl": string,
  "type": enum (DatabaseInstanceType),
  "state": enum (State)
}
So, you can get the databaseUrl value, store it somewhere, and send it to your cloud function as a parameter later, assuming it is an HTTP function. In the function, all you have to do is use the following code to access it:
let app = admin.app();
let ref = app.database('YOUR_SECOND_INSTANCE_URL_HERE').ref();
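For example, a sketch of an HTTP function that receives the instance URL as a request parameter (the parameter name is illustrative):

const admin = require('firebase-admin');
const functions = require('firebase-functions');

admin.initializeApp();

// 'databaseUrl' is a hypothetical parameter; pass the databaseUrl value
// returned by the management API's create method.
exports.writeToInstance = functions.https.onRequest(async (req, res) => {
  const databaseUrl = req.query.databaseUrl;
  const ref = admin.app().database(databaseUrl).ref('some/path');
  await ref.set({ updatedAt: Date.now() });
  res.status(200).send('ok');
});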
EDIT
For triggered functions this is not possible, since you would need to know the instance names or URLs when deploying the function in order to apply the trigger to them. If you already have the instance names, you could try something like this community answer, although I am not sure it will suit your app's needs.

Google Cloud Functions onFinalize() context

How can I pass some metadata along with an object when uploading it to a bucket?
I'm using a separate bucket for image manipulations, since I can't trigger Cloud Functions only for a specific folder inside my working buckets, and thus I need to get the edited image back from that service bucket and then place it appropriately. It sounds trivial, but it turned out not to be.
That being said, I tried to get the context from .object().onFinalize((object, context) => {}):
{ eventId: '226356658372982',
  timestamp: '2018-10-11T09:17:07.052Z',
  eventType: 'google.storage.object.finalize',
  resource:
   { service: 'storage.googleapis.com',
     name: 'projects/_/buckets/bucket/objects/image.jpg',
     type: 'storage#object' },
  params: {} }
That wasn't very helpful though.
I can think of using object.name conditionals inside the working buckets as a last resort, but there should be a more civilized way to handle such situations.
If you want your storage trigger to handle only certain files added to your bucket, you will have to write code in your function to determine if it's a file that you want to process. This is commonly done by looking at the object's name, as you pointed out.
If you don't want to do that, you can attach metadata to a file at the time of upload. Since you haven't said which language or environment you're using to upload the file, I'll point you to the node.js documentation for upload(). Note that there is a metadata property of the optional options argument. Other platforms have a similar way of specifying metadata during upload.
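For instance, a sketch with the Node.js client (bucket and file names are hypothetical); note that custom key-value pairs go under a nested metadata property:

const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function uploadWithMetadata() {
  await storage.bucket('my-service-bucket').upload('/tmp/edited.jpg', {
    destination: 'edited/image.jpg',
    metadata: {
      // custom key-value pairs travel with the object and can be read
      // back from the object's metadata in the finalize trigger
      metadata: {
        originalBucket: 'my-working-bucket',
        originalPath: 'uploads/image.jpg',
      },
    },
  });
}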
The bottom line is that you will need to figure out in your function if you want to handle the file that's been finalized.

Is it possible to scope Firebase functions to a folder inside a storage bucket?

I have tested Firebase functions for storage successfully. However, I haven't seen any hint of how to invoke the function only when a file is added to a folder inside my bucket. The only hint I have seen about scoping the function is for different buckets here.
Is it possible to scope the function to a folder inside my bucket, and if yes, how?
Or would I need multiple buckets instead of folders to separate different tasks?
firebaser here
There is currently no way to trigger Cloud Functions only for writes in a specific folder in Cloud Storage. If you want to limit the triggering to a subset of the files in your project, putting them in a separate bucket is currently the only way to accomplish that.
As a workaround, you can write metadata about the image to a supported database (Realtime Database or Cloud Firestore), and use that to trigger a Cloud Function that transforms the file. This is what I usually do, as it also allows me to capture the metadata in a format that can be queried.
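A rough sketch of that workaround with Cloud Firestore (the collection and field names are hypothetical):

const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

// On upload, the client writes a doc like { path: 'posts/photo.jpg' }
// to an 'uploads' collection; the function triggers on that document
// instead of on the Storage object itself.
exports.transformUpload = functions.firestore
  .document('uploads/{docId}')
  .onCreate(async (snapshot) => {
    const { path } = snapshot.data();
    if (!path.startsWith('posts/')) {
      return null; // ignore files outside the folder we care about
    }
    // download the file from Storage and transform it here
  });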
You may check inside the function: get the filePath or fileDir in your function and check whether it is the folder you want.
const path = require('path');

const filePath = event.data.name;
const fileDir = path.dirname(filePath);

// If you want to check the posts folder only:
if (fileDir !== 'posts') {
  console.log('This is not in the posts folder');
  return null;
}
Please note that Google Cloud Storage works on a flat filesystem, so in practice there are no directories. If you store a file like /users/profile_pictures/photo.jpg, the path is simply part of the file name. In reality there are no directories, just files, which is why there cannot be a trigger on a directory per se. Of course, you can work around that by checking the name of the file itself and seeing whether it starts with a particular string or not.
export const generateThumbnailTrigger = functions.storage.object().onFinalize(async (object) => {
  const filePath = object.name;
  if (filePath?.startsWith('temp/')) {
    console.log('start doing something with the file');
  } else {
    return false;
  }
});
