Writing a Firebase/Google Cloud Function, and I need to store an environment value for use across multiple function invocations. The value expires and needs to be re-fetched on occasion and updated in production.
I'm looking for a lightweight option for that. All the advice I can find says you need to spin up a VPC and create a dedicated Redis instance... or you need to create a cloud database and store it there... I just need to save a simple string, and that seems like an awful lot of infrastructure.
One would think environment variables would work, but you can only set them from the command line, and they are only refreshed on deploy...
To store environment data, you can use the firebase functions:config:set command. To read it back at runtime, you can use the functions.config() function.
See https://firebase.google.com/docs/functions/config-env.
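For example, a minimal sketch, assuming a placeholder key someservice.token that was set beforehand from the CLI:

    // Read a value previously set from the CLI with:
    //   firebase functions:config:set someservice.token="abc123"
    // ('someservice.token' is a placeholder key for illustration.)
    const functions = require('firebase-functions');

    exports.useToken = functions.https.onRequest((req, res) => {
      const token = functions.config().someservice.token;
      res.send(`token is ${token.length} characters long`);
    });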
So, is there a way to update/set a value from within my code? I can't rely on the command line to update it whenever it expires, short of something like a cron job that updates the config and redeploys.
In Google Apps Script, for example, I'd just use the 'cache' helper service and store the value for a few hours. Is there any equivalent cache available to Cloud Functions without resorting to storing it on GCS or in a database (it's a single, simple token string...)? Thanks.
Cloud Functions does not offer any form of shared environment variables between functions. You will need to look to an external source such as Secret Manager, Cloud Storage, or one of the databases. I use both Cloud Storage and Datastore for this. I am now looking into Secret Manager as well, since my software usually has secrets too.
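As a rough sketch of the Cloud Storage route (the bucket and object names below are placeholders, not anything this answer prescribes):

    // Persist a small string in a GCS object so every function
    // instance can read and update it. Names are made up.
    const { Storage } = require('@google-cloud/storage');

    const storage = new Storage();
    const file = storage.bucket('my-project-config').file('api-token.txt');

    async function readToken() {
      const [contents] = await file.download();
      return contents.toString('utf8');
    }

    async function writeToken(token) {
      await file.save(token, { contentType: 'text/plain' });
    }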
I'm using Google Firebase Functions as the backend of a small application.
The functions access Firestore and the Realtime Database, so they need a service account credentials file.
On the other hand, I'm trying to automate the deployment of the functions using GitHub Actions.
Currently I place the credentials file inside the repository. I know that it's not secure.
What is the proper way of storing a service account credentials file in this case?
Firebase projects are, in effect, Google Cloud Platform projects.
More specifically, when you create a Firebase project, an associated Google Cloud Platform project is created for it.
Therefore the process for storing credentials is the same as in Cloud Platform, which is to say in a file, somewhere relatively safe.
This file should be accessible to your Function if it is required, and should either have its path specified as part of an environment variable or explicitly declared in code.
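For instance, with the Firebase Admin SDK that looks roughly like this (the file path below is a placeholder):

    const admin = require('firebase-admin');

    // Option 1: GOOGLE_APPLICATION_CREDENTIALS points at the JSON
    // file, and Application Default Credentials pick it up.
    admin.initializeApp({
      credential: admin.credential.applicationDefault(),
    });

    // Option 2: declare the file explicitly in code.
    // const serviceAccount = require('./keys/serviceAccount.json');
    // admin.initializeApp({
    //   credential: admin.credential.cert(serviceAccount),
    // });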
You are already storing it the proper way, because the improper way would be to insert the contents of the JSON file directly into code.
To prevent others from seeing the contents of the JSON file, simply set the repository as private.
Does anyone know how to enable the Message Ordering property on a subscription when working with Firebase and Node.js?
If I navigate to the GCP console and open the queue's subscription detail, I can change the parameter manually but I'd rather have it done when my functions deploy.
This link (https://cloud.google.com/pubsub/docs/ordering#enabling_message_ordering) also talks about a parameter that can be provided when creating a subscription but that doesn't seem to be available the way I usually create/deploy my functions.
It sounds like you're asking if pubsub functions deployed with the Firebase CLI (and not gcloud) can request ordering. Currently, this is not possible. When you deploy a pubsub function with the Firebase CLI, it will automatically create the pubsub topic for you. However, there is no option to set the ordering setting.
If you would like to see this implemented, you should file a feature request on the Firebase CLI GitHub and/or contact Firebase support directly.
Alternatively, you can switch to using gcloud to establish the trigger and create its pubsub topic with your customizations.
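If you do manage the subscription yourself, the Node.js Pub/Sub client also accepts the ordering flag at creation time. A rough sketch (the topic and subscription names are placeholders):

    // Pre-create an ordered subscription with @google-cloud/pubsub.
    // 'my-topic' and 'my-sub' are placeholder names.
    const { PubSub } = require('@google-cloud/pubsub');

    const pubsub = new PubSub();

    async function createOrderedSubscription() {
      const [topic] = await pubsub.createTopic('my-topic');
      const [subscription] = await topic.createSubscription('my-sub', {
        enableMessageOrdering: true, // must be set at creation time
      });
      return subscription;
    }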
You should note that enabling ordering does not necessarily ensure that function invocations will be perfectly serialized with respect to incoming messages. Cloud Functions is designed for incoming events to be handled in parallel as fast as possible by using multiple server instances as needed. There is no guarantee that pubsub messages will be processed fully in order. The ordering only applies to pubsub topic consumer code that runs in a single process. You could try to set the max instances of a function to 1, but that is not a hard guarantee.
If you need fully ordered serialized execution of some code in response to pubsub events, maybe Cloud Functions isn't the product you want to use. Maybe a single App Engine or Compute Engine instance would work better.
I recently set up the Firebase emulators to run my cloud functions locally. After setup, the cloud functions trigger successfully, but writes to the Realtime Database are not reflected in the corresponding local database in the Emulator UI, e.g. when I use
snapshot.after.ref.parent.child('busy').set(true)
So I tried exporting and importing the realtime data, and I discovered that all database writes from cloud functions were being saved in a default database named localhost:4000/?ns=pick2-c468b-default-rtdb, and not in the database that is triggering the write event.
Is this the default behaviour of the emulator, and how do I go about changing or fixing it?
Writes to snapshot.after.ref should go to the database that triggered the Function.
If that is not happening for you, I recommend reporting a bug with a minimal repro on the repo (which should include an entire Cloud Function, and not just the line that you think is going wrong).
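For reference, a minimal repro would look something like this (the path and field names are invented for illustration):

    // Hypothetical minimal repro: an RTDB trigger that writes back
    // through the ref that fired it. Paths are placeholders.
    const functions = require('firebase-functions');

    exports.markBusy = functions.database
      .ref('/jobs/{jobId}/status')
      .onUpdate((change, context) => {
        // This write should land in the same database instance
        // that triggered the function, including in the emulator.
        return change.after.ref.parent.child('busy').set(true);
      });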
I am using Firebase cloud functions for a project, and some of those functions fetch data from a Firebase database.
I'd like to speed up some of these queries by leveraging an LRU cache, but it's not clear whether this is possible with Firebase cloud functions.
Does anyone know if the Firebase cloud functions have access to any kind of cache / semi-persistent memory access? Any help others can offer on this question would be hugely helpful!
If you want to share any sort of persistent data between function invocations, you will have to use another product, and code your function to use that. Cloud Functions themselves only have immediate access to the memory of the server instance that's running a particular invocation, and there could be many server instances all running functions at the same time.
If you're OK with maintaining a small local cache in memory on each instance, that's fine. But you will have problems if you let the cache grow so large that the function can't do its work within the remaining memory. You should also expect the cache to be reset whenever a server instance gets deallocated, which happens outside of your control.
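A minimal sketch of that per-instance approach (the function, path, and unbounded Map are all illustrative; a real LRU would cap its size):

    // Module-scope state survives across invocations on the same
    // warm instance, but is never shared between instances and is
    // lost whenever the instance is deallocated.
    const functions = require('firebase-functions');
    const admin = require('firebase-admin');
    admin.initializeApp();

    const cache = new Map(); // unbounded here; cap it in real use

    exports.getProfile = functions.https.onRequest(async (req, res) => {
      const uid = req.query.uid;
      if (!cache.has(uid)) {
        const snap = await admin.database().ref(`/profiles/${uid}`).once('value');
        cache.set(uid, snap.val());
      }
      res.json(cache.get(uid));
    });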
Since you're working in Google Cloud, consider using a product such as Memorystore to implement your cache.
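A rough sketch with the Node.js redis client, assuming a Memorystore instance reachable through a Serverless VPC Access connector (the IP and key handling are placeholders):

    // Shared cache in Memorystore (Redis). Requires a VPC connector;
    // 10.0.0.3 is a placeholder for the instance's internal IP.
    const { createClient } = require('redis');

    const client = createClient({ url: 'redis://10.0.0.3:6379' });
    const ready = client.connect(); // connect once per warm instance

    async function getCached(key, ttlSeconds, fetchFn) {
      await ready;
      const hit = await client.get(key);
      if (hit !== null) return hit;
      const value = await fetchFn();
      await client.set(key, value, { EX: ttlSeconds });
      return value;
    }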
I have created a Cloud Function which:
Receives some data
Calls a Google API to verify the data is correct
Now, to call the Google API I need to authenticate first. This gives me an access token (which expires) that can be used for subsequent calls.
I'm wondering where can I save this access token so that other invocations of the function can "see" it and use it. I know I cannot use a "global variable" as the function may run on different machines.
The obvious solution is to write it to the Realtime Database... But I don't really like that, as someone could get access to it... Does Cloud Functions provide an object or something into which I can write data?
Cloud Functions is intended to be stateless and provides no persistent storage of its own. Also, Cloud Functions may start up many server instances to handle your functions, so you will have to find a way to share data between those instances as they come and go.
Storing your token in the Realtime Database is probably your best option. As long as you're using security rules correctly, no one will be able to read it.
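A minimal sketch of that pattern (the /internal/apiToken path, the expiresAt field, and fetchNewToken are all made up; the Admin SDK bypasses security rules, so your rules can deny all client access to that path):

    // Cache an expiring token in RTDB so all instances share it.
    // Path and data shape are hypothetical.
    const admin = require('firebase-admin');
    admin.initializeApp();

    const tokenRef = admin.database().ref('/internal/apiToken');

    async function getToken() {
      const snap = await tokenRef.once('value');
      const cached = snap.val();
      if (cached && cached.expiresAt > Date.now()) {
        return cached.value;
      }
      const fresh = await fetchNewToken(); // your Google API auth call
      await tokenRef.set({ value: fresh.value, expiresAt: fresh.expiresAt });
      return fresh.value;
    }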