Firebase initialize SDK with service account from Cloud Storage - firebase

I'm trying to initialize our SDK following the instructions here:
https://firebase.google.com/docs/admin/setup#initialize-sdk
Basically, I created a service account and stored the accompanying JSON for that service account in Cloud Storage. Great. Now the example says I should reference that JSON by:
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the
file path of the JSON file that contains your service account key
But, we aren't storing the file local to these cloud functions, we are storing it in cloud storage. How do we specify a path to a non-local file?

The Admin SDK does not support remote configurations. It needs to be local, either on disk or in memory. You will have to write code to download the config from the storage bucket somehow, then feed that to the SDK.
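For example, here is a minimal Node.js sketch of that download-then-initialize approach, using the @google-cloud/storage client; the bucket and object names are placeholders:

const admin = require("firebase-admin");
const { Storage } = require("@google-cloud/storage");

async function initFromBucket() {
  // Hypothetical bucket and object names; replace with your own.
  const [contents] = await new Storage()
    .bucket("my-config-bucket")
    .file("serviceaccount.json")
    .download();
  admin.initializeApp({
    credential: admin.credential.cert(JSON.parse(contents.toString("utf8"))),
  });
}

Note that the runtime service account of your functions needs read access to that bucket for the download to succeed.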

In order to initialize the Admin SDK locally (not deployed), you need to download the service account JSON file and make it accessible in one of two ways:
Set an environment variable GOOGLE_APPLICATION_CREDENTIALS to the file's path, e.g. "/home/user/serviceaccount.json".
Include the service account file alongside your functions code and access it programmatically via a relative path (see the sketch below).
Once deployed, there is a runtime service account, PROJECT_ID@appspot.gserviceaccount.com, and the environment variables are already set.
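A minimal sketch of the relative-path option, assuming a placeholder filename serviceaccount.json sitting next to the functions code:

const admin = require("firebase-admin");
// "./serviceaccount.json" is a placeholder; keep the real file out of version control.
const serviceAccount = require("./serviceaccount.json");
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
});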

Related

Where to store Google Service Account Key while using Google Firebase Functions

I'm using Google Firebase Functions as the backend of a small application.
The functions access Firestore and the Realtime Database, so they need a service account credentials file.
On the other hand, I'm trying to automate the deployment of the functions using GitHub Actions.
Currently I place the credentials file inside the repository. I know that it's not secure.
What is the proper way of storing service account credentials file in this case?
Firebase projects are, in effect, Google Cloud Platform projects.
More specifically, when you create a Firebase project, an associated Google Cloud Platform project is created for it.
Therefore the process for storing credentials is the same as in Cloud Platform, which is to say in a file, somewhere relatively safe.
This file should be accessible to your Function if it is required, and its path should either be specified in an environment variable (see the sketch below) or explicitly declared in code.
You are already storing it the proper way, because the improper way would be to insert the contents of the JSON file directly into code.
To prevent others from seeing the contents of the JSON file, simply set the repository as private.
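If you would rather not reference a path in code at all, a sketch of the environment-variable route: with GOOGLE_APPLICATION_CREDENTIALS pointing at the key file, the SDK's application-default lookup finds it on its own.

const admin = require("firebase-admin");
// Reads the key from the file named by GOOGLE_APPLICATION_CREDENTIALS.
admin.initializeApp({
  credential: admin.credential.applicationDefault(),
});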

How Can I Obtain GCP service account credentials on Google Cloud Run?

This page explains both:
Obtaining and providing service account credentials manually for developing locally, deploying on-premises, or deploying to another public cloud.
Obtaining credentials on Compute Engine, Kubernetes Engine, App Engine flexible environment, and Cloud Functions
But there is no mention of obtaining credentials on Cloud Run. I'd appreciate it if you could give instructions for obtaining credentials and setting up firebase-admin initializeApp and firebase initializeApp for authentication on Cloud Run.
The documentation suggests that you can use the default service account just like other Google Cloud products as described here. The Firebase Admin SDK should use that account when initialized with no parameters.
There are also steps described if you want to use a non-default service account, which you can simply configure in the console or provide with gcloud.
If you must provide a file that's readable at runtime, you will have to deploy an image with that file added to the image. There is no short set of steps to add that file: you will have to make your Docker build include it in a readable location, and your code will need to know where to look for it in order to load it.
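A sketch of that, assuming the key was copied into the image at a placeholder path /app/serviceaccount.json (e.g. via a COPY step in your Dockerfile):

const admin = require("firebase-admin");
// The path is an assumption; it must match wherever your Docker build put the file.
admin.initializeApp({
  credential: admin.credential.cert(require("/app/serviceaccount.json")),
});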

How can I call "admin.initializeApp();" with no arguments locally?

I am always grateful for your help.
I want to write admin.initializeApp(); with no arguments, both locally and in production.
When I deploy the functions to production with no arguments, it works.
But locally, it requires me to write it like below:
const serviceAccount = require("/home/yhirochick/development/ServiceAccountKey.json");
admin.initializeApp({
credential: admin.credential.cert(serviceAccount),
databaseURL: "https://xxxx.firebaseio.com/"
});
In the official documentation it says that configuration is applied automatically when you initialize the Firebase Admin SDK with no arguments.
But when I execute the command firebase serve --only functions locally and make some requests with Postman, I get the error below:
[2019-07-22T06:45:26.227Z] #firebase/database: FIREBASE WARNING: Provided
authentication credentials for the app named "[DEFAULT]" are invalid. This
usually indicates your app was not initialized correctly. Make sure the
"credential" property provided to initializeApp() is authorized to access the
specified "databaseURL" and is from the correct project.
I want to know how I can call admin.initializeApp(); with no arguments locally.
I have grappled with this too, and I don't think the local testing scenario is currently explained very well in the official documentation. But here is a solution:
For your local environment you need to download the Firebase project's service account JSON file (found in the Firebase console under Project settings -> Service accounts) and set an environment variable GOOGLE_APPLICATION_CREDENTIALS to point to the file:
# Linux/macOS version
export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_YOUR_SERVICE_ACCOUNT_FILE]"
Read more here, including how to do this on Windows.
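For reference, the Windows equivalents look like this:

REM Windows (CMD) version
set GOOGLE_APPLICATION_CREDENTIALS=[PATH_TO_YOUR_SERVICE_ACCOUNT_FILE]

# Windows (PowerShell) version
$env:GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_YOUR_SERVICE_ACCOUNT_FILE]"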
Now you will be able to use admin.initializeApp() (with no arguments) locally.
A possible downside of this approach is that you have to set the environment variable each time you fire up a terminal, before you start the Firebase emulator, because the variable disappears when the session ends.
Automate it...
You could automate the export ... command by bundling it together with the command that fires up the emulator, e.g. by adding an entry to the scripts section of your package.json:
"local": "export GOOGLE_APPLICATION_CREDENTIALS='[PATH_TO_YOUR_SERVICE_ACCOUNT_FILE]' && firebase emulators:start --only functions"
Then, in this example, you would only need to type npm run local.
Alternative: provide explicit credentials in local environment only
Look at this example: https://stackoverflow.com/a/47517466/1269280.
It basically uses a runtime Node environment variable to separate local from production, and then uses the explicit way of providing credentials in the local environment only.
This is my preferred way of doing things, as I think it is more portable. It allows me to put the service account file inside my codebase and not deal with its absolute file path.
If you do something like this, then remember to exclude the service account file from your repo! (It contains sensitive info.)
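A minimal sketch of that pattern, assuming (as in the linked answer) that the NODE_ENV variable distinguishes production from local; the variable name and the key file location are assumptions:

const admin = require("firebase-admin");

if (process.env.NODE_ENV === "production") {
  // Deployed: credentials are discovered automatically (see the background below).
  admin.initializeApp();
} else {
  // Local only: explicit credentials from a file kept out of the repo.
  const serviceAccount = require("./serviceaccount.json");
  admin.initializeApp({
    credential: admin.credential.cert(serviceAccount),
    databaseURL: "https://xxxx.firebaseio.com/",
  });
}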
Background: difference between production and local service account discovery
The reason that admin.initializeApp() (with no arguments) works out of the box in production is that when you deploy to production, i.e. Firebase Functions, the code ends up in a Google-managed environment. In Google-managed environments like Cloud Functions, Cloud Run, App Engine, etc., the Admin SDK has access to your application's default service account (the one you downloaded above) and will use it when no credentials are specified.
This is part of Google Cloud's Application Default Credentials (ADC) strategy, which also applies to Firebase Functions.
Now, your local environment is not a Google-managed environment, so it doesn't have access to the default service account credentials. To Google Cloud, your local box is just an external server trying to access your private Firebase resources. So you need to provide your service account credentials in one of the ways described above.
Before I knew this, I thought that because I was already logged in to Firebase via my terminal (i.e. firebase login) and was able to deploy code to Firebase, the Firebase emulator would also have the necessary credentials for the Firebase Admin SDK, but this is not the case.

Error 403 because Google Cloud Vision client points to wrong project

I'm trying to work through the Google Cloud Vision Python example but I'm getting an authentication error.
This is not my only Google Cloud project, and my GOOGLE_APPLICATION_CREDENTIALS environment variable is set to the key file path for my BigQuery project. I thought I could override this by using this statement:
client = vision.ImageAnnotatorClient.from_service_account_json(key_path)
where key_path is the path of the JSON key file associated with my (Cloud Vision API-enabled) vision project. However, I'm getting the 403 error from this call:
response = client.label_detection(image=image)
Apparently, even though I specified the key file path for the ImageAnnotatorClient, it still looks at my BigQuery project's credentials and spits the dummy because the Vision API is not enabled for that project.
Do I really have to change the environment variable every time I change the project?
It seems that the Cloud Vision project ID does not propagate to the Python environment from either the Cloud Console or the credentials file. I fixed the reference using the gcloud CLI:
gcloud config set project my_vision_project
The label_detection call works now.

How do I give an OpenStack server permissions to call the OpenStack APIs?

I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program running on an OpenStack server to be able to programmatically make changes through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job could, by itself, copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities, for example a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Then your job would use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
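For example, a rough sketch of the token request against Rackspace's v2.0 identity endpoint (the username and API key are placeholders; other OpenStack deployments use different auth payloads and endpoints):

curl -s https://identity.api.rackspacecloud.com/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "jobuser", "apiKey": "YOUR_API_KEY"}}}'

The returned token then goes in the X-Auth-Token header on subsequent Cloud Block Storage and Cloud Files calls.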
