Deploying functions in multiple regions with a unified codebase - Firebase

I have a fairly simple requirement: I need an identical replica of my Firebase functions, Storage bucket, and Firestore database in multiple regions so that data never moves between regions, one deployment each for the EU, GB, the US, and so on.
Because you can only have one Firestore database per Firebase project, I'm creating a new project per region, as recommended. I can redirect my bucket reads/writes to the correct region by using per-project environment config to define which bucket to write to. So far so good.
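For illustration (not part of the original question), a minimal sketch of that per-project bucket selection could look like this, assuming a config key such as env.bucket set separately in each project:
const functions = require("firebase-functions");
const admin = require("firebase-admin");

admin.initializeApp();

// Hypothetical per-project key, set with:
//   firebase functions:config:set env.bucket="my-eu-bucket"
const bucketName = functions.config().env.bucket;

exports.saveFile = functions.https.onRequest(async (req, res) => {
  // Writes land in whichever region-specific bucket this project configures
  await admin.storage().bucket(bucketName).file("uploads/hello.txt").save("hello");
  res.send(`Wrote to ${bucketName}`);
});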
Now the last bottleneck is the functions themselves.
The problem is that by default functions deploy to "us-central1" rather than the Firebase project's region, so it seems the only way to specify a region is with the .region("europe-west3") specifier in the codebase. But because I want a single unified codebase across all projects, changing this on a per-project basis is a bit cumbersome.
Any suggestions on how best to manage this?

You can store the region value as environment config using firebase functions:config:set:
firebase functions:config:set env.region="us-central1"
After running functions:config:set, you must redeploy functions to make the new configuration available.
Then get the environment variable inside your function by using this code:
exports.myFunction = functions
  .region(functions.config.env.region)
  .https.onRequest((req, res) => {
    res.send("Hello");
  });
With this, you no longer need to change region inside your code. But you still have to change configs and redeploy functions on every project if you want to change regions.
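As a small extension of this (a sketch, not part of the original answer), you can read the config once with a fallback, so a project that never set env.region still deploys to the default region:
const functions = require("firebase-functions");

// Fall back to the default Functions region if env.region was never set
const REGION =
  (functions.config().env && functions.config().env.region) || "us-central1";

exports.myFunction = functions
  .region(REGION)
  .https.onRequest((req, res) => {
    res.send(`Hello from ${REGION}`);
  });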
An additional solution is to use Cloud Build to automate everything. I haven't fully tested it yet, but here's what I came up with.
First, follow the instructions on how to use the Firebase builder tool. You need this community-provided image to run Firebase CLI commands on Cloud Build. Once finished, make sure that you have the following APIs enabled on your projects:
Cloud Resource Manager API
Firebase Management API
and, in your Cloud Build settings, that Firebase Admin is enabled.
Then try this cloudbuild.yaml file. Cloud Build will use the builder image from your first project to deploy to your other projects:
steps:
  # Setup First Project
  - name: gcr.io/project-id1/firebase
    id: 'Use Project 1'
    args: ['use', 'project-id1']
  - name: gcr.io/project-id1/firebase
    id: 'Set Firebase Environment Config (Project 1)'
    args: ['functions:config:set', 'env.region=us-central1']
  - name: gcr.io/project-id1/firebase
    id: 'Deploy function1'
    args: ['deploy', '--project=project-id1', '--only=functions']
  # Setup Second Project
  - name: gcr.io/project-id1/firebase
    id: 'Use Project 2'
    args: ['use', 'project-id2']
  - name: gcr.io/project-id1/firebase
    id: 'Set Firebase Environment Config (Project 2)'
    args: ['functions:config:set', 'env.region=europe-west3']
  - name: gcr.io/project-id1/firebase
    id: 'Deploy function2'
    args: ['deploy', '--project=project-id2', '--only=functions']
  # And so on...
Note: Replace project-id1 and project-id2 with your actual project IDs. Step ids must also be unique within a build, hence the per-project names above.
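You can then trigger the pipeline with gcloud builds submit --config cloudbuild.yaml . from the repository root, assuming the Cloud Build API is enabled and the builder image has been pushed to your registry.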

exports.myFunction = functions
  .region(functions.config.env.region)
  .https.onRequest((req, res) => {
    res.send("Hello");
  });
config is a function, so this adjustment was necessary:
exports.myFunction = functions
  .region(functions.config().env.region)
  .https.onRequest((req, res) => {
    res.send("Hello");
  });

Related

How to structure a multi-region firebase app?

Currently, I have this structure:
- A couple of functions in a single region
- A firestore database in a single region
A while ago, the whole region went down, which caused issues for the users.
I want to change the structure to be in multiple regions, both the database and the functions.
How is that done in the firebase world?
Keeping in mind that when I deploy the functions in multiple regions, there should be some way to route requests to the function in the region close to the user.
Once the database is created, the region can’t be changed. You'll have to create a new project with the new region.
By default, functions run in the us-central1 region. If you have a function that is currently in the default functions region of us-central1 and you want to migrate it to asia-northeast1, you need to first modify your source code to rename the function and revise the region. You can refer to this document.
// before
const functions = require('firebase-functions');

exports.webhook = functions
  .https.onRequest((req, res) => {
    res.send("Hello");
  });

// after
const functions = require('firebase-functions');

exports.webhookAsia = functions
  .region('asia-northeast1')
  .https.onRequest((req, res) => {
    res.send("Hello");
  });
Then deploy by running:
firebase deploy --only functions:webhookAsia
After renaming the function with the new region, now there are two identical functions running in us-central1 and asia-northeast1.
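Once you have verified the new region and updated clients, you can remove the old copy with firebase functions:delete webhook --region us-central1 (assuming webhook is the old function's name).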
You can also check these Stack Overflow questions (link1 & link2), which might help.

Firebase still charge even though my Cloud Storage is empty [duplicate]

I'm trying to understand what eu.artifacts.%PROJECT NAME%.appspot.com is. It's currently taking up 800 MB of storage from my daily 5 GB limit. It contains only application/octet-stream type files. This bucket was created automatically and the file path is eu.artifacts....appspot.com/containers/images. The two heaviest files there weigh as much as 200 MB and 130 MB. I tried deleting the bucket, but it was automatically created again. Users can upload pictures on my website, but that bucket currently only takes up about 10 MB containing all the user images.
So my question is: What is this bucket for and why does it weigh so much?
firebaser here
If you are using Cloud Functions, the files you're seeing are related to a recent change in how the runtime (for Node 10 and up) is built.
Cloud Functions now uses Cloud Build to create the runtime (for Node 10 and up) for your Cloud Functions. And Cloud Build in turn uses Container Registry to store those runtimes, which stores them in a new Cloud Storage bucket under your project.
For more on this, also see this entry in the Firebase pricing FAQ on Why will I need a billing account to use Node.js 10 or later for Cloud Functions for Firebase?
Also see this thread on the firebase-talk mailing list about these artifacts.
🛑 Update: some other answers suggest deleting artifacts from the Storage buckets, and even setting up lifecycle management on them to do so automatically. This leads to dangling references to those artifacts in the Container Registry, which breaks future builds.
To safely get rid of the artifacts, delete the container from the Container Registry console (it's under the gcf folder) or with a script. That will then in turn also delete the artifacts from your Storage bucket.
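If you go the script route, the images typically live under a path like eu.gcr.io/PROJECT_ID/gcf/REGION; for example, gcloud container images list --repository=eu.gcr.io/PROJECT_ID/gcf/europe-west3 will list them (the registry host and region here are just examples; substitute your own).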
Since version 9.14 of the CLI, the firebase deploy process automatically cleans up its container images after a deploy. So if you upgrade to the latest version, you should no longer get additional artifacts in your storage buckets.
I've consulted GCP support and here are a few things
Cloud Functions caused the surge in storage usage
Since these artifacts are not stored in the default bucket, you'll be charged for them even if your total bytes stored haven't reached the free-tier limit
Remove the artifact bucket at https://console.cloud.google.com/storage/browser. According to the support staff
Regarding the artifacts bucket, you can actually get rid of them, as they are storing previous versions of the function. However, I do not recommend deleting the "gcf-sources..." bucket(s) , as it contains the current image, so deleting this bucket would mess up your function.
I tried removing it entirely, and so far it is not causing trouble. I'll update if it breaks things later.
Edit 201118: See the comment below; you might need to keep the bucket while removing all the content in it.
Adding to @yo1995
I consulted with Firebase Support and they confirmed that the artifacts bucket should not be deleted. Basically the artifacts are used to help build the final image to be stored in the "gcf-sources" bucket.
To quote them directly
"you are free to delete the contents in "XX.artifacts", but please leave the bucket untouched, it will be used in the following deployment cycles."
There might be some unintended behaviour if you delete the artifacts bucket entirely.
Also "The team is working to clean up this bucket automatically, but there are some restrictions that they need to solve before publishing the solution."
For the time being I set the bucket to auto-delete files older than 1 day.
Adding to @yo1995's response, you can delete the artifacts in the bucket without needing to go into GCP. Staying in Firebase, you go to Storage, then "Add a Bucket". From there, you will see the option to import the gcp and artifact buckets. Next, you can delete the artifacts in the buckets accordingly.
Per some comments received, it's important not to delete the bucket. Rather, delete the artifacts in the bucket only!
EDIT early 2022: This whole answer is now moot. It may have worked in the past, but the actual root cause of the problem is now fixed in the Firebase CLI.
How to reduce storage
There is a great answer to the issue above, but actually fixing it requires some further digging.
To help future developers cut right to the chase, here is the result you should see after adding the following rules to your project in GCP.
The orange line is the us-artifacts.<your-project>.appspot.com bucket.
Steps to fix the issue
Navigate to https://console.cloud.google.com/
Open the GCP project that corresponds to the Firebase project
In the menu, choose Storage -> Browser
Click on the offending us-artifacts.<your-project>.appspot.com bucket
Go to the 'Lifecycle' tab and add a life span of 3 days
Add a rule
Delete Object
Age, 3 Days
NB: Results will not appear on the usage graph until about 24 hours later
Caveat
Firebase uses containers that back-reference previous containers, so if you set a period of 3 days and your firebase deploy of functions starts failing, you will need to update the local name of your function to include versioning, and either specify a build flag to delete old versions, remove them from your firebase.json, or manually delete obsolete functions.
Using versioned API type functions
In your entrypoint (assuming index.ts), and assuming you have initialised Firebase with
admin.initializeApp(functions.config().firebase)
import * as functions from 'firebase-functions'
// define the app as a cloud function called APIv1 build xxxxxx
export const APIv1b20201202 = functions.https.onRequest(main)
where main is the name of your app
and in your firebase.json
...
"hosting": {
  "public": "dist",
  "ignore": ["firebase.json", "**/.*", "**/node_modules/**", "**/tests/**"],
  "rewrites": [
    {
      "source": "/api/v1/**",
      "function": "APIv1b20201202"
    }
  ]
},
...
Or, to Manually Update
# Deploy new function called APIv11
$ firebase deploy --only functions:APIv11
# Wait until deployment is done; now both APIv11 and APIv10 are running
# Delete APIv10
$ firebase functions:delete APIv10
Firebase said they have released a fix (as of June 2021):
https://github.com/firebase/firebase-tools/issues/3404#issuecomment-865270081
Fix is in the next version of firebase-tools, which should be coming today.
To fix:
Run npm i -g firebase-tools.
Browse your containers in Cloud Storage https://console.cloud.google.com/storage/browser/ (look for a bucket named gcf-sources-*****-us-central1)
Functions deleted via firebase deploy --only functions seem to have their artifacts removed automatically, but if you delete them through the UI, the artifacts remain.
After some research and emailing with the firebase team, this is what was suggested to me.
We are aware that Cloud Build is not automatically deleting old artifacts, so its size keeps increasing; as a workaround, I'd recommend deleting the files inside the bucket in order to reduce any possible charges.
You can delete the files in the mentioned buckets by going to the GCP console (use the same credentials as for the Firebase console) -> select the correct project -> from the menu in the upper left corner, select Storage -> Browser.
You will see all the buckets that belong to your project, click on the bucket you prefer, and you can delete the files from there.
One other option that you may try is managing the bucket's object lifecycles. There is an option to delete objects when they meet all conditions specified in the lifecycle rule, here is a link with one example about this option. In this way, the bucket objects will be deleted automatically.
I have created a configuration file I named storage_artifacts_lifecycle.json with contents:
{
  "lifecycle": {
    "rule": [
      {
        "action": { "type": "Delete" },
        "condition": {
          "age": 21
        }
      }
    ]
  }
}
I configure my storage lifecycle with the command:
gsutil lifecycle set ./firebase/storage_artifacts_lifecycle.json gs://us.artifacts.${MY_PROJECT_ID}.appspot.com
and I validate its results after running with
gsutil lifecycle get gs://us.artifacts.${MY_PROJECT_ID}.appspot.com
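If you would rather set the same rule from Node instead of gsutil, a rough equivalent using the @google-cloud/storage client could look like this; the bucket name and the 21-day age simply mirror the example above, so treat it as a sketch rather than the author's method:
const { Storage } = require("@google-cloud/storage");

const storage = new Storage();
// Same artifacts bucket as in the gsutil example above
const bucket = storage.bucket(`us.artifacts.${process.env.MY_PROJECT_ID}.appspot.com`);

// Delete objects older than 21 days, matching storage_artifacts_lifecycle.json
bucket
  .setMetadata({
    lifecycle: {
      rule: [{ action: { type: "Delete" }, condition: { age: 21 } }],
    },
  })
  .then(() => console.log("Lifecycle rule applied"));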
Hope this helps some!
Adding to @d-_-b
As of 7th July 2022
It is now announced on the Firebase support page as well:
If you are seeing unexpected amounts of Cloud Storage used, this is
likely caused by a known issue with the cleanup of artifacts created
in the function deployment process.
This issue is now resolved; if you are still seeing unexpected usage,
update the Firebase CLI and re-deploy your Cloud Functions.
I did a bit of research on the topic and found the best solution for me: a script that I run before each deploy of my Firebase functions. The script scans my container images and:
Keeps the ones with the latest tag.
Deletes all the images except the last two.
This approach is semi-automated. The storage only grows when I deploy anyway, so it works really well for me.
The script is written in JavaScript for an environment with Node and the gcloud CLI available.
const spawn = require("child_process").spawn;

const KEEP_AT_LEAST = 2;
const CONTAINER_REGISTRIES = [
  "gcr.io/<your project name>",
  "eu.gcr.io/<your project name>/gcf/europe-west3"
];

async function go(registry) {
  console.log(`> ${registry}`);
  const images = await command(`gcloud`, [
    "container",
    "images",
    "list",
    `--repository=${registry}`,
    "--format=json",
  ]);
  for (let i = 0; i < images.length; i++) {
    console.log(` ${images[i].name}`);
    const image = images[i].name;
    let tags = await command(`gcloud`, [
      "container",
      "images",
      "list-tags",
      image,
      "--format=json",
    ]);
    const totalImages = tags.length;
    // do not touch `latest`
    tags = tags.filter(({ tags }) => !tags.find((tag) => tag === "latest"));
    // sorting by date
    tags.sort((a, b) => {
      const d1 = new Date(a.timestamp.datetime);
      const d2 = new Date(b.timestamp.datetime);
      return d2.getTime() - d1.getTime();
    });
    // keeping at least X number of images
    tags = tags.filter((_, i) => i >= KEEP_AT_LEAST);
    console.log(` For removal: ${tags.length}/${totalImages}`);
    for (let j = 0; j < tags.length; j++) {
      console.log(
        ` Deleting: ${formatImageTimestamp(tags[j])} | ${tags[j].digest}`
      );
      await command("gcloud", [
        "container",
        "images",
        "delete",
        `${image}@${tags[j].digest}`,
        "--format=json",
        "--quiet",
        "--force-delete-tags",
      ]);
    }
  }
}

function command(cmd, args) {
  return new Promise((done, reject) => {
    const ps = spawn(cmd, args);
    let result = "";
    ps.stdout.on("data", (data) => {
      result += data;
    });
    ps.stderr.on("data", (data) => {
      result += data;
    });
    ps.on("close", (code) => {
      if (code !== 0) {
        console.log(`process exited with code ${code}`);
      }
      try {
        done(JSON.parse(result));
      } catch (err) {
        done(result);
      }
    });
  });
}

function formatImageTimestamp(image) {
  const { year, month, day, hour, minute } = image.timestamp;
  return `${year}-${month}-${day} ${hour}:${minute}`;
}

(async function () {
  for (let i = 0; i < CONTAINER_REGISTRIES.length; i++) {
    await go(CONTAINER_REGISTRIES[i]);
  }
})();
It runs the following commands:
# finding images
gcloud container images list --repository=<your repository>
# getting metadata
gcloud container images list-tags <image name>
# deleting images
gcloud container images delete <image name>@<digest> --quiet --force-delete-tags
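In practice you could wire this up as a predeploy step, for example node ./scripts/clean-artifacts.js && firebase deploy --only functions (the script path here is just an assumption; use wherever you saved it).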
A blog post describing my findings is available here https://krasimirtsonev.com/blog/article/firebase-gcp-saving-money
As an alternative, you can create a lifecycle rule to delete the objects inside the folder.
Set the age to 1 day, so it will delete all objects in the folder that are more than 1 day old.

Error: Could not load the default credentials (Firebase function to firestore)

I am attempting to write an onCall function for Firebase Cloud Functions that performs advanced querying tasks on a Firestore database (e.g. checking a text query against AutoML Natural Language to get a category), but I keep running into a problem trying to query the database from the function:
Error getting documents :: Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/srv/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:229:7)
Function:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

const db = admin.firestore();

exports.query = functions.https.onCall((data, context) => {
  const text = data.text;
  var results = [];
  const promise = db.collection('providers').get()
  promise.then((snapshot) => {
    console.log('marker');
    snapshot.forEach((doc) => {
      results.push({id: doc.id, data: doc.data()});
    });
    console.log('yessir');
    return {results: results};
  }).catch((err) => {
    console.log('Error getting documents :: ', err)
    console.log('nosir');
    return {results: "no results"};
  });
});
Longer output:
Function execution started
Function execution took 8ms, finished with status code: 200
Error getting documents :: (etc, same error)
nosir
Example 2 (no change in running):
Function execution started
Function execution took 1200 ms, finished with status code: 200
marker
yessir
I can't figure out where this problem is coming from or how to resolve it.
Any help?
Regards.
What I first did to solve it was to add my Firebase Admin SDK key to my project.
I downloaded it at
https://console.firebase.google.com/u/0/project/**YOUR_PROJECT_ID**/settings/serviceaccounts/adminsdk
then I changed admin.initializeApp(); to:
admin.initializeApp({
  credential: admin.credential.cert(require('../keys/admin.json'))
});
My folder structure is
├── keys
│   ├── admin.json
├── src
│   ├── index.ts
HOWEVER, a better practice and safer approach, as some mentioned already:
You could use environment variables to store your credentials; this way you won't commit them to a repository such as GitHub, you keep them safer from security breaches, and you won't hardcode them.
Depending on your project and where you'll deploy it there's a different way to do it.
There are many tutorials around on how to create and access env variables (like this one), but you could name it like the example below:
GOOGLE_APPLICATION_CREDENTIALS="/home/admin.json"
I had the same error "Could not load the default credentials".
The error occured after updating my project dependencies with npm update.
More precisely firebase-admin and firebase-functions.
Before update:
"dependencies": {
"#google-cloud/firestore": "^1.3.0",
"firebase-admin": "~7.0.0",
"firebase-functions": "^2.2.0"
}
After update:
"dependencies": {
"#google-cloud/firestore": "^1.3.0",
"firebase-admin": "^8.6.0",
"firebase-functions": "^3.3.0"
}
I added the serviceAccountKey.json to my project and changed the imports to the code provided in the service account settings of my Firebase project.
From :
var admin = require('firebase-admin')
admin.initializeApp()
To:
var admin = require('firebase-admin');
var serviceAccount = require('path/to/serviceAccountKey.json');
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://my-project.firebaseio.com'
});
See @Fernando Rocha's answer below for how to access the service account settings of your Firebase project.
@aldobaie's answer helped me figure out what was going on for my use case. For those who are not looking to add async/await to all their calls, remember that the Firestore calls return promises, so prepending them with return has the same effect.
In my case:
function doSomething(...) {
  return admin.firestore().collection(...).doc(...).get()
    .then((doc) => {...})
    .catch(err => {...})
}

module.exports = functions.firestore.document('collection/{docId}').onWrite((change, context) => {
  return doSomething()
})
I think the accepted answer goes against Firebase's recommended configuration. The function environment has access to admin credentials already, and passing your key in the code is not recommended.
I do it like this:
const functions = require('firebase-functions')
const admin = require('firebase-admin')
admin.initializeApp(functions.config().firebase)
I ran into the same problem myself. Sometimes the function works, and many times it would throw the Error: Could not load the default credentials error.
The problem, I believe, is solved by waiting for the callbacks: you have to keep the function running until the callbacks have completed, using the async and await keywords.
Firebase Cloud Functions don't let callbacks keep running once the function has terminated! That's why we get the Error: Could not load the default credentials error.
So, whenever you have a .then() call, prefix it with await, mark the function it's inside with async, and prefix any call to that function with await.
async function registerUser(..) {
  ...
  await admin.firestore().collection(..)...
  ...
}
I hope this helps you out!
Another option is to set the service account key in an environmental variable instead of setting it with a call to firebaseAdmin.initializeApp({ credential }).
Linux
export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
Windows PowerShell
$env:GOOGLE_APPLICATION_CREDENTIALS="[PATH]"
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\[FILE_NAME].json"
Postscript: An even better option might be to use the local emulator suite.
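For example, firebase emulators:start --only functions,firestore runs Functions and Firestore locally; see the answer further down about emulator environment variables for how the Admin SDK decides whether to talk to the emulators or to production.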
Alright, so I had this error as well and spent a frustrated few days going over multiple sites, articles, videos, etc to try and figure out the cause of this problem so I could get an adequate answer for both myself and everyone else who is struggling.
There are answers to this question in this thread already. However, I tried following most of them to no avail. Some have security issues and others are just too vague to follow. I decided to post a thorough answer which also addresses the security issues you would have if you followed some of the other answers.
Alright now that I've gotten that out of the way lets get started!
First of all, you're going to need to go to this link: Getting started with authentication
You should see this in the center of your screen -
Next, click on the button I've marked in green. This will bring you to the create service account key page.
You should see a similar screen to the below image -
For the Service Account option, select new service account.
Create a name for your service account. This is not important, name it whatever you like.
For the role option select Project -> Owner
Finally, select JSON option for key type and then hit create.
This should create and download a .json file. Place this file somewhere smart and safe. I created a folder called 'credentials' in the root of my project and placed it in there.
Also I renamed the file to something more readable. While this isn't necessary, following good file/folder naming and structuring practices is important and I would advise you to rename it to something more readable.
(It's important to note that this file is personal and should not be included in any GitHub repositories/Firebase production/etc. This file is for you and you alone!)
Next open a command prompt window and type in the following command -
set GOOGLE_APPLICATION_CREDENTIALS=C:\Users\Username\Path\To\File\filename.json
This will create an environment variable that is linked securely to your credentials which firebase will recognize and use when you make calls to authenticate yourself.
(Note: this is the command for Windows. If you're using Mac/Linux, go to the 'Getting started with Authentication' page mentioned earlier to get the appropriate command for your operating system)
There you go, the issue should now be fixed. If anyone has any further questions or problems, feel free to comment below and I'll do my very best to help you. I know how frustrating it can be to be stuck with an error like this.
I hope this helps someone at the very least. Happy Programming.
C.Gadd
I did not want to use @Fernando's solution, even though there is nothing wrong with it.
I have prd and non-prd environments. I use the firebase use command to push the changes to the correct environment. When I deploy, Firebase uses the default service account. Also, I do not want to have the keys in the project folder or in my git repo.
The way I solved it might not work for others, but I want to share it here.
The issue appeared for me when I updated the permissions of the Firebase project to give a viewer editor permission. I made that person the owner and then rolled back to editor, and the error went away. I can't justify it as a fix, but it worked for me and I did not have to download the key.
Instead of using a serviceAccountKey.json file, you can first set .env values from it and then use those:
import * as firebaseAdmin from "firebase-admin";

const adminCredentials = {
  credential: firebaseAdmin.credential.cert({
    projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID,
    clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
    privateKey: JSON.parse(process.env.FIREBASE_PRIVATE_KEY || ""),
  }),
  databaseURL: process.env.NEXT_PUBLIC_FIREBASE_DATABASE_URL,
};

if (!firebaseAdmin.apps.length) {
  firebaseAdmin.initializeApp(adminCredentials);
}

const firestore = firebaseAdmin.firestore();
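(Side note: the JSON.parse on FIREBASE_PRIVATE_KEY is presumably there because the key is stored in .env as a JSON-quoted string with escaped \n sequences; parsing it restores the real newlines before the credential is built.)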
Old answer:
This is a known bug in Firebase. See the progress here: https://github.com/firebase/firebase-tools/issues/1940
However, in the meantime there are a few options to resolve this:
1 Explicitly passed via code
var admin = require("firebase-admin");
var serviceAccount = require("path/to/serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://your-app.firebaseio.com"
});
This hard-coding is not recommended, and the JSON file will not be accessible on the server.
2 Passed via GOOGLE_APPLICATION_CREDENTIALS
I'd recommend this way; set the environment variable:
GOOGLE_APPLICATION_CREDENTIALS=path/to/serviceAccountKey.json
For Windows (assuming the JSON file is at the root path of your project):
using powershell:
$env:GOOGLE_APPLICATION_CREDENTIALS='serviceAccountKey.json'
using an npm script (notice there is no space before &&):
"serve": "set GOOGLE_APPLICATION_CREDENTIALS=serviceAccountKey.json&& npm run start",
(for some reason cross-env didn't work)
3 Available at a well-known filesystem path due to gcloud
by installing the gcloud SDK and running gcloud auth application-default login
4 Available from the Compute Engine metadata API when running on GCP
I had the same problem in Firebase: Error: "Could not get default credentials."
Go to the Firebase console and open the project settings, where you will find the Service Accounts option. Click there and you will see Generate new private key under your project settings.
Copy the code for your project's language and add it to your project file.
var admin = require("firebase-admin");
var serviceAccount = require("path/to/serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://your-database-url-that-is-given-under-admin-sdk-snippets"
});
After generating the key you will have the option to download it; put it in the project folder. Also set the path in var serviceAccount = require("path/to/serviceAccountKey.json");
That's it, you are ready.
None of the above.
You may just:
firebase login - it will open a browser login.
As soon as you log in, return to the console and run:
firebase init - it will now run successfully.
I had the same issue.
Go to your settings page in Firebase => Service accounts.
Firebase settings: 1. Parameters 2. Accounts 3. Download the file and rename it admin.json
Copy the snippet shown there and paste it in, require 'admin.json', and run firebase deploy.
admin.initializeApp(functions.config().firebase);
also works.
This error can also occur when the cloud function is not terminated properly.
Whenever you write a cloud function, make sure you return the promise once the function's processing is over, so that the cloud function knows your process is complete.
If you don't return the promise, there is a chance your cloud function will terminate before the processing is complete.
You can refer to this to learn how to terminate cloud functions:
Terminate cloud functions
Download your firebase service account into your project and reference it like this:
var admin = require("firebase-admin");
var serviceAccount = require("path/to/serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "<database-url>"
});
For those who come here from a search engine trying to figure out why their Google Cloud Function fails with:
Error: Could not load the default credentials. Browse to
https://cloud.google.com/docs/authentication/getting-started for more
information. at GoogleAuth.getApplicationDefaultAsync
but none of the above helped, you can try to update all(?) of your @google/whatever dependencies:
npm i -E @google-cloud/firestore@latest. Then rebuild, deploy, and try again. It happened to me a few times recently and this worked.
I just had the same problem. To solve it, just update your Node packages by running npm update inside your project-dir/functions/ directory. Finally, deploy again.
On MacOS I had to do the following:
export GOOGLE_APPLICATION_CREDENTIALS=/Users/myname/serviceAccountKey.json
I was getting credential error because the locally running functions emulator could not securely talk to firebase auth running in production.
Google Cloud Reference
For those who still get the same problem even after downloading the account key and using it inside your code, make sure it is inside your functions folder.
One thing that's a bit difficult to find in the docs is that the firebase-admin SDK only uses the emulators when environment variables tell it to. If you use the service account JSON key as described in some answers here, firebase-admin will talk to prod (on Google Cloud) rather than the emulated version, even if everything else you're doing is on the emulators.
Since most likely you would rather use the emulators for local testing, here's how I set my environment variables in Mac ~/.zshrc:
export GCLOUD_PROJECT="your-project-id"
export FIRESTORE_EMULATOR_HOST=localhost:8080
export FIREBASE_AUTH_EMULATOR_HOST=localhost:9099
export FIREBASE_DATABASE_EMULATOR_HOST=localhost:9000
The GCLOUD_PROJECT id could be your project id, but apparently any id will work as long as it is a well-formed Firebase project id, so these same environment variables can be used to test all your projects on the Firebase emulators. Try setting these environment variables first for emulator use before you try any of the other solutions.
Another oddity is firebase emulators:start needs these environment variables set, but firebase emulators:exec sets them automagically. When you are in a CI scenario :exec is the better choice, but when actively running tests as you write code having the emulators stay up and running with :start is a faster loop and you'll need the environment variables for it to work properly. By having these in environment variables, your code won't need to change at all when deployed to the Cloud.
I just had this issue and fixed it with
firebase login

Transfer Firebase Storage Bucket between projects

I am attempting to copy the contents of a folder in one Firebase project's Storage bucket to the storage bucket of another Firebase project.
I have been following the firestore docs and this SO question.
Both projects have the necessary permissions granted to each other's service accounts.
Here is what I have done:
When attempting to transfer files from a folder in the default bucket of Project-A to the default bucket of Project-B using the cloud shell terminal, I first set the project to 'Project-A'. I then ran gcloud beta firestore export gs://[PROJECT_A_ID] --collection-ids=[FOLDER_TO_TRANSFER] --async. This succeeds and creates a folder called "2019-08-26T21:23:23_26014/" in Project-A. This folder contains some metadata.
Next, I tried beginning the import by setting the project to Project-B and running gcloud beta firestore import gs://[PROJECT_A_ID]/2019-08-26T21:23:23_26014
This completes and the logs display this message:
done: true
metadata:
  '@type': type.googleapis.com/google.firestore.admin.v1beta1.ImportDocumentsMetadata
  collectionIds:
    - [FOLDER_TO_TRANSFER]
  endTime: '2019-08-26T21:25:56.794588Z'
  inputUriPrefix: gs://[PROJECT_A_ID]/2019-08-26T21:23:23_26014
  operationState: SUCCESSFUL
  startTime: '2019-08-26T21:25:19.689430Z'
name: projects/[PROJECT_B]/databases/(default)/operations/[SOME_ID_STRING]
response:
  '@type': type.googleapis.com/google.protobuf.Empty
However, the Project-B storage bucket doesn't have any new files or folders. It looks like the import did nothing. Am I missing something?
You can create a transfer job in the GCP Console. You can specify source/destination buckets from different projects as long as you have access permissions. You can specify the folder by setting "Specify file filters":
https://console.cloud.google.com/storage/transfer
You can also use the gsutil tool, which is part of gcloud, to move or copy your objects to another bucket.
So your default buckets would be gs://[PROJECT_A_ID].appspot.com and gs://[PROJECT_B_ID].appspot.com. Let's say you wanted to copy over the contents of my_directory:
gsutil cp -r gs://[PROJECT_A_ID].appspot.com/my_directory gs://[PROJECT_B_ID].appspot.com
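For larger folders you can add the top-level -m flag (gsutil -m cp -r ...) to copy objects in parallel, which can speed this up considerably.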
