I'm trying to understand what eu.artifacts.%PROJECT NAME%.appspot.com is. It's currently taking up 800 MB of storage out of my daily 5 GB limit. It contains only files of type application/octet-stream. This bucket was created automatically, and the file path is eu.artifacts....appspot.com/containers/images. The two heaviest files there weigh as much as 200 MB and 130 MB. I tried deleting the bucket, but it was automatically created again. Users can upload pictures on my website, but the bucket containing all the user images currently only takes up about 10 MB.
So my question is: what is this bucket for, and why does it weigh so much?
firebaser here
If you are using Cloud Functions, the files you're seeing are related to a recent change in how the runtime (for Node 10 and up) is built.
Cloud Functions now uses Cloud Build to create the runtime (for Node 10 and up) for your Cloud Functions. And Cloud Build in turn uses Container Registry to store those runtimes, which stores them in a new Cloud Storage bucket under your project.
For more on this, also see this entry in the Firebase pricing FAQ on Why will I need a billing account to use Node.js 10 or later for Cloud Functions for Firebase?
Also see this thread on the firebase-talk mailing list about these artifacts.
Update: some other answers suggest deleting artifacts from the Storage buckets, and even setting up lifecycle management on them to do so automatically. This leads to dangling references to those artifacts in the Container Registry, which breaks future builds.
To safely get rid of the artifacts, delete the container from the Container Registry console (it's under the gcf folder) or with a script. That will then in turn also delete the artifacts from your Storage bucket.
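For reference, a minimal sketch of that scripted cleanup, assuming the gcloud CLI is installed and that the function build images live under a gcf/<region> path in Container Registry (the registry host, region, and names below are placeholders and vary per project):
# list the function build images in the gcf folder
gcloud container images list --repository=eu.gcr.io/<your-project-id>/gcf/europe-west1
# delete one image (and its artifacts) by digest
gcloud container images delete eu.gcr.io/<your-project-id>/gcf/europe-west1/<function-name>@<digest> --force-delete-tags --quiet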
Since version 9.14 of the CLI, the firebase deploy process automatically cleans up its container images after a deploy. So if you upgrade to the latest version, you should no longer get additional artifacts in your storage buckets.
I've consulted GCP support, and here are a few things:
Cloud Functions caused the surge in storage usage
Since these artifacts are not stored in the default bucket, you'll be charged for them even if your total bytes stored have not reached the free-tier limit
Remove the artifact bucket at https://console.cloud.google.com/storage/browser. According to the support staff:
Regarding the artifacts bucket, you can actually get rid of them, as they are storing previous versions of the function. However, I do not recommend deleting the "gcf-sources..." bucket(s), as it contains the current image, so deleting this bucket would mess up your function.
I tried removing it entirely, and so far it is not causing trouble. I'll update if it breaks things later.
Edit 2020-11-18: see the comment below; you might need to keep the bucket while removing all of the content in it.
Adding to @yo1995
I consulted with Firebase Support and they confirmed that the artifacts bucket should not be deleted. Basically the artifacts are used to help build the final image to be stored in the "gcf-sources" bucket.
To quote them directly
"you are free to delete the contents in "XX.artifacts", but please leave the bucket untouched, it will be used in the following deployment cycles."
There might be some unintended behaviour if you delete the artifacts bucket entirely.
Also "The team is working to clean up this bucket automatically, but there are some restrictions that they need to solve before publishing the solution."
For the time being, I set the bucket to auto-delete files older than 1 day.
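For reference, a sketch of how that auto-delete could be configured with gsutil, assuming a 1-day age and an eu.artifacts bucket (the bucket name is a placeholder); keep in mind the warning above that lifecycle rules on this bucket can break future builds:
# write a lifecycle rule that deletes objects older than 1 day
cat > lifecycle.json <<'EOF'
{ "lifecycle": { "rule": [ { "action": { "type": "Delete" }, "condition": { "age": 1 } } ] } }
EOF
# apply it to the artifacts bucket
gsutil lifecycle set lifecycle.json gs://eu.artifacts.<your-project-id>.appspot.com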
Adding to @yo1995's response, you can delete the artifacts in the bucket without needing to go into GCP. Staying in Firebase, go to Storage, then "Add a Bucket". From there, you will see the option to import the GCP and artifact buckets. Next, you can delete the artifacts in those buckets accordingly.
Per some comments received, it's important not to delete the bucket. Rather, delete the artifacts in the bucket only!
EDIT early 2022: This whole answer is now moot. It may have worked in the past, but the actual root cause of the problem is now fixed in the Firebase CLI.
How to reduce storage
There is a great answer explaining the issue, but actually fixing it requires some further digging.
To help future developers cut right to the chase, here is the result you should see after adding the following rules to your project in GCP:
The orange line is the us-artifacts.<your-project>.appspot.com bucket.
Steps to fix the issue
Navigate to https://console.cloud.google.com/
Open the GCP project that corresponds to the Firebase project
In the menu, choose Storage -> Browser
Click on the offending us-artifacts.<your-project>.appspot.com bucket
Go to the 'Lifecycle' tab and add a life span of 3 days
Add a rule
Delete Object
Age, 3 Days
NB: Results will not appear on the usage graph until about 24 hours later
Caveat
Firebase uses containers that back-reference previous containers, so if you set a 3-day lifespan and your firebase deploy of functions starts failing, you will need to update the local name of your function to include a version, and then either specify a build flag to delete old versions, remove them from your firebase.json, or manually delete the obsolete functions.
Using versioned API type functions
In your entry point, assuming index.ts, and assuming you have initialised Firebase with
admin.initializeApp(functions.config().firebase)
import * as functions from 'firebase-functions'
import * as admin from 'firebase-admin'
// define the app as a cloud function called APIv1 build xxxxxx
export const APIv1b20201202 = functions.https.onRequest(main)
where main is the name of your app
and in your firebase.json
...
"hosting": {
  "public": "dist",
  "ignore": ["firebase.json", "**/.*", "**/node_modules/**", "**/tests/**"],
  "rewrites": [
    {
      "source": "/api/v1/**",
      "function": "APIv1b20201202"
    }
  ]
},
...
Or, to Manually Update
# Deploy new function called APIv11
$ firebase deploy --only functions:APIv11
# Wait until deployment is done; now both APIv11 and APIv10 are running
# Delete APIv10
$ firebase functions:delete APIv10
Firebase said they have released a fix (as of June 2021):
https://github.com/firebase/firebase-tools/issues/3404#issuecomment-865270081
Fix is in the next version of firebase-tools, which should be coming today.
To fix:
Run npm i -g firebase-tools.
Browse your containers in Cloud Storage at https://console.cloud.google.com/storage/browser/ (look for a bucket named gcf-sources-*****-us-central1).
Functions deleted via firebase deploy --only functions seem to have their artifacts removed automatically, but if you delete them through the UI, their artifacts remain.
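As an aside, a minimal sketch of deleting a function through the CLI instead of the console UI, so the deploy tooling can clean up its artifacts as well (the function name and region below are placeholders):
# delete the function via the CLI rather than the console
$ firebase functions:delete myFunction --region us-central1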
After some research and emailing with the Firebase team, this is what was suggested to me:
We are aware that Cloud Build is not automatically deleting old artifacts, so its size keeps on increasing; as a workaround I'd recommend deleting the files inside the bucket in order to reduce any possible charges.
You can delete the files in the mentioned buckets by going to the GCP console (use the same credentials as the Firebase Console) -> select the correct project -> from the upper-left corner menu select Storage -> Browser.
You will see all the buckets that belong to your project, click on the bucket you prefer, and you can delete the files from there.
One other option that you may try is managing the bucket's object lifecycles. There is an option to delete objects when they meet all conditions specified in the lifecycle rule, here is a link with one example about this option. In this way, the bucket objects will be deleted automatically.
I have created a configuration file I named storage_artifacts_lifecycle.json with contents:
{
  "lifecycle": {
    "rule": [
      {
        "action": { "type": "Delete" },
        "condition": {
          "age": 21
        }
      }
    ]
  }
}
I configure my storage lifecycle with the command:
gsutil lifecycle set ./firebase/storage_artifacts_lifecycle.json gs://us.artifacts.${MY_PROJECT_ID}.appspot.com
and I validate the result after running it with:
gsutil lifecycle get gs://us.artifacts.${MY_PROJECT_ID}.appspot.com
Hope this helps some!
Adding to @d-_-b's answer:
As of 7 July 2022, this is now announced on the Firebase support page as well:
If you are seeing unexpected amounts of Cloud Storage used, this is
likely caused by a known issue with the cleanup of artifacts created
in the function deployment process.
This issue is now resolved; if you are still seeing unexpected usage,
update the Firebase CLI and re-deploy your Cloud Functions.
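In practice that amounts to something like the following, a sketch assuming a global npm install of the CLI:
# update the CLI, then redeploy your functions so the cleanup fix takes effect
$ npm install -g firebase-tools
$ firebase deploy --only functions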
I did a bit of research on the topic and found the optimal solution for me: a script that I run before each deploy of my Firebase functions. The script scans my container images and:
Keeps the ones with the latest tag.
Deletes all the images except the last two.
This approach is semi-automated. The storage only grows when I deploy, so it works really well for me.
The script is written in JavaScript for an environment with Node and the gcloud CLI available.
const spawn = require("child_process").spawn;
const KEEP_AT_LEAST = 2;
const CONTAINER_REGISTRIES = [
"gcr.io/<your project name>",
"eu.gcr.io/<your project name>/gcf/europe-west3"
];
async function go(registry) {
console.log(`> ${registry}`);
const images = await command(`gcloud`, [
"container",
"images",
"list",
`--repository=${registry}`,
"--format=json",
]);
for (let i = 0; i < images.length; i++) {
console.log(` ${images[i].name}`);
const image = images[i].name;
let tags = await command(`gcloud`, [
"container",
"images",
"list-tags",
image,
"--format=json",
]);
const totalImages = tags.length;
// do not touch `latest`
tags = tags.filter(({ tags }) => !tags.find((tag) => tag === "latest"));
// sorting by date
tags.sort((a, b) => {
const d1 = new Date(a.timestamp.datetime);
const d2 = new Date(b.timestamp.datetime);
return d2.getTime() - d1.getTime();
});
// keeping at least X number of images
tags = tags.filter((_, i) => i >= KEEP_AT_LEAST);
console.log(` For removal: ${tags.length}/${totalImages}`);
for (let j = 0; j < tags.length; j++) {
console.log(
` Deleting: ${formatImageTimestamp(tags[j])} | ${tags[j].digest}`
);
await command("gcloud", [
"container",
"images",
"delete",
`${image}@${tags[j].digest}`,
"--format=json",
"--quiet",
"--force-delete-tags",
]);
}
}
}
function command(cmd, args) {
return new Promise((done, reject) => {
const ps = spawn(cmd, args);
let result = "";
ps.stdout.on("data", (data) => {
result += data;
});
ps.stderr.on("data", (data) => {
result += data;
});
ps.on("close", (code) => {
if (code !== 0) {
console.log(`process exited with code ${code}`);
}
try {
done(JSON.parse(result));
} catch (err) {
done(result);
}
});
});
}
function formatImageTimestamp(image) {
const { year, month, day, hour, minute } = image.timestamp;
return `${year}-${month}-${day} ${hour}:${minute}`;
}
(async function () {
for (let i = 0; i < CONTAINER_REGISTRIES.length; i++) {
await go(CONTAINER_REGISTRIES[i]);
}
})();
It runs the following commands:
# finding images
gcloud container images list --repository=<your repository>
# getting metadata
gcloud container images list-tags <image name>
# deleting images
gcloud container images delete <image name>@<digest> --quiet --force-delete-tags
A blog post describing my findings is available here: https://krasimirtsonev.com/blog/article/firebase-gcp-saving-money
As an alternative, you can create a lifecycle rule to delete the objects inside the folder. Set the age to 1 day, so it will delete all objects in the folder that are more than 1 day old.
(Screenshots: lifecycle rule, set condition.)
I am experiencing strange behavior with my Firestore account. In the console I select a collection, then click Delete all documents, and it indicates that all have been deleted successfully. When I refresh the data, the collection reappears with all of the deleted data. I have no service doing this anywhere and am wondering what may cause it. Is there a solution?
Alongside this, any changes I make to document fields in the console are successful but are lost after refreshing.
I experienced similar behaviour today that I hadn't seen before. I deleted documents from the Firestore console but my app was still fetching them successfully. Now, about an hour after witnessing that behaviour, everything is back to normal and my console deletes are immediately seen on the device.
I'm thinking it was a glitch in Firestore; after all, it's still in beta.
I tried all possible means; in the end I had to back up the whole Firestore DB to JSON files, delete the project from the console, and create a new one. I think it's an issue with Firestore, since I created the project before the launch of Firestore, which may have required creating a new one.
This happened to me today (7/1/18). After completely logging out of firebase and then logging back in, I was able to delete documents and have them be permanently removed.
It happens because your app is reading from the local cache instead of the actual server data. You'll have to disable the cache and re-enable the network.
For iOS:
import FirebaseFirestore

// disable the local cache
let settings = FirestoreSettings()
settings.isPersistenceEnabled = false
let db = Firestore.firestore()
db.settings = settings

// re-enable the network and run your queries inside this closure
Firestore.firestore().enableNetwork { (error) in
    // Do online things
}
For Android/Java:
// disable the local cache
FirebaseFirestoreSettings settings = new FirebaseFirestoreSettings.Builder()
        .setPersistenceEnabled(false)
        .build();
db.setFirestoreSettings(settings);

// re-enable the network and run your queries in the completion listener
db.enableNetwork()
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                // Do online things
                // ...
            }
        });
You can read more at https://firebase.google.com/docs/firestore/manage-data/enable-offline