I have a folder in Firebase Storage where I upload users' images, but I can't delete this folder.
Storage.storage().reference().child("folder").delete();
I get error code 404 with the message: "Not found. Could not delete object."
EDIT:
You can use the new List API to list the files in Storage that share a common prefix. The prefix is effectively the path where the objects live. You will then have to iterate over each object the API returns and delete each one individually. Also read this blog post about the API.
ORIGINAL ANSWER:
There is currently no way to programmatically delete an entire folder in Cloud Storage with the Firebase SDK. It turns out that, in Cloud Storage, there are not even any "folders" at all. A storage bucket is just a collection of objects whose names happen to look like file paths; it is not a real filesystem in this respect.
If you want to delete all the files under a certain path, you will have to find all their names and remove them individually. Typically, applications will store the paths of known objects in Realtime Database for this reason.
If you want to delete all objects under a path from the command line with gsutil, read the docs for "gsutil rm" (for example, gsutil rm -r gs://your-bucket/your-folder removes everything under that prefix).
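This is not something the Firebase client SDK can do for you, but as an illustration of the "list, then delete one by one" pattern, here is a rough server-side sketch using the Cloud Storage Python client. The bucket name and prefix are hypothetical; adjust them to your project.

from google.cloud import storage

# Hypothetical bucket and "folder" prefix; adjust to your project.
client = storage.Client()
bucket = client.get_bucket('my-app.appspot.com')

# There is no folder object to delete, so delete every object under the prefix.
for blob in bucket.list_blobs(prefix='folder/'):
    blob.delete()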
Related
I'm using a GCS bucket for WordPress (the wp-stateless plugin).
After creating and uploading a media file to a bucket, I copy it to another bucket (a duplicate). But the generation number of each object changes (it looks random).
My question is: how can I keep the generation numbers in the destination bucket the same as in the source bucket?
Thanks in advance.
Basically, there is no official way of keeping the same version and generation numbers when copying files from one bucket to another. This is working as intended (WAI), and it is intuitive: the generation number refers to this particular object (which lives in this bucket), so when you copy it to another bucket it is not the same object (it is a copy) and it cannot keep the same generation number.
I can think of a workaround: keep your own record of the objects' versions somewhere and then make an organized copy through the API. This would mean dumping the bucket, but you would need a list of all the objects and their versions and then add them in sequential order (which sounds like a lot of work). You could also keep your own versioning (or mirror the existing versioning) in the metadata of each object.
If your application depends on the objects' versioning, I would recommend using custom metadata. If you do your own versioning with custom metadata, that metadata is preserved when you copy the objects to a new bucket.
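As a rough sketch of that idea with the Cloud Storage Python client (the bucket names, object name, and the my-version metadata key are all hypothetical):

from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket('source-bucket')
dst_bucket = client.bucket('destination-bucket')

blob = src_bucket.get_blob('uploads/2021/01/photo.jpg')

# Record your own version label in custom metadata before copying.
blob.metadata = {**(blob.metadata or {}), 'my-version': str(blob.generation)}
blob.patch()

# The copy gets a brand-new generation number in the destination bucket,
# but the custom metadata (including my-version) is carried over with it.
src_bucket.copy_blob(blob, dst_bucket, blob.name)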
There is already a feature request about this, but it notes that this is currently infeasible.
However, you can raise a new feature request here.
I'm still new to Flutter and Firebase. I understand how to store and retrieve images and display them in the app.
But how do I go about synchronizing files from the app's local asset folder with an asset folder stored in Firebase Storage? My intention is to check the cloud folder for a newly uploaded image (such as an icon) and download it to the app's local folder. If a file is removed from cloud storage, it should also be removed from the local assets folder, mirroring it.
I need a way to compare the local AssetManifest.json to the one in Firebase Storage. I just need a little direction/an algorithm to start with. Thanks for the help!
There is nothing specific built into Cloud Storage's Firebase SDK for this, so you'll have to build it in your own application code.
Using only the Cloud Storage for Firebase API
If you're just using Cloud Storage, you'll have to:
List all files that you're interested in from Storage.
Loop over the files.
Get the metadata for each file and check when the file was last updated.
Compare that to when you wrote the local file.
Download the file if it is new or modified.
This approach will work, but it requires quite a few calls to the Storage API, because there is no API that returns only the files modified since a specific date/time.
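A rough sketch of that loop, using the Cloud Storage Python client for brevity (the bucket name, prefix, and local directory are hypothetical; a Flutter app would use the plugin's equivalent list and metadata calls):

from pathlib import Path
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-app-assets')
local_dir = Path('assets_cache')
local_dir.mkdir(exist_ok=True)

for blob in bucket.list_blobs(prefix='icons/'):
    local_path = local_dir / Path(blob.name).name
    remote_updated = blob.updated.timestamp()  # when the object was last updated
    local_mtime = local_path.stat().st_mtime if local_path.exists() else 0

    if remote_updated > local_mtime:  # new or modified since we last wrote it
        blob.download_to_filename(str(local_path))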
Storing metadata in a cloud database
You could also consider storing the metadata in a cloud database, like Firebase's Realtime Database or Cloud Firestore, and then use the query capabilities of that database to retrieve only files that were modified since your device last synchronized.
The recipe then becomes:
Determine when we last synchronized, which is a value you'll want to store in local storage/shared preferences.
Execute a query to the database to determine which files were added/modified since then.
Loop over the query results and...
Download each file that was modified.
Here, only steps 2 and 4 make calls to external APIs, so it is likely to be faster and cheaper to execute (but more work for you to write initially).
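A minimal sketch of steps 2-4 with the Firestore and Cloud Storage Python clients (the collection name, field names, bucket, and last-sync helpers are all hypothetical; a Flutter app would run the same query with the cloud_firestore plugin):

from google.cloud import firestore, storage

db = firestore.Client()
bucket = storage.Client().bucket('my-app-assets')

last_sync = load_last_sync_time()  # hypothetical helper: reads a datetime from local storage/shared preferences

# Step 2: fetch only the records for files added or modified since the last sync.
changed = db.collection('assets').where('updatedAt', '>', last_sync).stream()

# Steps 3-4: loop over the results and download just those files
# (assumes an assets_cache directory already exists locally).
for doc in changed:
    path = doc.get('storagePath')
    bucket.blob(path).download_to_filename(f"assets_cache/{doc.id}")

save_last_sync_time()  # hypothetical helper: persists the new sync time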
I am trying to move files into separate folders in Firebase Storage once they have been uploaded. As it turns out, you cannot achieve this with the JavaScript Web Client SDK for Storage. However, it appears that you can do so with the Admin SDK for Storage using Firebase Functions. So that is what I am trying to do. I understand that you need to first download a file into your Firebase Function and then re-upload it into a new folder in Storage.
To download a file, I need to pass its reference from the client, and this is where it gets confusing to me. I am currently getting all the uploaded files on the client via the listAll() function, which returns items and prefixes. I am wondering whether I can use either the items or the prefixes to download the files in Firebase Functions. Alternatively, I can pass the URLs. Either way, which method do I use to get and download the files in Functions afterwards?
I know of admin.storage.object as explained in https://firebase.google.com/docs/storage/extend-with-functions#trigger_a_function_on_changes. However, does it handle multiple files? In other words, the object, as I understand it, is one file that was uploaded to Storage, and you can use its attributes such as object.bucket or object.name to access more information. But what if multiple files are uploaded at the same time: does it handle them one by one? Also, if I am passing the references or URLs of the files that need to be downloaded from the client, is admin.storage.object the right choice? It seems to simply process all the files uploaded to Storage, instead of receiving any references from the client.
Further, there is a description of how to download a file (https://firebase.google.com/docs/storage/extend-with-functions#example_image_transformation) which is this code: await bucket.file(filePath).download({destination: tempFilePath});
I understand that the filePath is basically the name of the file that is already in Storage (e.g. /someimage). But what if there are other files with the same name? Might the wrong file be downloaded? And how do I make sure that the filePath is the file I passed from the client?
Let me know what your thoughts are and whether or not I am heading in the right direction. If you include code in your answer, please write it in JavaScript for the Web. Thank you.
Here are some points that could help:
In GCP Storage there are technically no folders; GCS emulates a directory structure by using / in the names of objects.
When you set up a Cloud Function triggered by GCS object changes, each object change is an event, and each event triggers one invocation of the function (you might have a bucket for unprocessed files that triggers the function, and move the files to a different bucket once they are processed).
You might consider using the REST API to move/copy/rename the objects without having to download them (see the sketch after these points).
As a side note, the question is a little too broad; hopefully these points help clarify things for you.
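As an illustration of the third point, here is a rough sketch of a server-side "move" that never downloads the file, shown with the Cloud Storage Python client (bucket and object names are hypothetical; in a Node.js Cloud Function the @google-cloud/storage library exposes equivalent copy() and move() methods on File):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-app.appspot.com')

blob = bucket.blob('uploads/photo.jpg')

# Copy the object under its new "folder" name, then delete the original.
# (bucket.rename_blob(blob, 'processed/photo.jpg') wraps the same two steps.)
bucket.copy_blob(blob, bucket, 'processed/photo.jpg')
blob.delete()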
I have an Azure Storage Account with a File Share for backup files. I want to keep 5 days of files in that file share and automatically delete anything older than 5 days.
I'm trying to use a Logic App to perform this task, but the LastModified value doesn't pull the last-modified date off the file; I just get Null.
Yes, that is a known problem and is mentioned here. Instead, you can get the "LastModified" value from "Get File Metadata" for each "value" after you call "List Files".
I have a long list of files in Firebase Storage, which I have uploaded from a Python script.
Many of those files have names like these:
foo_8346gr.msb
foo_8333ys.msb
foo_134as.mbb
...
I know there is no programmatic way to delete a folder in Storage (they are not even folders), but how could I remove all files starting with "foo_" programmatically, from Python?
You can use the Cloud Storage List API to find all files with a certain prefix, then delete them. That page has code samples for a variety of languages, including Python. Here's how you list files with a prefix:
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)

# List every object whose name starts with the given prefix.
blobs = bucket.list_blobs(prefix=prefix, delimiter=delimiter)

print('Blobs:')
for blob in blobs:
    print(blob.name)

# "Sub-folder" prefixes are only reported when a delimiter is passed.
if delimiter:
    print('Prefixes:')
    for prefix in blobs.prefixes:
        print(prefix)
You will have to add the bit of code that deletes the file if you believe it should be deleted. The documentation goes into more detail about the List API.
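For example, assuming the "foo_" prefix from the question and a hypothetical bucket name, the delete step could look like this (deletes are permanent, so double-check the prefix first):

from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket('my-bucket')  # hypothetical bucket name

# Delete every object whose name starts with 'foo_'.
for blob in bucket.list_blobs(prefix='foo_'):
    print(f'Deleting {blob.name}')
    blob.delete()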
Firebase provides a wrapper around Cloud Storage that allows you to directly access the files in storage from the client, and that secures access to those files. Firebase does not provide a Python SDK for accessing these files, but since it is built around Google Cloud Storage, you can use the GCP SDK for Python to do so.
There is no API to do a wildcard delete in there, but you can simply list all files with a specific prefix, and then delete them one by one. For an example of this, see the answer here: How to delete GCS folder from Python?