I have an images directory in Firebase Storage and I am trying to download all the files in that directory from the console. It gives me the option to select all files and a download button appears but when I click it only 1 image is downloaded.
Is there a way to download all the images via the Firebase Console?
You can use the gsutil tool to download all files from your Firebase Storage bucket. Use the cp command:
gsutil -m cp -r gs://{{bucket_url}} {{local_path_to_save_downloads}}
-m performs a parallel (multi-threaded/multi-processing) copy; use it when you want to download a large number of files in parallel
-r copies an entire directory tree recursively
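For example, a download of an images folder might look like this (the bucket name and local path are hypothetical; Firebase default buckets usually have the form <project-id>.appspot.com):
# copy everything under images/ into a local downloads folder
gsutil -m cp -r gs://my-project.appspot.com/images ./downloads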
Just had the same problem. Turns out it's a known issue.
Here's the response from Firebase support:
This is a known bug when trying to download directly on the Storage
Console. Our engineers are working on getting it fixed, however I
can't provide any timelines right now. We apologize for the
inconvenience this may have caused you. For the time being, you may
download the file by right-clicking the image previewed, then choose
"Save image as...".
Related
I am currently working on a personal application where I upload documents/pictures/videos from my phone to Cloud Storage. During this time my computer sitting at home is constantly running a shell script that waits for a new document to be uploaded to Cloud Storage; after it finds an uploaded file, it downloads it, does some work to it, and then deletes it.
I can figure out how to upload and connect my application to Firebase, but I am not sure if it's possible for a shell script to do the remaining work.
Should I look into some other service to do this, or another method?
thank you for your help!
You can use a command line program called gsutil to upload and download files from a Cloud Storage bucket. This should be easy to use from a shell script.
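As a minimal sketch (not a hardened solution), assuming a hypothetical bucket prefix and a hypothetical process_file command, the watcher could poll with gsutil like this:
#!/bin/bash
# Poll a Cloud Storage prefix, download each file, process it, then delete it from the bucket.
# BUCKET and process_file are placeholders; no nested folders are assumed under the prefix.
BUCKET="gs://my-project.appspot.com/uploads/"
mkdir -p ./incoming
while true; do
  for object in $(gsutil ls "$BUCKET" 2>/dev/null); do
    gsutil cp "$object" ./incoming/                      # download the uploaded file
    ./process_file "./incoming/$(basename "$object")"    # do some work to it
    gsutil rm "$object"                                  # delete it from the bucket
  done
  sleep 60                                               # wait before polling again
done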
I am using Google Colaboratory & GitHub.
I create a new Google Colab notebook and clone my GitHub project into it using a simple !git clone <github_link> in the notebook.
Now, I have a Jupyter notebook in my github project that I need to run on Google Colab. How do I do that?
There is no real need to download the notebook. If you already have your notebook in a GitHub repo, the only thing you need to do is:
Open your notebook file on GitHub in any browser (so the URL ends in .ipynb).
Change the URL from https://github.com/full_path_to_your_notebook to https://colab.research.google.com/github/full_path_to_your_notebook
And that should work.
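For example (user, repo, and file names here are hypothetical), a notebook at
https://github.com/some-user/some-repo/blob/master/notebook.ipynb
opens in Colab at
https://colab.research.google.com/github/some-user/some-repo/blob/master/notebook.ipynb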
You can upload the notebook to google drive first, then open it from there.
go to drive.google.com
go into directory “Colab Notebooks”
choose “New” > File upload
After uploading, click the new file
Choose “Open with Colaboratory” at the top
The two most practical ways are both through the Google Drive web interface.
The first method is what @Korakot Choavavanich described.
The advantage of this method is that it provides a search window to search for your file in your Google Drive storage.
The second method is even more convenient - and maybe more appropriate for your case:
In the Google Drive web interface, you navigate to the folder where your file is located - in your case, within the cloned GitHub repository.
Then:
right-click on the file | Open with | Colaboratory
Your file is then converted into a Colab notebook automatically (it takes at least half a minute for that).
The advantage of this method is that you can create the Colab file directly in the folder.
My tip is to create a copy of the original Jupyter file (I added "COLABO" to the file name), as you will need different code to sync your Google Drive and save files than in a local Jupyter notebook.
One way could be to connect your Google Drive with the Colaboratory notebook using the following link:
Link to images within google drive from a colab notebook
After that, you can download your GitHub repo into your Google Drive location. Then browse through your Google Drive and open the notebook using Colaboratory itself.
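A minimal sketch of that approach in a single Colab cell (the repository URL is hypothetical, and the Drive folder may appear as MyDrive or "My Drive" depending on the Colab version):
from google.colab import drive
drive.mount('/content/drive')                            # authorize and mount your Google Drive
%cd /content/drive/MyDrive                               # hypothetical target folder inside Drive
!git clone https://github.com/some-user/some-repo.git    # clone the repo so it lives in Drive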
import sys, os
# make modules from the cloned 'models' repo importable (here: the object_detection code)
sys.path.append('models/research')
sys.path.append('models/research/object_detection')
This helped me. I was also looking for it, and found it in this Colab notebook:
https://colab.research.google.com/drive/1EQ3Lt_ez-oKTtVMebh6Tm3XSyPPOHAf3#scrollTo=oC-_mxCxCNP6
The better option I have found, if you have cloned the GitHub repo containing the .ipynb file, is to copy the code from each cell and execute it in Colab. By doing this you won't face any difficulties.
Upload the .ipynb file directly in Colab. Just go to Colab; in the tabs above there should be an Upload option. Choose the file and upload it there.
It may be a new feature not mentioned in other answers.
But right now Colab allows running Jupyter notebooks directly from GitHub, even from private repos.
Login to your google account
Access colab.research.google.com
Select the GitHub tab.
Choose include private repository if needed.
Go through the authentication process in the newly opened window
Select from your repos and notebooks
And clone your repo from inside the opened notebook.
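If you also need the rest of the repository's files next to the opened notebook, a typical first cell could be the following (the repository URL is hypothetical):
!git clone https://github.com/some-user/some-repo.git    # bring the repo's files into the Colab VM
%cd some-repo                                            # work from inside the clone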
Can someone please advise how to set up a backup for files in Firebase Storage? I am able to make a backup of the database, but I am not sure how to set up a regular backup for files (I have images) in Firebase Storage.
How to make local backup of Firebase Storage
There is no built-in method via Firebase. However, since Firebase uses Google Cloud Storage behind the scenes for Firebase Storage, it's possible to use the gsutil tool.
Prerequisites
Make sure Python (2.7.9+) is installed on your machine: python -V
Go to the Google Cloud SDK page and follow the directions to download and install the Google Cloud SDK on your OS.
Steps
At the end of the Google Cloud SDK installation you should have run gcloud init. This will ask you to select your project and authenticate you. Since Firebase uses Google Cloud Platform behind the scenes, your Firebase project should be available as a choice.
In order for the Google Cloud utilities to download the files that were uploaded with Firebase permissions, you need to give your account Firebase privileges. Go to the IAM page and select the email address you signed into gcloud init with. In the list of available permissions you need to select Firebase Rules System from the Other category.
Get your Google Storage URL from the Firebase Storage page in the dashboard (towards the top). It should look something like this: gs://<bucket_name>
In the command line on your local machine, navigate to the folder you want to make the local backup in. Make sure you are in the folder you want, as the following command will download all files right into the current folder.
Run the gsutil command: gsutil -m cp -R gs://<bucket_name> .
-m enables multithreading for faster downloads if you have many files.
cp is the copy command
-R is recursive. If enabled it will download all files and folders in the specified tree.
You're done! This will run for some time depending on the size of your storage.
This can also be used to make a copy (backup) to another Google Cloud Storage bucket, AWS, etc.
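For instance, gsutil's rsync command can mirror one bucket into another in a single step (both bucket names below are hypothetical, and the backup bucket must already exist):
gsutil -m rsync -r gs://my-project.appspot.com gs://my-project-backup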
Use Google Cloud Transfer Service.
Select your current project
Create Transfer Job
Select source (storage bucket url)
Select destination (click browse and create new bucket)
Use created bucket URL as destination
Configure transfer settings (This is where you can schedule how often the backup runs.)
Click "Create"
If you follow the wizard in the link it will guide you through pretty easily.
There is no built-in backup feature in Cloud Storage for Firebase.
But since it is built on top of Google Cloud Storage, any backup solution for GCS can work for Firebase too. Typically this will involve creating a separate bucket that serves as the backup target for the regular bucket where you store/read files.
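One simple sketch of such a solution, assuming gsutil is installed and authenticated on the backup machine and using hypothetical paths, is a regularly scheduled gsutil rsync, e.g. via a crontab entry:
# mirror the live bucket into a local backup folder every night at 03:00
0 3 * * * gsutil -m rsync -r gs://my-project.appspot.com /backups/firebase-storage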
Is it possible to compress a file or directory in Google Cloud Storage without downloading it first and re-uploading it?
I think I need a tool similar to http://googlegenomics.readthedocs.org/en/latest/use_cases/compress_or_decompress_many_files/
thank you.
No. There is no way to ask GCS to directly compress or decompress objects entirely within GCS. You can certainly copy them elsewhere in the cloud (GCE, for instance) and operate on them there, or you could download an uncompressed object as a compressed object simply by using the Accept-Encoding: gzip header, but, again, not without taking it out of GCS in some fashion.
From https://console.cloud.google.com/storage/browser/your-bucket open a Cloud Shell (top right of the page), then:
gsutil cp -r gs://your-bucket .               # download the bucket contents
zip -r your-bucket.zip ./your-bucket          # compress them locally in Cloud Shell
gsutil cp your-bucket.zip gs://your-bucket    # upload the archive back to the bucket
Strictly speaking, it is not happening in place within your storage, but it does all happen within the cloud.
I created a Rackspace account earlier today so that the CDN can serve my OpenCart images from Rackspace.
I have created a container where I will upload over 500,000 images, but I would prefer to upload them as a compressed file, which feels more flexible.
If I upload all the images in a compressed file, how do I extract it once it is in the container? And what compression file types would work?
The answer may depend on how you are attempting to upload your file/files. Since this was not specified, I will answer your question using the CLI from a *nix environment.
Answer to your question (using curl)
Using curl, you can upload a compressed file and have it extracted using the extract-archive feature.
$ tar cf archive.tar directory_to_be_archived
$ curl -i -XPUT -H'x-auth-token: AUTH_TOKEN' https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar -T ./archive.tar
You can find the documentation for this feature here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/Extract_Archive-d1e2338.html
Recommended solution (using Swiftly)
Uploading and extracting that many objects using the above method might take a long time to complete. Additionally if there is a network interruption during that time, you will have to start over from the beginning.
I would recommend instead using a tool like Swiftly, which will allow you to concurrently upload your files. This way, if there is a problem during the upload, you don't have to re-upload objects that have already been successfully uploaded.
An example of how to do this is as follows:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put container_name -i images/
If there is a network interruption while uploading, or you have to stop/restart uploading your files, you can add the "--different" option after the 'put' in the above command. This will tell Swiftly to HEAD the object first and only upload if the time or size of the local file does not match its corresponding object, skipping objects that have already been uploaded.
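For example, resuming an interrupted upload could look like this (same placeholder credentials and container name as above):
swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
    --auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
    --concurrency=10 put --different container_name -i images/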
Swiftly can be found on github here: https://github.com/gholt/swiftly
There are other clients that possibly do the same things, but I know Swiftly works, so I recommend it.