Firebase Push keys as Firebase Storage file names?

I noticed that to use Firebase Storage (Google Cloud Storage) I need to come up with a unique file name to upload a file.
I then plan to keep a copy of that Storage file location (https URL or gs URL) in the Firebase Realtime Database, where the clients will be able to read and download it separately.
However, I am unable to come up with unique filenames for the files located in Firebase Storage. Using a UUID generator might cause collisions in my case, since several clients are uploading images under a single Firebase root.
Here's my plan. I'd like to know if it will work.
Let's call my Firebase root "Chatrooms", which consists of the keys chatroom_1, chatroom_2, ... chatroom_n.
Under chatroom_k I have a node called "Content", which stores push keys that are uniquely generated by Firebase. Each push key represents a piece of content, but the actual content is stored in Firebase Storage, and a key called URL references the URL of the actual content. Can the filename for this content on Firebase Storage be the same randomized push key, as long as the bucket hierarchy reflects chatroom_k?

I am not sure if Storage provides a push() function, but here is a suggestion:
Request a push() at a location in your Firebase database and use the resulting key as the name.
In any case you will probably need to store this name in the database too.
In my application I have a node called "photos" where I store the information about the images I upload. I first do a push() to get a new key, and I use this key as the name of the uploaded image.
Is this what you need, or did I misunderstand something?
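For illustration, here is a minimal sketch of that flow using the Python Admin SDK; the same idea works from the client SDKs. The service-account path, database URL, bucket name, chatroom id and local file below are all placeholders, not names from the question:

import firebase_admin
from firebase_admin import credentials, db, storage

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-default-rtdb.firebaseio.com",  # placeholder
    "storageBucket": "your-project.appspot.com",  # placeholder
})

# 1. Reserve a push key under the chatroom's Content node.
content_ref = db.reference("Chatrooms/chatroom_k/Content").push()
key = content_ref.key

# 2. Reuse that key as the Storage object name, scoped to the chatroom.
bucket = storage.bucket()
blob = bucket.blob("chatroom_k/" + key)
blob.upload_from_filename("local_image.jpg")  # placeholder local file

# 3. Store the Storage location back on the database entry.
content_ref.set({"url": "gs://" + bucket.name + "/" + blob.name})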

So I had the same problem and I reached this solution:
I named the files with the date and time plus the user's UID, so it is almost impossible to get two files with the same name, and the names will be different every single time.
// Build a name from the current date/time followed by the signed-in user's UID
DateFormat dtForm = new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss.");
String date = dtForm.format(Calendar.getInstance().getTime());
String fileName = date + FirebaseAuth.getInstance().getCurrentUser().getUid();

// Upload under that name
FirebaseStorage
        .getInstance()
        .getReference("Folder/SubFolder/" + fileName)
        .putFile(yourData);
With this, the file names end up looking like "2022.09.12.11.50.59.WEFfwds2234SA11", for example.

Related

Using Airflow's S3Hook, is there a way to copy objects between buckets with different connection ids?

I'm copying files from an external company's bucket; they've sent me an access key/secret that I've set up as an env variable. I want to be able to copy objects from their bucket. I've used the code below, but that's for moving objects within the same connection; how do I use S3Hook to copy objects with a different conn id?
s3 = S3Hook(self.aws_conn_id)
s3_conn = s3.get_conn()
ext_s3 = S3Hook(self.ext_aws_conn_id)
ext_s3_conn = ext_s3.get_conn()

# this moves objects w. the same connection...
s3_conn.copy_object(Bucket="bucket",
                    Key=f'dest_key',
                    CopySource={
                        'Bucket': self.partition.bucket,
                        'Key': key
                    }, ContentEncoding='csv')
From my point of view this is not possible directly. First of all, a connection can only declare one URL endpoint.
Secondly, Airflow's S3Hook works with Boto3 under the hood, and your two connections will most likely have a different access_key and secret_key for creating the Boto3 resource/client. As explained in this post, if you wish to copy between different buckets, you need a single set of credentials that has:
GetObject permission on the source bucket
PutObject permission on the destination bucket
Again, an S3Hook can only hold a single set of credentials. You could perhaps use the credentials given by your client and grant them PutObject permission on a bucket in your account, but that assumes you are allowed to do so in your enterprise (not very wise in terms of security), and your S3Hook would still reference only one endpoint.
To sum up: I have been dealing with the same problem and ended up creating two S3 connections, using the first one to download from the original bucket and the second to upload to my enterprise bucket, as sketched below.
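A minimal sketch of that two-connection approach; the connection ids, bucket names and keys are placeholders:

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# Hook built from the external company's credentials (placeholder conn id)
ext_s3 = S3Hook(aws_conn_id="ext_company_s3")
# Hook built from your own account's credentials (placeholder conn id)
s3 = S3Hook(aws_conn_id="my_company_s3")

# Download the object with the external credentials...
local_path = ext_s3.download_file(key="path/to/object.csv", bucket_name="their-bucket")

# ...then upload it to your own bucket with your own credentials.
s3.load_file(filename=local_path,
             key="dest/prefix/object.csv",
             bucket_name="your-bucket",
             replace=True)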

Azure access token for all files in sub directory

I have a secure Azure Blob set up as follows:
ContainerName > SubDirectory/FileName
E.g., /Photos/0000001/pic.png
Some of these sub directories contain thousands of files that all need to be rendered to a web page. Since the Blob is secured, I'm currently getting an access token for each individual file using GetSharedAccessSignature(...).
Is there a way I could instead get a single token that would grant access to all files within the sub directory ("/0000001/"), or is what I'm currently doing considered best practice?
You can only get a Shared Access Signature for a blob container or for a single blob; you are NOT able to get a Shared Access Signature for a blob virtual directory, since a directory isn't a real concept in Azure Blob Storage.
Not sure how I missed this, but here's how to get an access token for a directory:
var blob = storageService.Context.Container.ListBlobs().FirstOrDefault();
var policy =
    new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10)
    };
blob.Container.GetSharedAccessSignature(policy);
Newer versions support directory-level access if the hierarchical namespace is enabled, i.e. Data Lake Storage Gen2. See MS Docs: Service SAS support for directory scoped access.
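If the account does have a hierarchical namespace, a directory-scoped SAS can be generated with the Data Lake SDK; below is a rough sketch with the Python azure-storage-file-datalake package (the account name, file system, directory and key are placeholders):

from datetime import datetime, timedelta
from azure.storage.filedatalake import generate_directory_sas, DirectorySasPermissions

# All names below are placeholders for the real account/container/directory.
sas_token = generate_directory_sas(
    account_name="mystorageaccount",
    file_system_name="photos",
    directory_name="0000001",
    credential="<account-key>",
    permission=DirectorySasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(minutes=10),
)
# Append sas_token to each blob URL under /photos/0000001/ to grant read access.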

Can we insert data into the Firebase Realtime Database?

One child node of my Firebase Realtime Database has become huge (around 20 GB), and I need to purge it and insert the extracted data of the last month from the backup into the Firebase Realtime Database using the Python Admin SDK.
In the documentation, I see the following options:
set - Write or replace data to a defined path, like messages/users/
update - Update some of the keys for a defined path without replacing all of the data
push - Add to a list of data in the database. Every time you push a new node onto a list, your database generates a unique key, like messages/users//
transaction - Use transactions when working with complex data that could be corrupted by concurrent updates
However, I want to add/insert the data from the Firebase backup. I have to insert because the app is used in production and I cannot afford to overwrite data.
Is there any method available to insert/add the data and not overwrite the data?
Any help/support is greatly appreciated.
There is no way to do this in Firebase Realtime Database without reading the current value of each location.
The only operation that allows you to update data based on its existing value is a transaction. A Firebase transaction gives you the (likely) current value at a location, and you then return what the new value should become.
But if the data you're restoring is (largely) the same as the data you have in the database, you might be able to use an update() call with sufficiently deep paths.
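For example, a multi-path update() with the Python Admin SDK only writes the exact children named in the map and leaves sibling data untouched. The paths, node names and backup helper below are placeholders, not details from the question:

import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-default-rtdb.firebaseio.com"  # placeholder
})

# Assume last month's records were extracted from the backup into a dict
# keyed by their original push ids, e.g. {"-Nabc...": {...}, "-Nxyz...": {...}}.
restored_records = load_backup_slice()  # placeholder helper

# Build a multi-path update: each key is a deep path relative to the root,
# so only those exact children are written; existing siblings are preserved.
updates = {"big_node/" + push_id: value for push_id, value in restored_records.items()}
db.reference("/").update(updates)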

How to link a Firebase Storage file to a Firebase Database entry?

I have users uploading files, and a Cloud Function responding by adding the uploaded file to the database, and planned on using the following path:
/files/{user-id}/{filename}
The reasoning is that if a file gets deleted, the Cloud Function can immediately derive the corresponding database reference from the Storage path.
However, certain characters that are allowed in filenames are not allowed in database paths (most notably a dot). How should this be set up, so that for a removed Storage file I can immediately get the correct database path?
You could push() the path under /files/{uid} to create the entry, then orderByValue().equalTo(x) to find the entry later for deletion. This way, you won't have to worry about the contents of the file name.
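A rough sketch of that idea, written here with the Python Admin SDK for illustration (the question's Cloud Function may use a different runtime); the function names and paths are placeholders:

from firebase_admin import db  # assumes firebase_admin is already initialized

def record_upload(uid, storage_path):
    # Store the Storage path as a *value* under /files/{uid}; the push key is the
    # entry's name, so dots in the file name never appear in a database path.
    return db.reference("files/" + uid).push(storage_path).key

def remove_entry_for_deleted_file(uid, storage_path):
    # Find the entry whose value equals the deleted file's path and remove it.
    ref = db.reference("files/" + uid)
    matches = ref.order_by_value().equal_to(storage_path).get()  # {push_key: path}
    for push_key in matches:
        ref.child(push_key).delete()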

how to stream a mp3 audio from firebase storage

I'm developing an app that will stream mp3 files stored in Firebase Storage. I have nearly 100 songs, like song1, song2, song3... If a song is selected, I have to stream that particular song without downloading it. In that case I need to write plenty of code, because for each song I have to mention its Firebase Storage URL. The URL looks like:
https://firebasestorage.googleapis.com/......song1.mp3?alt=media&token=8a8a0593-e8bb-40c7-87e0-814d9c8342f3
For each song the alt=media&token= part of the URL varies, so I would have to list a unique URL for every song. What I need is a simpler way to play a song by mentioning only its name in Firebase Storage.
Please suggest a way to stream an audio file from Firebase Storage using its name alone.
You have two choices if you want to get files out of a Firebase Storage bucket.
Use the full download url that you can get from a Storage Reference that points to the file path in your bucket.
Use a download task (Android or iOS) to fetch the data from the file.
You can't get the file data any other way from within a mobile app. Firebase Storage doesn't support special media streaming protocols, such as RTSP.
I did it using downloadUrl.
val storage = FirebaseStorage.getInstance()
storage.reference.child("songs/song1.mp3").downloadUrl.addOnSuccessListener({
    val mediaPlayer = MediaPlayer()
    mediaPlayer.setDataSource(it.toString())
    mediaPlayer.setOnPreparedListener { player ->
        player.start()
    }
    mediaPlayer.prepareAsync()
})
StorageReference filepath = storage.child("Audio").child(timeStamp + ".mp3");
Uri uri = Uri.fromFile(new File(fileName));
filepath.putFile(uri).addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
    @Override
    public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) {
        String audio_url = taskSnapshot.getDownloadUrl().toString();
        // Use audio_url to stream the audio file with a media player
    }
});
