How to order by firebase storage list - firebase

I cannot find any function in Firebase Storage to list all files in a specific order, like ascending or descending. I have tried ListOptions, but it supports only two arguments: maxResults and pageToken.

The Cloud Storage List API does not have the ability to sort by criteria you choose. If you need the ability to query objects in a bucket, you should consider also storing information about your objects in a database that can be queried with the flexibility you require. You will need to keep that database up to date as the contents of your bucket change (perhaps using Cloud Functions triggers). This is a common thing to implement, since Cloud Storage is optimized only for storing huge amounts of data for fast retrieval at extremely low cost - it is not also trying to be a database for object metadata.
Please also see:
gsutil / gcloud storage file listing sorted date descending?
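
For illustration, here is a minimal sketch of the mirroring idea, assuming a first-generation Cloud Functions storage trigger and a hypothetical Firestore collection named "files"; the names are placeholders, not part of the original answer.

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Whenever an object is uploaded, copy its metadata into Firestore so it
// can later be queried and sorted there (e.g. orderBy("updated", "desc")).
export const mirrorObjectMetadata = functions.storage
  .object()
  .onFinalize(async (object) => {
    const path = object.name; // full path of the object in the bucket
    if (!path) return;
    await admin
      .firestore()
      .collection("files")              // hypothetical metadata collection
      .doc(encodeURIComponent(path))    // encode "/" so the path can be a doc ID
      .set({
        path,
        size: Number(object.size),
        contentType: object.contentType ?? null,
        updated: object.updated ?? null,
      });
  });

A matching onDelete trigger would remove the document again when the object is deleted, keeping the metadata in sync with the bucket.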

Related

How to create Firebase Blob from Dart List

I have a very large List defined in Dart - 100,000 integers. I want to now create a Firebase document that will contain the List as a Blob. I do not want any of the list entries to be indexed by Firebase or for Firebase to do any analysis of the list. As far as I know I will need to define this as an array on my Firebase console. Will this lead to analysis of the list by Firebase? How do I create the document in Dart to ensure that the blob is not analyzed?
Thank you.
Firestore automatically creates an index for every field in your documents. You can exempt a field from being auto-indexed, either in the Indexes panel in the Firebase console or in your index configuration file.
For example, on one of my projects I have exemptions for the very large hash and count map fields in my geoindexes collection.
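
As a rough sketch (not the author's actual configuration), a single-field index exemption in the Firebase CLI's firestore.indexes.json could look like this, assuming the geoindexes collection with hash and count fields mentioned above:

{
  "indexes": [],
  "fieldOverrides": [
    {
      "collectionGroup": "geoindexes",
      "fieldPath": "hash",
      "indexes": []
    },
    {
      "collectionGroup": "geoindexes",
      "fieldPath": "count",
      "indexes": []
    }
  ]
}

An empty "indexes" array in a field override disables single-field indexing for that field.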

cosmos database to be indexed in pull approach + files

I have items and files. There is a 1:m relationship between items and files. Items are stored in a relational database and files in folders. The association between items and files is stored in the relational database. Files can be pdfs, word docs, email etc. I intend to POC cognitive search to be able to search items and associated documents.
My current understanding is, that a pull approach might be cheaper in comparison to the push approach when using cognitive search (the latency requirements are not stringent and eventual consistency is OK). Hence, I intend to move the data into a cosmos database, which can then be indexed via the pull approach. Curious, how does this work with the documents? Would I need to crack them on prem?
There is also the option of attachments and blob storage of documents. The latter is most likely more future-proof. I would think that if I put documents into blob storage, cognitive search indexing would still need to crack the documents and apply skills?
This sounds like a good approach. In terms of data sources, Cognitive Search supports Cosmos DB, blob storage, and some relational databases. I would probably:
Create a new Cognitive Search resource in the Azure portal.
In that Cognitive Search resource, click "Import data" to create a new indexer (this is the "pull" option that you mention above). You may want to do this twice, assuming that your items are in CosmosDB or a relational DB, and your documents are stored separately in blob storage.
The first indexer has a data source which points to your items/relationship data in whatever DB you decide to put them, applies any skills that you want, and puts everything in an index.
The second indexer has a different data source which points to your documents in blob storage, applies any skills that you want, and puts everything in the same index.
If you use indexers, they will take care of the document cracking. If you push data directly into the index, you will need to crack the documents yourself.
This gives a simple walkthrough of creating an indexer with the portal (skillset is optional, and change the data source to your own data): https://learn.microsoft.com/en-us/azure/search/cognitive-search-quickstart-blob
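As a rough sketch of the blob indexer using the @azure/search-documents SDK (the resource names, keys, and connection strings below are placeholders), you could create the blob data source and an indexer that feeds the shared index like this:

import {
  SearchIndexerClient,
  AzureKeyCredential,
} from "@azure/search-documents";

const client = new SearchIndexerClient(
  "https://<your-search-service>.search.windows.net", // placeholder endpoint
  new AzureKeyCredential("<admin-api-key>")           // placeholder admin key
);

async function createBlobIndexer() {
  // Data source pointing at the blob container that holds the documents.
  await client.createDataSourceConnection({
    name: "docs-blob-datasource",
    type: "azureblob",
    connectionString: "<blob-storage-connection-string>",
    container: { name: "documents" },
  });

  // Indexer that cracks the documents and writes them into the shared index.
  await client.createIndexer({
    name: "docs-blob-indexer",
    dataSourceName: "docs-blob-datasource",
    targetIndexName: "items-index", // same index the first indexer targets
    // skillsetName: "my-skillset", // optional: attach cognitive skills
  });
}

createBlobIndexer().catch(console.error);

The first indexer (over Cosmos DB or the relational source) would be set up the same way with its own data source but the same targetIndexName.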

Resolve FK in firestore

I have some documents in Firestore with some fields in them. For example, the collection "details" looks like this:
{
id: "",
fields1: "",
userFK: Reference to users collection
}
Now I need to resolve userFK on the fly, meaning I don't want to first fetch all the documents and then query each userFK with userFK.get().
Is there any method for this? It's like doing a $lookup, which is supported in MongoDB.
In some cases I even want to fetch documents from the "details" collection based on specific fields in users.
There is no way to get documents of multiple types from Firestore with a single read operation. To get the user document referenced by userFK you will have to perform a separate read operation.
This is normal when using NoSQL databases like Cloud Firestore, as they typically don't support any server-side equivalent of a SQL JOIN statement. The performance of loading these additional details is not as bad as you may think though, so be sure to measure how long it takes for your use-case before writing it off as not feasible.
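
For example, with the modular web SDK the follow-up read could look like this (a minimal sketch; the "details" collection and userFK field come from the question, the rest is illustrative):

import {
  getFirestore,
  doc,
  getDoc,
  DocumentReference,
} from "firebase/firestore";

const db = getFirestore();

async function loadDetailWithUser(detailId: string) {
  // First read: the details document itself.
  const detailSnap = await getDoc(doc(db, "details", detailId));

  // Second read: follow the stored reference to the user document.
  const userRef = detailSnap.get("userFK") as DocumentReference;
  const userSnap = await getDoc(userRef);

  return { detail: detailSnap.data(), user: userSnap.data() };
}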
If this additional load is prohibitive for a scenario, an alternative is to duplicate the necessary data of the user into each details document. So instead of only storing the reference to their document, you'd for example also store the user name.
This puts more work on the write operation, but makes the read operations simpler and more scalable. This is the common trade-off of space vs time, where in NoSQL databases you'll often find yourself trading space for time: storing duplicate data to make reads cheaper.
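
A sketch of the duplication approach at write time (the userName field and the user's name field are assumptions for illustration):

import { getFirestore, doc, getDoc, setDoc } from "firebase/firestore";

const db = getFirestore();

async function createDetail(detailId: string, uid: string, fields1: string) {
  const userSnap = await getDoc(doc(db, "users", uid));

  await setDoc(doc(db, "details", detailId), {
    fields1,
    userFK: userSnap.ref,                   // keep the reference as before
    userName: userSnap.get("name") ?? null, // duplicated so reads don't need a second fetch
  });
}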
If you're new to NoSQL data modeling, I highly recommend:
NoSQL data modeling
Getting to know Cloud Firestore

Using both Firebase Realtime Database and Firestore with same ID

Like the title suggests, I have a use case where I will write data to both firestore and realtime database. I am using the realtime database for operations that require live feedback to users and firestore to store data that will not really change but can be queried for more complex operations later on.
Due to my need of both databases, I would like to use the same UID when creating data in both databases to make it easy to retrieve in the future. The issue I have is determining which generated ID will satisfy the other service.
My thought process is that since a Realtime Database push ID is based on a timestamp, it could create hot partitions in Firestore, so indexing performance could suffer as data grows if I used the same ID there. But if I use Firestore's generated ID in the Realtime Database, I will not have the data in the sorted fashion that the Realtime Database gives pushed data.
I was wondering what solutions people used to tackle this use case and what options are available to me. Thanks!
If you need to order data, then simply store timestamps as fields instead of depending on the time-based sort order of Realtime Database push IDs. You can do this easily in both databases. Firestore makes obsolete the idea that unique IDs have any meaning other than simply being unique.
If you make sure your unique IDs are truly random like Firestore's, then you won't have any problems with indexing or writing documents.
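
For illustration, a minimal sketch with the modular web SDK: generate one Firestore-style random ID, use it in both databases, and store a server timestamp field for ordering (the collection and path names are placeholders):

import {
  getFirestore,
  collection,
  doc,
  setDoc,
  serverTimestamp as firestoreTimestamp,
} from "firebase/firestore";
import {
  getDatabase,
  ref,
  set,
  serverTimestamp as rtdbTimestamp,
} from "firebase/database";

const db = getFirestore();
const rtdb = getDatabase();

async function createItem(data: Record<string, unknown>) {
  // Random, uniformly distributed ID, so no hot-partition concerns in Firestore.
  const itemRef = doc(collection(db, "items"));

  await setDoc(itemRef, { ...data, createdAt: firestoreTimestamp() });
  await set(ref(rtdb, `items/${itemRef.id}`), {
    ...data,
    createdAt: rtdbTimestamp(),
  });

  return itemRef.id;
}

Ordering then comes from the createdAt field, e.g. orderBy("createdAt") in Firestore or orderByChild("createdAt") in the Realtime Database.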

Best way to convert this firebase real time database structure to firestore data structure?

I want to convert this Firebase Realtime Database structure to a Firestore data structure; please help.
I want a structure like Posts (collection) / pin (collection) / pid (document) / then the post description, but I know that a collection can't directly contain another collection, so how should I do this?
The All_Posts node contains only pid and pin, to share that post and then get the post details using the pin and pid.
One more thing: in my structure it is Posts --> 734... (pin) --> pid --> then the post details, because I want to retrieve all the pids and their details under a pin. So should I do it this way, or like Posts --> pids (which contain the pin number) --> then fetch the details? Which one should I do?
Cloud Firestore Data model
Cloud Firestore is a NoSQL, document-oriented database. Unlike a SQL database, there are no tables or rows. Instead, you store data in documents, which are organized into collections.
Each document contains a set of key-value pairs. Cloud Firestore is optimized for storing large collections of small documents.
All documents must be stored in collections. Documents can contain subcollections and nested objects, both of which can include primitive fields like strings or complex objects like lists.
Collections and documents are created implicitly in Cloud Firestore. Simply assign data to a document within a collection. If either the collection or document does not exist, Cloud Firestore creates it.
See the Cloud Firestore data model documentation for more information.
Your DB structure
In regards to your case scenario, you can have collections within other collections; these are called subcollections, as shown in the chat app example in the Firestore documentation.
You can access these subcollections with the same collection ID by using Collection Group Queries.
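
For example (a sketch, with the Posts collection, a pids subcollection, and the field names as assumptions based on the question), you could store each post under Posts/{pin}/pids/{pid} and still query across all pins with a collection group query:

import {
  getFirestore,
  doc,
  setDoc,
  collection,
  getDocs,
  collectionGroup,
  query,
  where,
} from "firebase/firestore";

const db = getFirestore();

async function writeAndQuery(pin: string, pid: string) {
  // The Posts collection, the pin document, and its pids subcollection
  // are all created implicitly on first write.
  await setDoc(doc(db, "Posts", pin, "pids", pid), {
    pin,
    description: "post description",
  });

  // Read all posts under one pin:
  const underPin = await getDocs(collection(db, "Posts", pin, "pids"));

  // Or query across every pids subcollection, regardless of pin
  // (the console may prompt you to create an index the first time):
  const acrossPins = await getDocs(
    query(collectionGroup(db, "pids"), where("pin", "==", pin))
  );

  return { underPin: underPin.size, acrossPins: acrossPins.size };
}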
Moving Data from Firebase Realtime Database to Cloud Firestore
To keep this answer brief, check this link if you are planning on moving data from Firebase Realtime Database to Cloud Firestore, for best practices and recommendations.
