The key is shown in the figure. Why do we need these keys? Can we create a database without using these keys?
A key like the -N4D... in your screenshot is generated when you call push(). Such keys are not required, as you can also call set() to write data without generating a key. But these so-called push keys are a convenient way to automatically generate keys that sort chronologically, which is handy for lists of data.
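For example, a minimal sketch with the JavaScript web SDK (the messages path is just an illustration):

// push() generates a chronologically sortable key and returns a reference to it
var newRef = firebase.database().ref("messages").push({ text: "hello" });
console.log(newRef.key); // something like "-N4D..."

// set() writes under a key you choose yourself, with no push key involved
firebase.database().ref("messages/my-own-key").set({ text: "hello" });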
Also see these classic Firebase blog posts:
The 2^120 Ways to Ensure Unique Identifiers
Best Practices: Arrays in Firebase.
Like the title suggests, I have a use case where I will write data to both Firestore and the Realtime Database. I am using the Realtime Database for operations that require live feedback to users, and Firestore to store data that will not really change but can be queried for more complex operations later on.
Since I need both databases, I would like to use the same ID when creating data in both, to make it easy to retrieve in the future. The issue I have is determining which generated ID will work well for the other service.
My thought process is that since a Realtime Database push ID is based on a timestamp, it could create hot partitions in Firestore, so indexing performance could suffer as data grows if I used the same ID there. But if I use Firestore's generated ID in the Realtime Database, I will not have the data in the sorted order that pushed data gets in the Realtime Database.
I was wondering what solutions people have used to tackle this use case and what options are available to me. Thanks!
If you need to order data, then simply store timestamps as fields instead of depending on the time-based sort order of Realtime Database push IDs. You can do this easily in both databases. Firestore makes obsolete the idea that unique IDs have any meaning other than simply being unique.
If you make sure your unique IDs are truly random, like Firestore's, then you won't have any problems with indexing or writing documents.
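A minimal sketch of that approach, assuming the Node.js Admin SDK and a hypothetical posts collection: store a server timestamp on each document and order by it when querying, so the document ID itself stays random.

var admin = require("firebase-admin");
admin.initializeApp();
var db = admin.firestore();

// Document IDs stay random; chronological order comes from the createdAt field
db.collection("posts").add({
  title: "hello",
  createdAt: admin.firestore.FieldValue.serverTimestamp()
});

// Later, read the posts in chronological order
db.collection("posts").orderBy("createdAt").get().then(function(snapshot) {
  snapshot.forEach(function(doc) {
    console.log(doc.id, doc.data());
  });
});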
Specifically, should/can one think of 'Collections' as table names and 'Documents' as unique keys?
Should you use an auto-generated key, the Auth UID, or the user's email as document names?
What are the pros and cons of each, if any?
-Thanks
Yes, collections very closely resemble table names, as they represent entities from an object-oriented perspective. Documents are unique since each must have a unique ID; these IDs are the unique keys that identify each instance of an entity. No two documents in the same collection can share an ID.
Auth UIDs seem to be the best choice for user document IDs, as they let you sync between Firebase Auth and Firestore/the Realtime Database right out of the box. This is what I usually prefer. I would use auto-generated IDs for other objects that are not integrated with any other Firebase service. Having a consistent user ID for both Firebase Auth and Firestore makes things quite easy, since I only need to know one ID to access both services from the client end.
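As a small sketch of that idea, assuming the web SDK and a hypothetical users collection, you can key the profile document by the Auth UID:

firebase.auth().onAuthStateChanged(function(user) {
  if (user) {
    // Key the profile document by the Auth UID so one ID works for both services
    firebase.firestore().collection("users").doc(user.uid).set({
      displayName: user.displayName,
      email: user.email
    }, { merge: true });
  }
});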
I am inserting data in the Firebase Realtime Database in a table with the above structure. The key of the data is auto-generated by push. After several such entries are created, I may sometimes, due to certain conditions, need to delete one of the entries. At the point of deleting the entry, I may know some of the values of the node that I want to delete, like createdAt and createdForPostID. But I will not know the key, as it was auto-generated using the push feature of the Firebase database. The combination of createdAt and createdForPostID is unique, and only one such entry should exist in the database.
What would be the most efficient way to identify the entry without having to retrieve the entire node at OUTBOUND?
The reason I am using push is because Firebase claims it to be efficient and not subject to write conflicts. I also rely on the auto-sorting by date/time offered by push.
If no efficient way can be found, then I will generate my own key using date/time stamp. But I am hoping that this is a problem that someone has solved before and hence can guide me.
Any suggestions are welcome.
You'll need to run a query to find the items that match your conditions.
Since you seem to have multiple properties in your conditions, and the Firebase Database can only query on a single property, you'll need to combine the values into a single property as shown here.
Then you can run a query on that combined property and delete the items it returns:
// Query on the combined property for the specific postID + createdAt combination
var query = ref.orderByChild("createForPostID-createdAt").equalTo("20171229_124904-20171230_200343");

query.once("value", function(snapshot) {
  snapshot.forEach(function(child) {
    // Remove each matching child node
    child.ref.remove();
  });
});
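For that query to return anything, the combined property has to be written when the item is created. A rough sketch, using the same ref as above (the property name mirrors the query, and the values are only illustrative):

// Write the combined property at creation time so the query above can match it
var newItem = ref.push();
newItem.set({
  createdForPostID: "20171229_124904",
  createdAt: "20171230_200343",
  "createForPostID-createdAt": "20171229_124904-20171230_200343"
});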
Given Frank's answer, I realised I needed to create a unique property as per his suggestion, because I will need it for the future query. But then it seemed that I may be better off using that unique property as the key instead of using push.
So it seems that, from an overall perspective, it might be more efficient to create your own key instead of using push, if the app needs both create and delete functionality. Relying on push makes sense only if data is being created and deletion is not a big part of your app's functionality.
So, in conclusion, for Firebase data, the most efficient way to do both data creation and deletion is to create a unique key on your own.
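A minimal sketch of that approach (the OUTBOUND node follows the question; the values are illustrative), building the key from the same values you would know at delete time:

// ref points at the OUTBOUND node from the question
var ref = firebase.database().ref("OUTBOUND");
var createdForPostID = "20171229_124904";
var createdAt = "20171230_200343";
var key = createdForPostID + "-" + createdAt;

// Create: write directly under the composite key instead of calling push()
ref.child(key).set({ createdForPostID: createdForPostID, createdAt: createdAt });

// Delete: no query needed, because the key can be rebuilt from known values
ref.child(key).remove();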
I thought Datastore's key was ordered by insertion date, but apparently I was wrong. I need to periodically look for new entities in the Datastore, fetch them and process them.
Until now, I would simply store the last fetched key and wrongly query for anything greater than it.
Is there a way of doing so?
Thanks in advance.
Datastore's automatically generated keys are distributed roughly uniformly, to spread load and avoid hotspots. You will not be able to tell which entities were added last from their keys.
Instead, you can try a couple of different approaches.
Use Pub/Sub and architect your app so that a separate background task consumes the newly added entities. When an entity is added to the DB, publish an event to Pub/Sub with its key ID. Your event listener (a separate routine) will receive it.
Use key names and generate your own custom names. But since you want sequentially increasing names, this will cause a performance hit even on fairly modest ranges of data. You can find more about this in the Google Datastore best practices:
https://cloud.google.com/datastore/docs/best-practices#keys
You can add an additional creation-time property and still use automatic key generation.
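A short sketch of that last option, assuming the Node.js client library and a hypothetical Task kind: record a created timestamp on each entity, remember the newest one you have processed, and query for anything newer.

var { Datastore } = require("@google-cloud/datastore");
var datastore = new Datastore();

// On insert: keep the automatically generated key, but also record a creation time
function insertTask(data) {
  return datastore.save({
    key: datastore.key("Task"),
    data: Object.assign({}, data, { created: new Date() })
  });
}

// Periodically: fetch only entities created after the last one processed
function fetchNewTasks(lastProcessed) {
  var query = datastore
    .createQuery("Task")
    .filter("created", ">", lastProcessed)
    .order("created");
  return datastore.runQuery(query);
}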
I'm trying Amazon's tutorial on dynamoDB: http://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/GettingStarted.DDBLocal.html
As I'm working through it, I can't figure out how to do simple things like:
print the names of the tables I've created or figure out what the primary keys are in a particular table, t.
I'm assuming that there is probably some really easy way to do this, I just haven't seen it.
DynamoDBLocal is essentially a DynamoDB instance running on your own computer with its own endpoint. The way to interact with it is the same way you would with the actual DynamoDB service.
The easiest way to do that is choose an API and make requests with the local endpoint. See here for some basic examples of how to set the endpoint.
In your case, it sounds like you want to use a few different API operations, whose syntax will differ depending on which language/SDK you use:
ListTables - self-explanatory
Scan - "The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index".
DescribeTable - "Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table."
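As a rough sketch with the AWS SDK for JavaScript (v2) pointed at the local endpoint (the Movies table name is the one the tutorial creates; adjust if yours differs):

var AWS = require("aws-sdk");

// Point the SDK at the local endpoint instead of the real DynamoDB service
var dynamodb = new AWS.DynamoDB({
  region: "us-west-2",
  endpoint: "http://localhost:8000"
});

// ListTables: print the names of the tables you've created
dynamodb.listTables({}, function(err, data) {
  if (err) console.error(err);
  else console.log(data.TableNames);
});

// DescribeTable: inspect a table, including its primary key schema
dynamodb.describeTable({ TableName: "Movies" }, function(err, data) {
  if (err) console.error(err);
  else console.log(data.Table.KeySchema);
});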
I have a fairly full example of a few operations using the Java SDK in this answer if you want some reference.