Firebase push() and the main thread

I am confused about the functionality of push() in Firebase, even after reading the docs. They present the following code:
// Generate a reference to a new location and add some data using push()
var newPostRef = postsRef.push();
// Get the unique ID generated by push()
var postID = newPostRef.key();
Does push() conduct a server query to get a unique ID (thus blocking the main thread, which seems unwise), or does it simply create a "dirty" unique ID that is later checked for uniqueness against the master ledger on the server? The docs seem unclear about the robustness of the ID, so I want to make sure.

Firebase's push() method is purely client-side. It generates a key based on the current time (corrected for the last known offset of the local clock from the server's clock) and a large amount of random data. The key it generates is statistically guaranteed to be unique.
To learn more about these keys, see this blog post: The 2^120 Ways to Ensure Unique Identifiers.
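As an illustration, here is a stripped-down sketch of that idea. This is not the SDK's actual code (the real implementation also guarantees that keys generated in the same millisecond still sort in insertion order), but it shows why no server round trip is needed:
// ~120 bits per key: 48 bits of timestamp + 72 bits of randomness.
var PUSH_CHARS = '-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz';

function generatePushId() {
  var now = Date.now();
  var id = '';
  // The first 8 characters encode the millisecond timestamp,
  // so keys sort chronologically.
  for (var i = 0; i < 8; i++) {
    id = PUSH_CHARS.charAt(now % 64) + id;
    now = Math.floor(now / 64);
  }
  // The remaining 12 characters are random, making collisions
  // vanishingly unlikely even across many clients.
  for (var j = 0; j < 12; j++) {
    id += PUSH_CHARS.charAt(Math.floor(Math.random() * 64));
  }
  return id; // e.g. "-KOPWA3zVSMblLKDl2rB"
}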

Related

How does String id = db.collection("myCollection").document().getId() give a document ID without hitting the Firestore database?

I read somewhere that db.collection("mycollection").document().getId(); gives a document ID in mycollection without hitting the Cloud Firestore database. But how is it possible to create a unique ID without knowing the IDs of the documents that already exist, or hitting Cloud Firestore?
The auto-ID that is generated when you call document() is a fairly basic UUID (universally unique identifier). Such identifiers are statistically guaranteed to be unique. In my words: there is so much random information in there that the chances of two calls generating the same value are infinitesimally small.
So Firestore doesn't actually call the server to check whether the ID it generates is unique. It instead relies on the mathematical properties of picking a single value out of a sufficiently large and random set to be very certain it is unique.
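The same behaviour is easy to observe from, for example, the JavaScript SDK. A quick sketch (assuming db is an already-initialized Firestore instance):
// .doc() with no arguments mints an auto-ID locally and synchronously:
// no network round trip, and nothing exists on the server yet.
var newDocRef = db.collection('myCollection').doc();
console.log(newDocRef.id); // a 20-character auto-ID, available immediately
// Only a later write actually creates the document:
newDocRef.set({ createdAt: Date.now() });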

How do I stop Firebase from creating an additional nested object or how can I access the newly generated string?

Problem: Whenever I add an order to the orders array, an additional nested array element (-KOPWA...) gets added. I wouldn't mind, except I don't know how to access that nested string to reach its child nodes.
Example of database node for users below:
firebase.database().ref('users/' + userIdState + '/orders/' + <<unique numbervariable>>).push({
  "order": { "test": "product", "quantity": 2 }
});
I'm using the above code to push new JSON objects with a unique number to the Firebase array. Still, the nested array with the weird strings gets generated.
Can anyone help me understand how to either create my own nested array with my own unique string, or access the nested string that Firebase generates so I can reach its child nodes?
Multiple instances of nested arrays will be generated by users.
Any help is much appreciated.
Thanks,
Moe
You're experiencing this behaviour because Firebase's push is not the same as an array push. I recommend reading this article to understand how it works.
As for a solution, you can simply change push to set in your code. This will create the structure you were (presumably) expecting, that is:
1:
  order:
    ...
This is, however, potentially unsafe if you allow concurrent writes (i.e. if the "unique number" in your example is not always unique).
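For example (a sketch in which the hypothetical orderNumber stands in for the <<unique numbervariable>> placeholder above, and is assumed to be truly unique per order):
// set() writes exactly at the path you name, with no extra generated child;
// a second writer using the same orderNumber would silently overwrite this one.
firebase.database()
  .ref('users/' + userIdState + '/orders/' + orderNumber)
  .set({ "order": { "test": "product", "quantity": 2 } });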
AFAIK Firebase recommends using push to safely create collections/"arrays". You can retrieve the generated key by reading the key property on the reference returned by push, like this:
var ref = firebase.database().ref('users/' + userIdState + '/orders/' + <<unique numbervariable>>).push({
  "order": { "test": "product", "quantity": 2 }
});
var generatedKey = ref.key; // the value you're looking for
If you decide to use it, you can probably just drop the order number you have right now and just use the generated one.

How to entirely skip validation in simple schema and allow incomplete documents to be stored?

I'm creating an order form and have a schema defined for an Order (certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', sessionObject);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({ sessionId: sessionId });
So I'm not sure how to go about doing this while accomplishing these points.
I want full simpleschema validation on the Orders collection when the user is entering in items on the forms and when the user is intending to submit a full, complete order.
I want to disable simpleschema validation on the Orders collection and still allow storing into the DB so that partial orders can be stored for resumption at a later time.
I can make a field conditionally required using this here, but that would mean 50+ fields would be conditionally required just for this scenario, which seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be to keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly for enabling resume on another device for the logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you are wanting.
Only write to the Orders collection when the order is complete (and passes validation).
Here's a variation on #JeremyK's answer: add an inProgress key to your order of type [Object]. This object would have no deeper validation. Keep your in-progress order data in there until the order is final, then copy/move all the relevant data into the permanent keys and remove the inProgress key. This would require that you make all the real keys optional, of course. The advantage is that the document would maintain its primary key throughout the life cycle.
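A minimal sketch of that idea, using SimpleSchema's blackbox option to skip deep validation of the draft object (orderId and draftAddress are illustrative names):
// Real keys are optional until the order is final; the draft lives in a
// blackbox object that SimpleSchema does not validate deeply.
const orderSchema = new SimpleSchema({
  sessionId: String,
  address: { type: String, optional: true },
  inProgress: { type: Object, blackbox: true, optional: true },
});

// While the user fills in the form, stash everything under inProgress:
Orders.update(orderId, { $set: { 'inProgress.address': draftAddress } });

// On final submit, promote the draft into the validated keys and drop it:
const draft = Orders.findOne(orderId).inProgress;
Orders.update(orderId, {
  $set: { address: draft.address },
  $unset: { inProgress: '' },
});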
I think this particular case has been solved in the meantime, but just in case: you can skip SimpleSchema validation by accessing the MongoDB native API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with Collection2. Collection2 has the ability to attach multiple schemas to the same collection, each with a selector, and then validate against the correct schema based on that selector.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
E.g. you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema where the fields are optional.
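A sketch of that setup, following the linked Collection2 docs (the schema fields here are illustrative):
// Relaxed schema for drafts: only the selector field and session id required.
Orders.attachSchema(
  new SimpleSchema({
    state: String,
    sessionId: String,
    address: { type: String, optional: true },
  }),
  { selector: { state: 'in-progress' } }
);

// Full schema once the order is final: everything required.
Orders.attachSchema(
  new SimpleSchema({
    state: String,
    sessionId: String,
    address: String,
  }),
  { selector: { state: 'finished' } }
);

// Writes are validated against the schema whose selector they match:
Orders.insert({ state: 'in-progress', sessionId: sessionId });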

do document IDs in Meteor need to be random or just unique?

I'm migrating data from a Rails system, and it would be really convenient to assign the migrated objects IDs like post0000000000001, etc.
I've read here
Creating Meteor-friendly id's in Mongo?
that Meteor creates random 17 character strings from
23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz
which looks to be chosen to avoid possibly ambiguous characters (it omits 1 and I, etc.).
Do the IDs need to be random for some reason? Are there security implications to being able to guess a Meteor document's ID?! Or is it just an easy way of generating unique IDs?
Mongo seems fine with sequential ids:
http://docs.mongodb.org/manual/core/document/#the-id-field
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
so I would guess this would have to be a Meteor constraint, if it exists.
The IDs just need to be unique.
Typically there is an element of order, such as using integers, timestamps, or something else sequential.
This can't work in Meteor, since inserts can come from the client: clients may be disconnected for a period, and their clocks may be off or have varying latency. It is also not possible to know the previous _id (in the case of a sequential _id) at the time an _id is written, owing to latency compensation (instant inserts).
The consequence of this lack of order in the DDP protocol is the decision to use entirely random ids. That is not to say you can't use your own _ids.
While there is a risk of a collision with this strategy, it is minimal: on the order of [number of docs in your collection] / 55^17 * 100%, i.e. nearly impossible. In the event a collision does occur, the client will temporarily insert the document and cancel the insert once the server reports a Mongo duplicate key error.
Also, on the security point in the other answer: it is not too much of an issue if the _id of a user is known. It is not possible to log in without a valid hashed login token, or to retrieve any information with the _id alone. That applies to the users collection, of course. For your own collections, an easily guessable URL containing an id as a reference, without publish-method checks on eligibility to read the data, is a risk that the high-entropy random ids generated by Meteor can mitigate.
As long as they are unique it should be ok to use your own ids.
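For reference, Meteor's ids come from its random package, and supplying your own _id on insert is supported. A quick sketch (Posts is an illustrative collection):
import { Random } from 'meteor/random';

Random.id(); // e.g. "ZTcwShmkyDfKLotEs": 17 chars from the unambiguous alphabet above

// A caller-supplied _id is fine as long as it is unique:
Posts.insert({ _id: 'post0000000000001', title: 'migrated from Rails' });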
I am not an expert, but I suppose Mongo needs a unique ID so that when it updates the document, it in fact creates a new version of the document with that same ID.
The real question, which I too wish to know the answer to, is whether we can change the ID without breaking Mongo's mechanisms and reliability, or whether we need to create a secondary attribute. (It could make for a smaller index too, I suppose.)
But I can also imagine that, security-wise, it is better if document IDs are difficult to guess, especially user IDs! Otherwise, could it be easy or possible to impersonate a user knowing only the ID? Anybody, correct me if I am wrong.
I don't think it's possible or desirable to change the ID generated by Mongo.
But you can easily create an auto-incrementing ID with http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
function getNextSequence(name) {
  var ret = db.counters.findAndModify({
    query: { _id: name },
    update: { $inc: { seq: 1 } },
    new: true
  });
  return ret.seq;
}
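Usage follows the pattern in that tutorial: seed a counter document once, then take the next value on every insert (collection names are illustrative):
// One-time seed for this counter:
db.counters.insert({ _id: "orderid", seq: 0 });

// Each insert takes the next value atomically ($inc inside findAndModify):
db.orders.insert({ _id: getNextSequence("orderid"), item: "widget" }); // _id: 1
db.orders.insert({ _id: getNextSequence("orderid"), item: "gadget" }); // _id: 2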
I have created a package that does just that, and it is configurable.
https://atmospherejs.com/stivaugoin/fluid-refno
var refNo = generateRefNo({
  name: 'invoices', // default: 'counter'
  prefix: 'I-',     // default: ''
  size: 5,          // default: 5
  filling: '0'      // default: '0'
});
console.log(refNo); // output: "I-00001"
You can now use refNo to add to your document on insert.
Maybe it will help you.

how to generate unique id per user?

I have a webpage Default.aspx which generates the ID for each new user; the ID is then submitted to the database on a button click on Default.aspx.
If another user visits at the same time, the ID will be the same... until they press the button on Default.aspx.
How do I get rid of this issue, so that each user is allotted a unique ID?
I am using read/write code to generate the unique ID.
You could use a Guid as the ID. To generate a unique ID:
Guid id = Guid.NewGuid();
Another possibility is to use an auto-incremented primary key column in the database, so that it is the database that generates the unique identifiers.
Three options:
Use a GUID: Guid.NewGuid() will generate unique GUIDs. GUIDs are, of course, much longer than an integer.
Use interlocked operations to increment a shared counter: Interlocked.Increment is thread-safe. This will only work if all the requests happen in the same AppDomain: process recycling or a refresh of the code will create a new AppDomain and restart the count.
Use an IDENTITY column in the database. The database is designed to handle this; within the request that inserts the new row, use SCOPE_IDENTITY to select the value of the identity to update in-memory data (ORMs should handle this for you). (This is SQL Server; other databases have equivalent functionality.)
Of these, #3 is almost certainly best.
You could generate a Guid:
Guid.NewGuid()
Or you could let the database generate it for you upon insert. One way to do this is via a sequence. See the Wikipedia article on surrogate keys.
From the article:
A surrogate key in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data.
The Sequence/auto-incremented column option is going to be simpler, and easier to remember when manually querying your DB (during debugging), but the DBA at my work says he's gotten 20% increases in performance by switching to Guids. He was using Oracle, and his database was huge, though :)
I use a static utility method to generate IDs: basically take the full datetime (including seconds), append a random string of say 3 or 4 characters, and return the whole thing; you can then save it to the database.
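That idea, sketched in JavaScript for concreteness (this answer's context is .NET, but the technique is language-agnostic; note it trades a GUID's collision guarantees for shorter, roughly ordered IDs):
// Timestamp down to the second, plus a short random suffix.
function generateId() {
  var stamp = new Date().toISOString().slice(0, 19).replace(/\D/g, ''); // yyyyMMddHHmmss
  var suffix = Math.random().toString(36).slice(2, 6);                  // ~4 random chars
  return stamp + suffix; // e.g. "20240115123045k3x9"
}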
