Most of us have seen that when we make a purchase from Amazon-like sites we get an Order Number or Purchase Number (of 10-12 digits) which looks like some random number. Similarly, I want to generate unique ids for a large-scale system. What is the best algorithm to generate them?
Some methods which I thought of are inefficient or not applicable to a large-scale system:
1) Generating a string using a rand function (Array.new(12){rand(10)}.join) and checking the whole table for whether it already exists. This is time-consuming, inefficient and may get stuck in an infinite loop.
2) Using a timestamp - I think this cannot be used for a large-scale system because a large number of users can access the system at the same time.
3) A combination of 1) and 2) also creates issues, when 1) generates the same value within the same second.
4) Auto-increment: I don't want to use it.
Is there a reason why a normal UUID won't work? It will produce a longer id, but it's simple, it works, and most languages have generation code built in.
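Most languages make this a one-liner; a minimal C# sketch of the built-in generation this answer mentions:

using System;

// A version 4 (random) UUID; the collision probability is negligible in practice.
Guid id = Guid.NewGuid();
Console.WriteLine(id); // e.g. "f47ac10b-58cc-4372-a567-0e02b2c3d479"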
There must be some kind of database involved in this large-scale system, so add an auto-incrementing integer to your database. Dish out these unique integers without worry if some are never used.
Meteor uses its internal Random package to generate Mongo-Ids for the documents, where the used set of characters is defined as:
var UNMISTAKABLE_CHARS = "23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz";
The method description for Random.id also states:
Return a unique identifier, such as "Jjwjg6gouWLXhMGKW", that is likely to be unique in the whole world.
which is defined for the default length of an Id (17 chars; each one of UNMISTAKABLE_CHARS).
Now I would like to use only the first N characters of the Id to shorten my URLs (which include the Ids in order to dynamically load pages that require a specific document, determined by that Id).
So if my original Id is
`v5sw59HEdX9KM5KQE`
I would like to use, for example (consider a randomly picked N=5 here):
{
  _id: "v5sw59HEdX9KM5KQE",
  short: "v5sw5"
}
as document schema and fetch the respective document by this Id using { short } as query in my Mongo.Collection.
Now my question is: how many characters are sufficient to prevent collisions if between 5000 and 10000 documents (thus Ids) are to be considered?
Note: I have some tools on entropy calculation and all these values (character set, length of the original Ids, number of documents) in front of me but I don't know how to wire this all up to safely calculate N.
If I understand correctly: besides the normal 17-character id generated for your documents' _id, you would like a shorter id so that URLs typically look less scary when they contain that id.
In your example you truncate the id, hence creating an explicit association between your shorter id and the original document id.
This sounds like git shorten commit hash: How does Git(Hub) handle possible collisions from short SHAs?
You could follow a similar path, i.e. first determine an initial default length that is reasonable to avoid probable collision (as explained in Peter O.'s answer), but explicitly check for uniqueness server-side and increase the length of any new shorten version in case of collision, until it becomes unique again.
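A sketch of that grow-on-collision idea in C# (ExistsInDb is a hypothetical lookup that checks your collection for an existing short id):

// Start from a reasonable default prefix length and grow it on collision,
// so every stored short id stays unique.
string ShortenId(string fullId, int defaultLength)
{
    for (int n = defaultLength; n <= fullId.Length; n++)
    {
        string candidate = fullId.Substring(0, n);
        if (!ExistsInDb(candidate)) // hypothetical uniqueness check against the DB
            return candidate;
    }
    return fullId; // the full id is unique by construction
}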
Generating identifiers at random already runs the risk, at least in theory, of generating a duplicate identifier. For the default length of MongoIDs (assuming there are 55^17 of them), the chance of having a duplicate MongoID reaches 50% after generating almost 731156 billion random MongoIDs (see "Birthday problem"), so the chance of a duplicate is negligible in practice for most applications.
Shortening a random identifier will make the collision problem even worse. In this case, for an ID length of 5 characters (resulting in 55^5 = 503284375 different IDs), the chance of having a duplicate MongoID reaches 50% after generating only about 26415 random IDs.
Since it appears that you can't control how MongoIDs are generated as easily as you can control how shortened "unique IDs" are generated, one thing you can do is the following:
Create a document that assigns each MongoID to a uniquely assigned number (such as a monotonically increasing counter).
To make the numbers assigned this way "look random", feed each number to a so-called "full-period" linear congruential generator to get a unique but "randomized" number within the generator's period.
The numbers (encoded similarly to MongoID strings) can then serve as short identifiers for your purposes (see the sketch after this list).
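A minimal C# sketch of steps 2-3. Instead of iterating a full-period LCG, it applies a single affine step x -> (A*x + C) mod M, which is a bijection on 0..M-1 whenever gcd(A, M) = 1 - a simplification of the same idea; the constants below are illustrative:

using System;
using System.Text;

class ShortIdFromCounter
{
    // Meteor's alphabet (55 characters, from the question above).
    const string Chars = "23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz";
    const long M = 503284375L; // 55^5: the space of 5-character ids
    const long A = 55001;      // gcd(A, M) = 1, so the affine map below is a bijection
    const long C = 7;

    // Map a monotonically increasing counter (0 <= counter < M) to a unique,
    // "randomized" number in the same range.
    static long Permute(long counter) => (A * counter + C) % M;

    // Encode the number in base 55 with the alphabet above, padded to 5 chars.
    static string Encode(long value)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 5; i++)
        {
            sb.Insert(0, Chars[(int)(value % 55)]);
            value /= 55;
        }
        return sb.ToString();
    }

    static void Main()
    {
        for (long counter = 0; counter < 3; counter++)
            Console.WriteLine(Encode(Permute(counter))); // three distinct short ids
    }
}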
But consider whether you can live with short identifiers created this way being predictable; if you need unpredictability, identifiers this short hardly achieve that goal anyway.
If you wish to go the route of using shortened MongoIDs, see "Birthday problem" for formulas you can use to estimate how many random numbers it takes for the risk of collision to remain tolerable.
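For reference, the usual birthday-problem approximation, with d equally likely IDs and n generated IDs:

p(\text{collision}) \approx 1 - e^{-n^2/(2d)}, \qquad n_{50\%} \approx \sqrt{2d \ln 2}

Worked for the numbers in the question, with d = 55^5 = 503284375: n = 5000 gives p \approx 2.5\% and n = 10000 gives p \approx 9.5\%, so 5 characters are risky for a collection that size; each extra character multiplies d by 55 and shrinks the exponent accordingly.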
For further information on how Meteor generates MongoIDs, see also this question; one of its answers includes a way you can have MongoDB generate MongoIDs on the server rather than have Meteor do so on the client. It appears, too, that Meteor doesn't check the MongoIDs it generates for uniqueness before inserting them into a document.
I would argue that if you want to avoid collisions on a small collection then you don't want to use random ids, but should either go with fully deterministic IDs or at least reduce the randomness to something more controlled. Along those lines, another option to consider would be to use MONGO for idGeneration in your collection. Those IDs are generated following a known recipe. Accordingly, you could take characters 1-4 and 12 of that ID and would get a guarantee of no collisions as long as no more than N documents are created in the same second, where N is the number of characters used in MongoIDs (which I don't know offhand).
If I have a bunch of OTPs mixed together, and I know all of their generation seeds (the OTP URIs), can I group them by source URI?
I have a use case where I need the system to be 100% blind to the data relationships it is passing around.
For example: users enter OTPs from their smartphones instead of their logins, so it should be very difficult to identify entries made by one user. If data is exported out of the system that holds the OTP seeds, is it possible to re-establish an entry's ownership?
That's possible, but costly. You would need to generate codes for all the seeds you have and then check whether any of them match.
Also, there is a chance of receiving the same code for different seeds at some moment. To avoid this problem you can ask a user for several consecutive codes, which significantly decreases the possibility of codes matching just by chance.
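A sketch of that brute-force matching in C#, assuming standard RFC 6238 TOTP and that each seed's secret is already available as raw bytes (in a real otpauth:// URI the secret is Base32-encoded and would need decoding first):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

class OtpMatcher
{
    // RFC 6238 TOTP: HMAC-SHA1 over the big-endian 30-second step counter.
    static int Totp(byte[] secret, long unixTime, int digits = 6, int stepSeconds = 30)
    {
        byte[] msg = BitConverter.GetBytes(unixTime / stepSeconds);
        if (BitConverter.IsLittleEndian) Array.Reverse(msg); // RFC 4226 wants big-endian

        using (var hmac = new HMACSHA1(secret))
        {
            byte[] hash = hmac.ComputeHash(msg);
            int offset = hash[hash.Length - 1] & 0x0F; // dynamic truncation
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];
            return binary % (int)Math.Pow(10, digits);
        }
    }

    // All seeds whose current code equals the observed one; several may
    // match purely by chance, which is why consecutive codes help.
    static List<int> CandidateSeeds(int observedCode, IList<byte[]> seeds, long unixTime)
    {
        var matches = new List<int>();
        for (int i = 0; i < seeds.Count; i++)
            if (Totp(seeds[i], unixTime) == observedCode)
                matches.Add(i);
        return matches;
    }
}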
A HashSet's underlying data structure is a hash table. How does it identify duplicates, and why is it a good fit when our most frequent operation is search?
It uses the hash code of the object, which is a quickly computed integer. This hash code tries to be as evenly distributed over all potential object values as possible.
As a result, it can distribute the inserted values into an array (the hash table) with a very low probability of conflict. The search operation is then quite quick: get the hash code, access the array, compare and get the value - usually in constant time. The same actually happens when finding duplicates.
Hash code conflicts are resolved as well - there can potentially be more values for the same entry within the hash table - and that is where equals comes into play. But conflicts are rare enough that they don't affect average performance significantly.
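A minimal C# illustration (hash-based sets behave the same way in any language): the set uses GetHashCode to pick a bucket and Equals to confirm a duplicate:

using System;
using System.Collections.Generic;

// A type whose duplicates should be detected by content, not identity.
class Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }

    // Evenly distributed hash code; equal objects MUST return equal hash codes.
    public override int GetHashCode() => X * 31 + Y;

    // Equals resolves hash collisions: two distinct points may share a bucket.
    public override bool Equals(object obj) =>
        obj is Point p && p.X == X && p.Y == Y;
}

class Program
{
    static void Main()
    {
        var set = new HashSet<Point>();
        Console.WriteLine(set.Add(new Point(1, 2)));      // True: new element
        Console.WriteLine(set.Add(new Point(1, 2)));      // False: duplicate found via hash + equals
        Console.WriteLine(set.Contains(new Point(1, 2))); // True: O(1) average lookup
    }
}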
I'm implementing a leaderboard backed by DynamoDB and its Global Secondary Index, as described in their developer guide, http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
But two of the things that are very necessary for a leaderboard system are your position within it and the total count, so you can show #1 of 2000, or similar.
Using the index, the rows are sorted the correct way, and I'd assume these calls would be cheap enough to make, but I haven't yet found out how to do this from their docs. I really hope I don't have to fetch the entire table every single time to know where a person is positioned in it, or the count of the entire table (although if that's not available, the count could be delayed, calculated and stored outside the table at scheduled periods).
I know DescribeTable gives you information about the entire table, but I would be applying filters to the range key, so that wouldn't suit this purpose.
I am not aware of any efficient way to get the ranking of a player. The dumb way is to do a query starting from the player with the highest points, move downward, and keep incrementing a counter until you reach the target player. So for the user with the lowest points, you might end up scanning the whole range.
That being said, you can still get the top 100 players with no problem (the leaders). Just do a query starting from the player with the highest points, and set the query limit to 100.
Also, for a given player, you can get the 100 players around him with similar points. You just need to do two queries like:
query with hashkey="" and rangekey <= his point, limit 50
query with hashkey="" and rangekey >= his point, limit 50
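For illustration, the two queries with the AWS SDK for .NET, assuming a table Leaderboard with a GSI points-index whose hash key is game and range key is points (all names are made up for the sketch):

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class LeaderboardQueries
{
    static readonly AmazonDynamoDBClient client = new AmazonDynamoDBClient();

    // Top 100 players: query the GSI from the highest score downwards.
    static Task<QueryResponse> Top100Async() =>
        client.QueryAsync(new QueryRequest
        {
            TableName = "Leaderboard",
            IndexName = "points-index",
            KeyConditionExpression = "game = :g",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                [":g"] = new AttributeValue { S = "default" }
            },
            ScanIndexForward = false, // descending by range key (points)
            Limit = 100
        });

    // 50 players at or below a given score (combine with an ascending
    // query for "points >= :p" to get ~100 players around that player).
    static Task<QueryResponse> BelowAsync(int points) =>
        client.QueryAsync(new QueryRequest
        {
            TableName = "Leaderboard",
            IndexName = "points-index",
            KeyConditionExpression = "game = :g AND points <= :p",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                [":g"] = new AttributeValue { S = "default" },
                [":p"] = new AttributeValue { N = points.ToString() }
            },
            ScanIndexForward = false,
            Limit = 50
        });
}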
This was the exact same problem we were facing when developing our app. Following are two solutions we came up with to deal with it:
1) Query your index with ScanIndexForward set to false; that will give you the top players (assuming your score/points attribute is the range key), with a limit of 1000. Then apply the formula y = mx + b: take two data points, typically the first and the last, to find m and b, with x = points and y = rank. Based on this you can estimate a rank for a user's points (see the interpolation sketch after this list). This will not be an exact rank value, only an approximation - Google does the same when you search something in your mail: it shows an approximate count, not the exact value, on the first call.
2) Get all the records and store them in a cache until the next update. This is by far the best and least expensive approach we are using.
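A small sketch of the interpolation from solution 1, with illustrative numbers:

using System;

// Two (points, rank) pairs taken from the top-1000 query (illustrative values).
double x1 = 9800, y1 = 1;     // highest score seen -> rank 1
double x2 = 1200, y2 = 1000;  // lowest score in the page -> rank 1000

double m = (y2 - y1) / (x2 - x1); // slope: rank change per point
double b = y1 - m * x1;           // intercept

double EstimateRank(double points) => m * points + b;

Console.WriteLine(EstimateRank(5500)); // ~500: an approximation, as noted above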
The beauty of DynamoDB is that it is highly optimized for very specific (and common) use cases. The cost of this optimization is that many other use cases cannot be achieved as easily as with other databases. Unfortunately yours is one of them. That being said, there are perfectly valid and good ways to do this with DynamoDB. I happen to have built an application that has the same requirement as yours.
What you can do is enable DynamoDB Streams on your table and process item update events with a Lambda function. Every time the number of points for a user changes you re-compute their rank and update your item. Even if you use the same scan operation to re-compute the rank, this is still much better, because it moves the bulk of the cost from your read operation to your write operation, which is kind of the point of NoSQL in the first place. This approach also keeps your point updates fast and eventually consistent (the rank will not update immediately, but is guaranteed to update properly unless there's an issue with your Lambda function).
I recommend going with this approach and, once you reach scale, optimizing by caching your users by rank in something like Redis - unless you have prior experience with it and can set it up quickly, pick whatever is simplest first. If you are concerned about your leaderboard changing too often, you can reduce the cost by only re-computing the ranks of the first, say, 100 users, and schedule another Lambda function to run every several minutes, scan all users and update all their ranks at the same time.
I've decided to use GUIDs as the primary key for many of my project's DB tables. I think it is a good practice, especially with scalability, backup and restore in mind. The problem is that I don't want to use the regular GUID, and I am searching for an alternative approach. I was actually interested to know what Pinterest is using as a primary key. When you look at the URL you see something like this:
http://pinterest.com/pin/275001120966638272/
I prefer the numerical representation, even if it is stored as a string. Is there any way to achieve this?
Furthermore, YouTube also uses a different kind of hashing technique which I can't figure out:
http://www.youtube.com/watch?v=kOXFLI6fd5A
This reminds me of a URL-shortener-like scheme.
I prefer the shortest one, but I know it isn't guaranteed to be unique. I first thought about doing something like this:
// Milliseconds elapsed since the Unix epoch, fractional part included
// (DateTime.UtcNow would be safer than DateTime.Now, which is time-zone dependent).
DateTime dt1970 = new DateTime(1970, 1, 1);
DateTime current = DateTime.Now;
TimeSpan span = current - dt1970;
Console.WriteLine(span.TotalMilliseconds);
Result Example:
1350433430523.66
This prints the total milliseconds since 1970. But what happens if I have hundreds of thousands of writes per second?
I mainly prefer to avoid the BIGINT auto-increment solution, because that makes it much less of a headache to scale the DB using 3rd-party tools, and makes backup/restore less problematic, since I can transfer data between servers and such if I want.
Another, more tailored approach is to fit the solution to my application. In the database, the primary key will also contain the username (unique and unchangeable by the user), so I can combine the numerical value of the name with the millisecond number, which gives me a unique numerical string. Because a single user doesn't insert data at such a high rate, this numerical ID is guaranteed to be unique. I could even remove the last 5 digits and still get a unique ID, because I assume a user won't insert data at more than 1 row per second at most, but I probably won't do that (what do you think about this idea?).
So I ask for your help. My data is expected to grow very big: 2TB a year, with tens of thousands of new rows each second. I want the URLs to look as "friendly" as possible, and I prefer not to use the 'regular' GUID.
I am developing my app using ASP.NET 4.5 and MySQL
Thanks.
Collision Table
For YouTube-like GUIDs you can see this answer. They are basically keeping a database table of all the random video IDs they generate. When they request a new one, they check the table for collisions. If they find a collision, they try to generate a new one.
Long Primary Keys
You could use a long (e.g. 275001120966638272) as a primary key, however if you have multiple servers generating unique identifiers you'll have to partition them somehow or introduce a global lock, so each server doesn't generate the same unique identifier.
Twitter Snowflake ID's
One solution to the partitioning problem with long IDs is to use Snowflake IDs. This is what Twitter uses to generate its IDs. All generated IDs are made up of the following parts:
Epoch timestamp in millisecond precision - 41 bits (gives us 69 years with a custom epoch)
Configured machine id - 10 bits (gives us up to 1024 machines)
Sequence number - 12 bits (A local counter per machine that rolls over every 4096)
One extra bit is reserved for future purposes. Since the IDs use the timestamp as the first component, they are time-sortable (which is very important for query performance).
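A minimal C# sketch of composing such an ID with the bit layout above (a simplification, not Twitter's actual implementation; the epoch and machine id are placeholders):

using System;

class SnowflakeSketch
{
    const long CustomEpochMs = 1288834974657L; // any fixed custom epoch works
    const long MachineId = 1;                  // 10 bits: 0..1023, unique per machine
    static readonly object Gate = new object();
    static long lastTimestamp = -1;
    static long sequence = 0;

    public static long NextId()
    {
        lock (Gate)
        {
            long ts = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() - CustomEpochMs;
            if (ts == lastTimestamp)
            {
                sequence = (sequence + 1) & 0xFFF; // 12-bit counter rolls over at 4096
                if (sequence == 0)
                    while (ts <= lastTimestamp)    // counter exhausted: wait for next ms
                        ts = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() - CustomEpochMs;
            }
            else sequence = 0;
            lastTimestamp = ts;

            // 41 bits timestamp | 10 bits machine | 12 bits sequence (1 bit spare)
            return (ts << 22) | (MachineId << 12) | sequence;
        }
    }
}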
Base64 Encoded GUID's
You can use ShortGuid, which encodes a GUID as a base64 string. The downside is that the output is a little ugly (e.g. 00amyWGct0y_ze4lIsj2Mw) and it's case-sensitive, which may not be good for URLs if you are lower-casing them.
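The encoding itself is simple enough to sketch without the library (this is essentially what ShortGuid does under the hood):

using System;

// Encode the GUID's 16 bytes as URL-safe base64 and drop the "==" padding,
// yielding a 22-character string.
Guid guid = Guid.NewGuid();
string shortGuid = Convert.ToBase64String(guid.ToByteArray())
    .Replace("/", "_")
    .Replace("+", "-")
    .Substring(0, 22);

// Decoding reverses the steps.
Guid decoded = new Guid(Convert.FromBase64String(
    shortGuid.Replace("_", "/").Replace("-", "+") + "=="));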
Base32 Encoded GUID's
There is also base32 encoding of GUIDs, which you can see this answer for. These are slightly longer than the ShortGuid above (e.g. lt7fz44kdqlu5pt7wnyzmu4ov4) but the advantage is that they can be all lower case.
Multiple Factors
One alternative I have been thinking about is to introduce multiple factors, e.g. if Pinterest used a username and an ID for extra uniqueness:
https://pinterest.com/some-user/1
Here the ID 1 is unique to the user some-user and could be the number of posts they've made, i.e. their next post would be 2. You could also use YouTube's approach with their video IDs, but specific to a user; this could lead to some ridiculously short URLs.
The first, simplest and most practical scheme for unique keys is the increasing numbering sequence of the write order.
This represents the record number inside one database, providing unique numbering on a local scale: this is the (often met) application-level requirement.
Next, the numerical approach based on a concatenation of time and counters is commonly used to ensure that concurrent transactions in the same batch get unique ids before writing.
When the system gets highly threaded and distributed, as in highly concurrent situations, some constraints need to be relaxed before they become a penalty for scaling.
Universally unique identifier as primary key
Yes, it's a good practice.
A key reference system can provide independence from the underlying database system.
This provides one more level of integrity for the database when the evoked scenarios occur: backup, restore, scaling, migration, and perhaps proving some authenticity.
This article Generating Globally Unique Identifiers for Use with MongoDB
by Alexander Marquardt (a Senior Consulting Engineer at MongoDB) covers the question in detail and gives some insight about database and informatics.
UUIDs are 128 bits long. They introduce an amount of entropy high enough to ensure practical uniqueness of labels. They can be represented by 32-hex-character strings: enough to write roughly 3.4 x 10^38 distinct values.
Here are a few more questions that can occur when considering the overall principle and the analysis:
should database primary keys and Uniform Resource Locators be kept as two different entities?
does this numbering destroy the sequentiality in the system?
does providing a machine host number (h), followed by a user number (u) and time (t), along with a write index (i), guarantee that the PK huti stays unique?
Now considering the DB system:
primary keys should be kept numerical (be it hexa): the database system relies on them, and this implies performance considerations;
their size should be fixed: the system must answer rapidly whether it is potentially dealing with a PK or not.
Hashids
The hashing technique of YouTube resembles Hashids.
It's a good choice:
the hashes are short and the length can be controlled,
the alphabet can be customized,
it is reversible (and as such interesting as a short reference to primary keys),
it can use a salt,
it's designed to encode positive numbers.
However, if you shorten or otherwise post-process the output, collisions become possible. They can be detected: the unique constraint is violated before they are stored, and in such a case the generation should be run again.
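For example, with the Hashids.net package (a C# port; salt, minimum length and alphabet are the knobs listed above):

using HashidsNet;

// The salt makes the mapping project-specific; minHashLength pads short outputs.
var hashids = new Hashids("my project salt", minHashLength: 8);

string encoded = hashids.Encode(347);    // something like "xGEqarYV" (salt-dependent)
int[] decoded = hashids.Decode(encoded); // [347]: reversible, so no lookup table needed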
Consider the comment to this answer to figure out how much entropy it's possible to get from a shortened sha1+b64 recipe.
Anticipating the collision scenario calls for estimating the future dimension of the database, that is, the potential number of records. Recommended reading: Z. Bloom, How Long Does An ID Need To Be?
Milliseconds since epoch
Cited from the previous article, which provides most of the answer to the problem at hand in a nicely synthetic style:
It may not be necessary for you to encode every time since 1970
however. If you are only interested in keeping recent records close to
each other, you only need enough values to ensure that you don’t have
more values with the same prefix than your database can cache at once
What you could do is convert a GUID into a numeric-only string by converting all the letters in the GUID into numbers. Here is an example of what that would look like. It's a bit long, but if that is not a problem, this could be one way of generating the keys.
1004234499987310234371029731000544986101469898102
Here is the code I used to generate the string above. I would probably recommend using a long primary key instead; although it can be a bit of a pain, it's probably a safer way to go than the function below.
string generateKey()
{
    Guid guid = Guid.NewGuid();
    var newKey = new StringBuilder();

    // Walk the 32 hex characters of the GUID (dashes removed).
    foreach (char c in guid.ToString().Replace("-", ""))
    {
        if (char.IsLetter(c))
            newKey.Append((int)c); // replace each letter with its character code, e.g. 'a' -> 97
        else
            newKey.Append(c);      // keep digits as they are
    }
    return newKey.ToString();
}
Edit:
I did some testing: when taking only the first 20 numbers, 4999978 out of 5000000 generated keys were unique. When using the first 25 numbers, it was 5000000 out of 5000000. I would recommend doing some more testing if you go with this method.