Confusion about finding information in a hash table when there is a collision - hashtable

I understand that if there is a collision in a hash table you have a few options for storing the data. You could probe the array (for example linearly, or with a step size based on some prime number) until you find a free spot. You could also rehash the entire table into a larger array. I'm sure there are other ways. What I don't understand is: if there is a collision in the first place, how would you know which row of data is the one you were looking for? Would I just not allow duplicate keys to be used?

There's a big difference between a hash and a key (although they could sometimes be the same).
The key* could be a very large number, a complex object consisting of many fields, or anything really.
You apply your hash function to this key to get a hash.
So even if you disallow duplicate keys, you could still have duplicate hashes.
You often can't use your key as the hash directly, because array indices are consecutive integers starting at 0; that won't work if your key is too large, negative, or not an integer, so you have to apply some sort of hash function.
If you want to store numbers between 1 and 10000, you would let the key be the number itself and could make the hash the remainder of the number divided by 1000 (and you'd thus have an array of size 1000 for the hash table).
Inserting 1001 will put it at index 1. If you try to insert 2001, it will also try to go to index 1 and you'll have a collision.
* The key could either be the entire value you want to store or only an identifier for it.
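To make the lookup side concrete, here is a minimal C# sketch (purely illustrative, not any particular library's implementation) of the 1-to-10000 example above with separate chaining: each bucket keeps the full key next to its value, so when 1001 and 2001 both land in bucket 1, a lookup still compares the stored keys to find the right entry.
using System;
using System.Collections.Generic;
class TinyChainedTable
{
    // 1000 buckets; each bucket is a list of the (key, value) pairs that collided there.
    private readonly List<KeyValuePair<int, string>>[] buckets =
        new List<KeyValuePair<int, string>>[1000];
    private int Hash(int key) => key % 1000;
    public void Put(int key, string value)
    {
        int i = Hash(key);
        if (buckets[i] == null)
            buckets[i] = new List<KeyValuePair<int, string>>();
        buckets[i].Add(new KeyValuePair<int, string>(key, value));
    }
    public string Get(int key)
    {
        int i = Hash(key);
        if (buckets[i] == null) return null;
        foreach (var entry in buckets[i])
            if (entry.Key == key)        // compare the stored key, not just the bucket index
                return entry.Value;
        return null;
    }
}
// Usage: Put(1001, "first") and Put(2001, "second") share bucket 1,
// yet Get(2001) still returns "second" because the stored keys are compared.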

Related

Hash Table separate chaining restore insertion order after removal

Is it possible to restore the insertion order of elements in a Hash Table with separate chaining collision resolution while preserving O(n) removal time complexity?
For example, I store Linked Lists in an array. On each insertion, I calculate the index of the list to add in and put the new value to the end of this list. Later I remove some values by calculating index and removing it from the list with that index.
After these operations, I would like to iterate over the Hash Table in the same order the values were added.
If I store keys and values in 2 additional arrays I can preserve the insertion order, but removal will take O(n) time (since removal must first find the index of the key/value in the array, which takes linear time).
Thanks a lot!
It is not possible to "iterate over the Hash Table in the same order the values were added". (For separate chaining as you describe, you can easily guarantee insertion-order iteration of each group of elements that collided at the same hash bucket, but that's a very different property.)
If I store keys and values in 2 additional arrays I can preserve the insertion order, but removal will take O(n) time (since removal must first find the index of the key/value in the array, which takes linear time)
You can have your hash map record the index, so on removal you know which insertion-order-array position to remove. But, if you compact the array (moving trailing elements up one to fill the gap) when removing an element, you end up with O(N) behaviour anyway.
If there's little churn in your data (i.e. the number of insertions done isn't much higher than the number of elements at the time of iteration), you could just overwrite the entries in the 2 additional arrays with sentinel values saying they should be ignored during iteration, without recompacting.
Ultimately though, you may be best off using a balanced binary tree keyed on a number that you increment when inserting. If the hash map stores iterators into that tree, you can remove elements easily. Most operations are O(logN).
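A rough C# sketch of that last suggestion (names are made up for illustration; SortedDictionary plays the role of the balanced binary tree keyed on an ever-increasing counter): the hash map remembers the counter assigned at insertion time, so removal is one hash lookup plus one O(log N) tree removal, and iterating the tree yields insertion order.
using System.Collections.Generic;
class InsertionOrderedMap<TKey, TValue>
{
    private long nextTicket;
    private readonly Dictionary<TKey, long> tickets = new Dictionary<TKey, long>();
    private readonly SortedDictionary<long, KeyValuePair<TKey, TValue>> byOrder =
        new SortedDictionary<long, KeyValuePair<TKey, TValue>>();
    public void Add(TKey key, TValue value)
    {
        // Note: a full version would also handle re-inserting an existing key.
        long ticket = nextTicket++;
        tickets[key] = ticket;
        byOrder[ticket] = new KeyValuePair<TKey, TValue>(key, value);
    }
    public bool Remove(TKey key)
    {
        long ticket;
        if (!tickets.TryGetValue(key, out ticket)) return false;
        tickets.Remove(key);
        return byOrder.Remove(ticket);            // O(log N) in the tree
    }
    public IEnumerable<KeyValuePair<TKey, TValue>> InInsertionOrder()
    {
        return byOrder.Values;                    // ascending ticket = insertion order
    }
}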

Alternative to GUID with Scalability in mind and Friendly URL

I've decided to use GUIDs as primary keys for many of my project's DB tables. I think it is a good practice, especially with scalability, backup and restore in mind. The problem is that I don't want to use the regular GUID and am searching for an alternative approach. I was actually interested to know what Pinterest is using as a primary key. When you look at the URL you see something like this:
http://pinterest.com/pin/275001120966638272/
I prefer the numerical representation, even if it is stored as a string. Is there any way to achieve this?
Furthermore, YouTube also uses a different kind of hashing technique which I can't figure out:
http://www.youtube.com/watch?v=kOXFLI6fd5A
This reminds me of a URL-shortener-like scheme.
I prefer the shorter one, but I know that it isn't guaranteed to be unique. I first thought about doing something like this:
DateTime dt1970 = new DateTime(1970, 1, 1);
DateTime current = DateTime.Now;
TimeSpan span = current - dt1970;
Console.WriteLine(span.TotalMilliseconds);  // prints the value shown below
Result Example:
1350433430523.66
This prints the total milliseconds since 1970, but what happens if I have hundreds of thousands of writes per second?
I mainly prefer the non-BIGINT auto-increment solution because it causes a lot less headache when scaling the DB using 3rd-party tools, and backup/restore is less problematic because I can transfer data between servers and such if I want.
Another, more tailored approach is to fit the solution to my application. In the database, the primary key will also contain the username (which is unique and can't be changed by the user), so I can combine the numerical value of the name with the millisecond number, which gives me a unique numerical string. Because a single user doesn't insert data at such a high rate, the numerical ID is guaranteed to be unique. I could even remove the last 5 digits and still get a unique ID, assuming a user won't insert data more than about once per second, but I probably won't do that (what do you think about this idea?).
So I ask for your help. My data is expected to grow very big, around 2TB a year with tens of thousands of new rows each second. I want URLs to look as "friendly" as possible, and I prefer not to use the 'regular' GUID.
I am developing my app using ASP.NET 4.5 and MySQL
Thanks.
Collision Table
For YouTube-like IDs you can see this answer. They are basically keeping a database table of all the random video IDs they generate. When they request a new one, they check the table for collisions. If they find a collision, they try to generate a new one.
Long Primary Keys
You could use a long (e.g. 275001120966638272) as a primary key; however, if you have multiple servers generating unique identifiers you'll have to partition them somehow or introduce a global lock, so that no two servers generate the same identifier.
Twitter Snowflake IDs
One solution to the partitioning problem with long IDs is to use Snowflake IDs. This is what Twitter uses to generate its IDs. All generated IDs are made up of the following parts:
Epoch timestamp in millisecond precision - 41 bits (gives us 69 years with a custom epoch)
Configured machine id - 10 bits (gives us up to 1024 machines)
Sequence number - 12 bits (A local counter per machine that rolls over every 4096)
One extra bit is reserved for future purposes. Since the IDs use the timestamp as their first component, they are time-sortable (which is very important for query performance).
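As a rough illustration of how those parts pack into a 64-bit value (a simplified sketch, not Twitter's actual code; the custom epoch and names here are invented for the example):
using System;
class SnowflakeSketch
{
    private static readonly DateTime CustomEpoch = new DateTime(2015, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    private readonly long machineId;   // 0..1023 (10 bits)
    private long sequence;             // 0..4095 (12 bits), per-machine counter
    private long lastTimestamp = -1;
    private readonly object gate = new object();
    public SnowflakeSketch(long machineId) { this.machineId = machineId; }
    public long NextId()
    {
        lock (gate)
        {
            long timestamp = (long)(DateTime.UtcNow - CustomEpoch).TotalMilliseconds;
            if (timestamp == lastTimestamp)
            {
                sequence = (sequence + 1) & 0xFFF;           // roll over at 4096
                if (sequence == 0)
                    while (timestamp <= lastTimestamp)       // sequence exhausted: wait for the next millisecond
                        timestamp = (long)(DateTime.UtcNow - CustomEpoch).TotalMilliseconds;
            }
            else
            {
                sequence = 0;
            }
            lastTimestamp = timestamp;
            // 41 bits of timestamp | 10 bits of machine id | 12 bits of sequence
            return (timestamp << 22) | (machineId << 12) | sequence;
        }
    }
}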
Base64 Encoded GUIDs
You can use ShortGuid, which encodes a GUID as a base64 string. The downside is that the output is a little ugly (e.g. 00amyWGct0y_ze4lIsj2Mw) and it's case-sensitive, which may not be good for URLs if you lower-case them.
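For reference, the underlying recipe (the standard .NET calls this kind of short GUID builds on, not the ShortGuid library's exact code) is roughly:
Guid guid = Guid.NewGuid();
string shortGuid = Convert.ToBase64String(guid.ToByteArray())
    .Replace("/", "_")     // make the base64 output URL-safe
    .Replace("+", "-")
    .TrimEnd('=');         // drop the padding (always "==" for 16 bytes)
// yields 22 characters, e.g. something shaped like 00amyWGct0y_ze4lIsj2Mw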
Base32 Encoded GUIDs
There is also base32 encoding of GUIDs, which you can see this answer for. These are slightly longer than the ShortGuid above (e.g. lt7fz44kdqlu5pt7wnyzmu4ov4), but the advantage is that they can be all lower case.
Multiple Factors
One alternative I have been thinking about is to introduce multiple factors, e.g. if Pinterest used a username and an ID for extra uniqueness:
https://pinterest.com/some-user/1
Here the ID 1 is unique to the user some-user and could be the number of posts they've made, i.e. their next post would be 2. You could also use YouTube's approach with their video IDs but scoped to a user; this could lead to some ridiculously short URLs.
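A toy sketch of that per-user counter idea (in-memory only and purely illustrative; a real version would keep the counter in the database and update it transactionally):
using System.Collections.Concurrent;
class PerUserIds
{
    private readonly ConcurrentDictionary<string, long> counters =
        new ConcurrentDictionary<string, long>();
    // Returns 1, 2, 3, ... independently for each username.
    public long NextId(string username)
    {
        return counters.AddOrUpdate(username, 1, (name, current) => current + 1);
    }
}
// NextId("some-user") == 1  ->  https://pinterest.com/some-user/1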
The first, simplest and most practical scheme for unique keys is an increasing sequence in write order. This represents the record number inside one database and provides unique numbering on a local scale: this is the (often met) application-level requirement.
Next, a numerical approach based on a concatenation of time and counters is commonly used to ensure that concurrent transactions arriving together still get unique IDs before writing.
When the system becomes highly threaded and distributed, as in highly concurrent situations, some of these constraints need to be relaxed before they become a penalty for scaling.
Universally unique identifier as primary key
Yes, it's a good practice.
A key reference system can provide independence from the underlying database system.
This adds one more level of integrity for the database in the scenarios you mention: backup, restore, scaling, migration, and perhaps proving authenticity.
The article Generating Globally Unique Identifiers for Use with MongoDB by Alexander Marquardt (a Senior Consulting Engineer at MongoDB) covers the question in detail and gives some insight into the database and engineering aspects.
UUIDs are 128 bits long. They carry enough entropy to ensure practical uniqueness of labels, and they can be represented as a 32-character hex string; 128 bits can distinguish on the order of 3.4 × 10^38 values.
Here are a few more questions that come up when considering the overall principle and the analysis:
should the database primary keys and the resource's URLs be kept as two different entities?
does this numbering destroy sequentiality in the system?
does providing a machine host number (h), followed by a user number (u) and a time (t) along with a write index (i) guarantee that the PK huti stays unique?
Now considering the DB system:
primary keys should be kept numerical (even if written in hex); the database system relies on this, and it has performance implications.
their size should be fixed, so that the system can quickly tell whether it is potentially dealing with a PK or not.
Hashids
The hashing technique of YouTube is hashids.
It's a good choice:
the hashes are short and their length can be controlled,
the alphabet can be customized,
it is reversible (and as such interesting as a short reference to the primary keys),
it can use a salt,
it's designed to hash positive numbers.
However it is a hash, and as such collisions can happen. They can be detected: the unique constraint is violated before they are stored, and in that case the generation should simply be run again.
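For illustration, usage with the Hashids.net package looks roughly like this (the namespace and the Encode/Decode calls are assumptions based on that library's documented API; treat this as a sketch and check its docs for exact signatures):
// Assumes the Hashids.net NuGet package (namespace HashidsNet).
var hashids = new HashidsNet.Hashids("this is my salt", minHashLength: 8);
string hash = hashids.Encode(12345);    // short, alphanumeric, salted
int[] back = hashids.Decode(hash);      // reversible: back[0] == 12345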
Consider the comment on this answer to figure out how much entropy it's possible to get from a shortened sha1+b64 recipe.
Anticipating the collision scenario calls for an estimate of the future size of the database, that is, the potential number of records. Recommended reading: Z. Bloom, How Long Does An ID Need To Be?
Milliseconds since epoch
Cited from the previous article, which addresses most of the problem at hand in a nicely concise style:
It may not be necessary for you to encode every time since 1970
however. If you are only interested in keeping recent records close to
each other, you only need enough values to ensure that you don’t have
more values with the same prefix than your database can cache at once
What you could do is convert a GUID into a purely numeric string by converting every letter in the GUID into a number. Here is an example of what that would look like. It's a bit long, but if that is not a problem this could be one way of generating the keys.
1004234499987310234371029731000544986101469898102
Here is the code I used to generate the string above. But I would probably recommend using a long primary key instead; although it can be a bit of a pain, it's probably a safer way to do it than the function below.
string generateKey()
{
    Guid guid = Guid.NewGuid();
    string newKey = "";
    // Walk the hex characters of the GUID (dashes removed): letters are
    // replaced by their character code, digits are kept as they are.
    foreach (char c in guid.ToString().Replace("-", ""))
    {
        if (char.IsLetter(c))
        {
            newKey += (int)c;
        }
        else
        {
            newKey += c;
        }
    }
    return newKey;
}
Edit:
I did some testing: taking only the first 20 digits, 4999978 out of 5000000 generated keys were unique. When using the first 25 digits, it was 5000000 out of 5000000. I would recommend doing some more testing if you go with this method.

Hash table vs Hash list vs Hash tree?

What property makes a hash table, a hash list and a hash tree different from each other? Which one is used when? When is a table superior to a tree?
Hashtable: it's a data structure in which you insert (key, value) pairs; the key is used to compute a hash code that decides where the value associated with that key is stored. This kind of structure is useful because computing a hash code is O(1), so you can find or place an item in constant time. (Mind that there are caveats and different implementations that change this performance slightly.)
Hashlist: it is just a list of hash codes calculated over various chunks of data. E.g. you split a file into many parts, calculate a hash code for each part, and store all of them in a list. You can then use that list to verify the integrity of the data.
Hashtree: it is similar to a hash list, but instead of a list of hashes you have a tree, so every node in the tree is a hash code calculated over its children. The leaves are the data from which you start calculating the hash codes.
Hashtables (also called hashmaps) are often useful, while hash lists and hash trees are more specific and used for particular purposes.
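To make the hash list vs. hash tree distinction concrete, here is a small C# sketch (illustrative only, not any particular protocol's format) using SHA-256: the hash list is simply one hash per chunk, while the hash tree repeatedly hashes pairs of child hashes until a single root remains.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
static class HashListAndTree
{
    // Hash list: one hash per chunk of data.
    public static List<byte[]> HashList(IEnumerable<byte[]> chunks)
    {
        using (var sha = SHA256.Create())
            return chunks.Select(chunk => sha.ComputeHash(chunk)).ToList();
    }
    // Hash tree (Merkle) root: hash pairs of child hashes until one remains.
    // Assumes at least one leaf hash.
    public static byte[] MerkleRoot(List<byte[]> leafHashes)
    {
        var level = leafHashes;
        using (var sha = SHA256.Create())
        {
            while (level.Count > 1)
            {
                var next = new List<byte[]>();
                for (int i = 0; i < level.Count; i += 2)
                {
                    byte[] left = level[i];
                    byte[] right = (i + 1 < level.Count) ? level[i + 1] : level[i]; // duplicate an odd last node
                    next.Add(sha.ComputeHash(left.Concat(right).ToArray()));
                }
                level = next;
            }
        }
        return level[0];
    }
}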

What is a hash map in programming and where can it be used

I have often heard people talking about hashing, hash maps and hash tables. I wanted to know what they are and what you can best use them for.
First you should maybe read this article.
When you use lists and you are looking for a specific item, you normally have to iterate over the complete list. This is very expensive when you have large lists.
A hashtable can be a lot faster; under the best circumstances you will get the item you are looking for with only one access.
How does it work? Like a dictionary... when you are looking for the word "hashtable" in a dictionary, you don't start with the first word under 'a'. Rather you go straight to the letter 'h', then to 'ha', 'has' and so on, until you find your word. You are using an index within your dictionary to speed up your search.
A hashtable does basically the same. Every item gets a unique index (the so-called hash). You use this hash for lookups. The hash can serve as an index into a normal array or list; for instance, your hash could be a number like 2130, which means you should look at position 2130 in your list. A lookup at a known index within a normal array is very easy and fast.
The tricky part of the whole approach is the so-called hash function which assigns this index to each item. When you are looking for an item, you should be able to calculate its index in advance. Just like in a real dictionary, where you see that the word 'hashtable' starts with the letter 'h' and therefore know its approximate position.
A good hash function provides hash codes that are evenly distributed over the space of all possible hash codes. And of course it tries to avoid collisions. A collision happens when two different items get the same hash code.
In C#, for instance, every object has a GetHashCode() method which provides a hash for it (not necessarily unique). This can be used for lookups and sorting within your dictionary.
When you start using hashtables you should always make sure that you handle collisions correctly. It can happen quite easily in large hashtables that two objects get the same hash (maybe your override of GetHashCode() is faulty, maybe something else happened).
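As a small C# illustration of that point (the Point class and its fields are made up for the example): a type used as a dictionary key should override Equals and GetHashCode consistently; the Dictionary uses the hash code to pick a bucket and Equals to resolve collisions within it.
using System;
using System.Collections.Generic;
class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
    public override bool Equals(object obj)
    {
        return obj is Point p && p.X == X && p.Y == Y;
    }
    public override int GetHashCode()
    {
        // Combine the fields; equal points must produce equal hash codes.
        return (X * 397) ^ Y;
    }
}
class Demo
{
    static void Main()
    {
        var names = new Dictionary<Point, string>();
        names[new Point(1, 2)] = "first";
        // The lookup succeeds even though this is a different object instance,
        // because Equals and GetHashCode agree on what "the same key" means.
        Console.WriteLine(names[new Point(1, 2)]);   // prints "first"
    }
}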
Basically, a HashMap allows you to store items with identifiers. They are stored in a table format, with the identifier being hashed using a hashing algorithm.
Typically they are more efficient for retrieving items than search trees etc.
You may find this helpful: http://www.relisoft.com/book/lang/pointer/8hash.html
Hope it helps,
Chris
Hashing (in the noncryptographic sense) is a blanket term for taking an input and producing an output that identifies it. A trivial example of a hash is summing the letters of a string (a=1, b=2, and so on), i.e.:
f(abc) = 6
Note that this trivial hash scheme would create a collision between the strings abc, bca, ae, etc. An effective hash scheme would produce different values for each of these strings, naturally.
Hashmaps and hashtables are data structures (like arrays and lists) that use hashing to store data. In a hashtable, a hash is produced (either from a provided key or from the object itself) that determines where in the table the object is stored. This means that as long as the user of the hashtable knows the key, retrieving the object is extremely fast.
In a list, by comparison, you would need to search through the list in some way to find the object you're after. This also represents the downside of hashtables: it is very complicated to find an object in one without knowing the key, because where the object is stored in the table bears no relation to its value or to when it was inserted.
Hash sets are similar to hashtables, but only one instance of each object is stored in them (hence no separate key needs to be provided; the object itself is the key).
This is of course a very simple explanation, so I suggest you read in depth from this point on. I hope I didn't make any silly mistakes. =)
A hashmap is used for storing data in key-value pairs. We can use a hashmap for storing objects in an application and use it later in the same application for storing, updating and deleting values. Hashmap keys and values are stored in buckets; which bucket an entry goes into is determined by a hash code function, and that hash determines where the value is stored. A detailed explanation of how a hashmap works is given in this video: https://youtu.be/iqYC1odZSNo
Hash maps save a lot of time compared to other search approaches. We have a key whose hash code helps us find its index value. In terms of implementation, a hash map takes a string, converts it into an integer and remaps that into an index into an array, which is where the required value is found.
To go into more detail, we can look at how collisions are handled in hash maps: for example, instead of a single slot per bucket we can use a linked list.
There is a short video available to understand it, available here:
Implementation example --> https://www.youtube.com/watch?v=shs0KM3wKv8
Sample:
int hashCode(String s)
{
    // Simple polynomial rolling hash (similar to Java's String.hashCode()).
    int h = 0;
    for (int i = 0; i < s.length(); i++)
        h = 31 * h + s.charAt(i);
    return h;
}

Big O of Hash Table vs. Binary Search Tree

Which would take longer:
printing all items stored in a binary search tree in sorted order, or printing all items stored in a hash table in sorted order?
It would take longer to print the items of a hash table in sorted order, because a hash table is never sorted, correct? And a BST is?
You are correct. Hashtables are ordered by some hash function, not by their natural sort order, so you'd have to extract all the entries (O(N)) and sort them (O(N log N)), whereas you can traverse a binary search tree in its natural order in O(N).
Note however that in Java, for instance, there are LinkedHashSet and LinkedHashMap, which give you some of the advantages of hashing but can be traversed in insertion order; so you could sort the data once, insert it in that order, and then traverse it in sorted order as well as looking items up by hash.
Correct, a hash table is not "sorted" in the way you want. Elements in a hash table are arranged according to the hash function, which usually produces wildly different values for similar items, so it's not a sort by any metric a human would use.
If the main thing you are doing with your collection is printing it in sorted order, you're best off using some type of BST.
A binary search tree is stored in such a way that if you do an in-order (depth-first) traversal, you will find the items in sorted order (assuming you have a consistent compare function). The Big O of simply returning the items already in the tree is the Big O of traversing the tree, i.e. O(N).
You are correct about hash tables: they are not sorted. In fact, in order to enumerate everything in a plain hash table, you have to check every bucket to see what is in there, pull it out, and then sort what you get. That's a lot of work to get a sorted list out of it.
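A quick C# illustration of the difference (SortedDictionary is a binary search tree under the hood, standing in for the BST; Dictionary is the hash table):
using System;
using System.Collections.Generic;
using System.Linq;
var hash = new Dictionary<int, string> { [42] = "b", [7] = "a", [99] = "c" };
var tree = new SortedDictionary<int, string> { [42] = "b", [7] = "a", [99] = "c" };
// Hash table: enumeration order is unspecified, so you must sort first, O(N log N).
foreach (var kv in hash.OrderBy(kv => kv.Key))
    Console.WriteLine(kv.Key);      // 7, 42, 99
// Tree: in-order enumeration is already sorted, O(N).
foreach (var kv in tree)
    Console.WriteLine(kv.Key);      // 7, 42, 99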
Correct, printing sorted data stored in a hash table would be slower because a hash table does not store sorted data. It just gives you a quick way to find a particular item; in "Big O notation", the item can be found in constant time, i.e. O(1) time.
On the other hand, you can find an item in a binary search tree in "logarithmic time" (O(log n)) because the data has already been sorted for you.
So if your goal is to print a sorted list, you are much better off having the data stored in sorted order (i.e. in a binary tree).
This brings up a couple of interesting questions. Is a search tree still faster considering the following?
Incorporating the setup time for both the hash table and the BST?
What if the hash algorithm produces a sorted list of words? Technically, you could create a hash table which uses an algorithm that does, in which case the speed of the BST vs. the hash table would come down to the amount of time it takes to fill the hash table in that sorted order.
Also check out related considerations of Skip List vs. Binary Tree: Skip List vs. Binary Tree
