How to create an efficient static hash table?

I need to create small-to-mid sized static hash tables, typically with 5-100 entries. All key hashes are known up-front, when the table is created (i.e. the keys are already hashes). Currently, instead of a HashMap, I sort the keys and binary-search, which gives O(log n) lookup -- 3-5 probes on average for the sizes I care about. Wikipedia claims that a simple hash table with chaining (taking hash % n as the first entry and chaining from there) results in 3 lookups on average for a full table, so that's not yet worth the trouble for me. Given that I know all hashes up-front, it seems there should be an easy way to get a fast, static perfect hash -- i.e. amortized O(1) access with no (or little) additional overhead -- but I couldn't find a good pointer on how. How should I implement such a static table?
Memory usage is important, so the less I need to store, the better.
Edit: Notice that it's fine if I have to resolve one collision or so manually. I.e. if I could do some chaining which on average gives direct access and worst-case 3 indirections, for instance, that's fine. It's not that I need a perfect hash.
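For reference, a minimal sketch of the sorted-keys approach the question describes (assuming 32-bit key hashes and a parallel value array; all names here are illustrative):

#include <stddef.h>
#include <stdint.h>

/* keys[] sorted ascending, values[] parallel to it. Binary search
   gives O(log n) lookup: 3-5 probes for 5-100 entries. */
const int *lookup(const uint32_t *keys, const int *values,
                  size_t n, uint32_t key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (keys[mid] < key) lo = mid + 1;
        else hi = mid;
    }
    return (lo < n && keys[lo] == key) ? &values[lo] : NULL;
}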

For c or c++ you can use gperf
GNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
GNU gperf is highly customizable. There are options for generating C or C++ code, for emitting switch statements or nested ifs instead of a hash table, and for tuning the algorithm employed by gperf.
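For illustration, a minimal gperf input file (the name keywords.gperf is just a placeholder) simply lists the keywords between %% delimiters; by default gperf emits a C lookup function named in_word_set:

%%
january
february
march
%%

Running gperf keywords.gperf > keyword_hash.c then produces the perfect hash function and its table as C code, ready to compile into your project.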

Small hashes are also possible in C without an external lib using the pre-processor, for example:
switch (hash_string(*p))
{
case HASH_S16("test"):
...
break;
case HASH_S256("An example with a long text!!!!!!!!!!!!!!!!!"):
...
break;
}
Have a look at the code at http://www.heeden.nl/statichashc.htm

You can use Sux4j to generate a minimal perfect hash in Java or C++. (I'm not sure you are using Java, but you mentioned HashMap, so I'm assuming.) For C, you can use the cmph library.

Related

What are some ways to prevent deliberate malicious attacks against hash function implementations?

Say you have some software server that uses hash functions, and some external source wants to exploit that: it keeps attacking the server using keys that it knows (or believes with high probability) will result in collisions. How would you prevent this in practice?
I think one way is to choose the hash function randomly at the beginning of the problem, but this method seems slow in the sense that every time you change hash functions you have to rehash everything.
As you obviously realise, the best defence is to make sure they don't know what your hash function will produce - ideally not your bucket count either (if the hash function is strong, hard to reverse and produces a large range of outputs - such as say 64-bit unsigned integers - then finding two keys that produce the same hash may be time consuming, but finding a value that will hash to a specific bucket after modding by N only needs on average N attempts with any random, distinct keys).
choose the hash function randomly at the beginning of the problem, but this method seems slow in the sense that every time you change hash functions you have to rehash everything.
There's not necessarily a need to repeatedly change the hash function... you just need to make it unguessable based on exposed data/code and observable behaviours. For example, you might generate a random seed value on your server, write that to a secure file somewhere, and use it as a seed for your hash function (or if your hash function doesn't support a seed value, just XOR the hash output with the random value). Even if someone knows your hash function, if they don't know the seed then they can't engineer collisions.
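As a rough sketch of that idea in C (FNV-1a is used here only as a stand-in for whatever hash you already have; the point is folding in a secret, per-process seed):

#include <stddef.h>
#include <stdint.h>

/* Secret seed, e.g. read from /dev/urandom once at startup. */
static uint64_t hash_seed;

uint64_t seeded_hash(const char *key, size_t len)
{
    /* FNV-1a offset basis, XORed with the secret seed so the
       mapping of keys to buckets is unguessable without it. */
    uint64_t h = 14695981039346656037ULL ^ hash_seed;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)key[i];
        h *= 1099511628211ULL; /* FNV-1a prime */
    }
    return h;
}

Since the seed never leaves the server, an attacker observing only keys and behaviour cannot precompute colliding inputs.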
You could also count the collisions a particular client has had, and if it's obviously malicious - disconnect them and remove their keys.

Is there a universal function F that F(sha(a),sha(b)) = sha(ab)

I am faced with a need to send my data in parts, and at the same time I am expected to provide a sha256 for my WHOLE data.
Something like this: cat large_file | chunker | receiver
where receiver is an application that is expected to receive the data, possibly in chunks, each having in its header the sha256 of its payload, followed by the payload. After collecting all chunks, it is supposed to store the whole transmitted data together with the sha256 of all the data (that sha256 will be used only to rehash and confirm the integrity of the data).
Of course, the simplest thing would be for the receiver to generate the sha256 from the whole streamed data, but I was wondering if there is a simpler way: collecting the hashes of all chunks and combining them into one final hash that is the same as the hash calculated over all the data.
In other words - and I copy this from the title - I wonder if there is a function F that would receive a list of hashes of chunks of data and then generate a final hash equal to the hash computed over all the data.
And again, in other words, having this formula:
F(sha256(data[0]), sha256(data[1]), ... sha256(data[N])) = sha256(data[0..N])
What would be the function F?
Would it be a universal function or there is no such thing for the way hashing is calculated?
I suspect there is no such function, or this is too complicated a question to answer.
AFAIK there are still no known collisions for SHA-256, but I bet that once one is found - i.e. someone finds two messages m1 and m2 such that SHA-256(m1) = SHA-256(m2) - then for almost any prefix a the hashes SHA-256(a || m1) and SHA-256(a || m2) will be different. That would mean the F you ask for is not actually a function (it would have to produce different outputs for the same inputs). To put it another way, SHA-2 is susceptible to length-extension attacks but, AFAIK, not to prefixing attacks. And even if such an F were a well-defined function, its existence alone is not enough for you: you also want it to be fast, and I believe there is no such fast-to-compute function.
On the other hand SHA-256 works by splitting the original message into 512-bit chunks and processing them using a well defined process (which is based on the state from all the previous chunks) so theoretically you can modify some implementation of SHA-256 to compute two hashes at the same time (by applying the same logic to different initial states):
Hash of your application-defined chunk (using standard initial state)
Hash of all chunks up to this point (using the state passed from the previous output of the same step as the initial state).
This probably will be slightly faster than doing those things independently but I don't know whether it will be so much faster to justify such a custom implementation.
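In practice, though, the receiver doesn't need such an F at all: it can keep one running SHA-256 context and feed each chunk into it as it arrives, which is exactly the "hash of all chunks up to this point" idea without a custom implementation. A minimal sketch using OpenSSL's EVP interface (chunks assumed to arrive in order):

#include <stddef.h>
#include <openssl/evp.h>

/* Returns 1 on success; out receives the SHA-256 of the concatenation
   of all n chunks, identical to hashing the whole data at once. */
int hash_chunks(const unsigned char **chunks, const size_t *lens,
                size_t n, unsigned char out[32])
{
    unsigned int outlen = 0;
    int ok = 0;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (ctx && EVP_DigestInit_ex(ctx, EVP_sha256(), NULL)) {
        ok = 1;
        for (size_t i = 0; ok && i < n; i++)
            ok = EVP_DigestUpdate(ctx, chunks[i], lens[i]);
        ok = ok && EVP_DigestFinal_ex(ctx, out, &outlen);
    }
    EVP_MD_CTX_free(ctx);
    return ok;
}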

Hash Table Implementation - alternatives to collision detection

Other than collision detection and throwing a LinkedList in a hashtable, what are some other ways that a Hash Table can be implemented? Is collision detection the only way to achieve an efficient hash table?
Ultimately, any finite-sized, general-purpose hash table is going to have collisions. If your key is of type string, there are infinitely many possible keys, but a hash table has only a finite number of buckets, so fundamentally there have to be collisions. If you were to implement a hash table that ignores collisions, you would have a very strange, nondeterministic data structure that appears to remove elements at random.
Now, the data structure used on the backend doesn't have to be a linked list. You could implement it as a red-black tree and get O(log n) performance out of a collision. You should check out the article 5 Myths About Hash Tables and also this Stack Overflow question about HashMaps vs Maps.
Now, if you know something about your key type - say the key is a 2-character string - then there are only a finite number of possible keys. You can then create a "hash" function that converts the key to a relatively small integer and build a look-up table that is guaranteed not to have collisions.
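For example, with 2-character ASCII keys the whole key fits in 16 bits, so the "hash" can be a direct, collision-free index (a minimal sketch):

#include <stdint.h>

/* Every 2-character key maps to its own slot, so collisions are
   impossible by construction. */
static int table[1 << 16];

static unsigned key_index(const char k[2])
{
    return ((unsigned)(unsigned char)k[0] << 8) | (unsigned char)k[1];
}

/* usage: table[key_index("ab")] = 42; */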
It is important to note that a well-implemented hash table will not suffer very much from collisions. There are bigger problems in the world like world hunger (or even how to implement an efficient hash function) than the computer having to traverse three nodes in a linked list once every 5 days.
Other than collision detection and throwing a LinkedList in a hashtable, what are some other ways that a Hash Table can be implemented?
Other ways include:
having another container type linked from the nodes where elements have collided, such as a balanced binary tree or vector/array
GCC's hash table underpinning std::unordered_X uses a single singly-linked list of values, and a contiguous array of buckets containing iterators into the list; that's got some great characteristics, including optimal iteration speed regardless of the current load_factor()
using open addressing / closed hashing, which - when an insert/find/erase finds another key in the bucket it has hashed to - uses some algorithm to find another bucket to look in instead (and so on until it finds the key, a deleted element it can insert over, or an unused bucket); there are a number of options for this kind of "probing", the simplest being a try-the-next-bucket approach, another being quadratic offsets 1, 4, 9, 16..., another the use of alternative hash functions (a sketch of the simplest approach follows this list)
perfect hash functions (below)
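To make probing concrete, here is a minimal sketch of open addressing with linear probing in C (a power-of-two table of integer keys, key 0 reserved to mean "empty", no deletion or resizing):

#include <stdint.h>

#define TABLE_SIZE 64 /* power of two, so we can mask instead of mod */

typedef struct { uint32_t key; int value; } slot_t;
static slot_t slots[TABLE_SIZE]; /* key == 0 means the slot is unused */

static uint32_t hash_u32(uint32_t x) /* any decent integer mixer */
{
    x ^= x >> 16; x *= 0x45d9f3bu; x ^= x >> 16;
    return x;
}

/* Probe successive buckets until we find the key or an empty slot.
   Assumes the table never becomes completely full. */
int *find_or_insert(uint32_t key, int value)
{
    uint32_t i = hash_u32(key) & (TABLE_SIZE - 1);
    while (slots[i].key != 0 && slots[i].key != key)
        i = (i + 1) & (TABLE_SIZE - 1); /* try-the-next-bucket probing */
    if (slots[i].key == 0) { slots[i].key = key; slots[i].value = value; }
    return &slots[i].value;
}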
Is collision detection the only way to achieve an efficient hash table?
sometimes it's possible to find a perfect hash function that won't have collisions, but that's generally only true for very limited input sets, whether due to the nature of the inputs (e.g. month and year of birth of living people only has order-of a thousand possible values), or because a small number are known at compile time (e.g. a set of 200 keywords for a compiler).

Using hash as a bucket index, modulo vs bit-mask

I've been looking into hash tables, where some data is hashed and that is used for a bucket index.
Some libraries use the modulo of the hash by the bucket count, and others use a bit-mask, where only the bits covered by the mask are used (ensuring the range is not exceeded).
bitmask:
index = h->hash_func(key) & h->hash_mask;
modulo:
index = h->hash_func(key) % h->bucket_tot;
While there are obvious differences between the two, such as bucket size constraints with bit-masks, ensuring hashing gives good distribution on lower bits, speed of modulo... etc.
Are there strong reasons to choose one over another?
(I'll probably try and benchmark for my own use-case, but I'm curious what's already known on the matter.)
Note, this is simply for key:value store, (dictionary/hash/associative-array) and not security related.
Example of a dynamic resizing, chaining hash table implementation using bit-mask:
https://github.com/amadvance/tommyds/blob/master/tommyds/tommyhashdyn.c
https://github.com/GNOME/glib/blob/master/glib/ghash.c
Example using modulo:
https://www.daniweb.com/software-development/c/threads/104887/sucinct-example-of-hash-table-w-chaining
You mentioned "bucket" index so I assume you mean hash tables with separate chaining as collision resolution, in this case there is no reasons for using modulo or bit mask "stronger" that you mentioned (which BTW not so obvious, as you said).
In some languages, most notably Java/JVM-based ones, an array index is a positive signed 32-bit integer, so the maximum array size for a bit mask is 2^30. That could be insufficient, and is a strong reason to use a non-power-of-two table size and modulo, with which you can approach 2^31 - 1 (the maximum signed 32-bit integer) very closely. But since you used C++ syntax, this shouldn't be a concern for you.
Also, if you meant not only separate chaining: some open addressing collision resolution algorithms require the table size to meet certain conditions. For example, if you implement double hashing, the table size should be prime, in which case you obviously should use modulo to obtain the initial index in the table.
It isn't always just about performance, either; sometimes it's about the domain of your problem. You may, for example, want to hash negative numbers. With modulo you have to write special cases to handle them; not so with a bitmask.
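A concrete illustration of that last point: in C, the % operator keeps the sign of the dividend, so a negative hash needs a fix-up before it can be used as an index, whereas a mask always yields a valid bucket:

#include <stdio.h>

int main(void)
{
    int h = -7;
    printf("%d\n", h & 7);  /* prints 1: always a valid index 0..7 */
    printf("%d\n", h % 8);  /* prints -7: out of range without a fix-up */
    /* For unsigned hashes and a power-of-two size n, the two agree:
       (u % n) == (u & (n - 1)); the difference only shows up with
       signed negatives or non-power-of-two table sizes. */
    return 0;
}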

What's the point of a hash table?

I don't have experience with hash tables outside of arrays/dictionaries in dynamic languages, so I recently found out that internally they're implemented by making a hash of the key and using that to store the value. What I don't understand is why aren't the values stored with the key (string, number, whatever) as the, well, key, instead of making a hash of it and storing that.
This is a near duplicate: Why do we use a hashcode in a hashtable instead of an index?
Long story short, you can check if a key is already stored VERY quickly, and equally rapidly store a new mapping. Otherwise you'd have to keep a sorted list of keys, which is much slower to store and retrieve mappings from.
What is a hash table?
A hash table (also known as a hash map) is a data structure used to implement an associative array: a structure that can map keys to values.
How does it work?
A hash table uses a hash function to compute an index into an array of buckets or slots, from which the correct value can be found.
Advantages:
In a well-dimensioned hash table, the average cost for each lookup is independent of the number of elements stored in the table.
Many hash table designs also allow arbitrary insertions and deletions of key-value pairs.
In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure.
Disadvantages:
Hash tables are not effective when the number of entries is very small. (However, in some cases the high cost of computing the hash function can be mitigated by saving the hash value together with the key.)
Uses:
They are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches and sets.
What I don't understand is why aren't the values stored with the key (string, number, whatever) as the, well, key
And how do you implement that?
Computers know only numbers. A hash table is a table, i.e. an array, and when we get right down to it, an array can only be addressed via an integral nonnegative index. Everything else is trickery. Dynamic languages that let you use string keys – they use trickery.
And one such trickery, and often the most elegant, is just computing a numerical, reproducible “hash” number of the key and using that as the index.
(There are other considerations such as compaction of the key range but that’s the foremost issue.)
In a nutshell: hashing allows O(1) queries/inserts/deletes on the table. OTOH, a sorted structure (usually implemented as a balanced BST) makes the same operations take O(log n) time.
Why take a hash, you ask? How do you propose to store the key "as the key"? Ask yourself this: if you plan to store plain (key, value) pairs, how fast will your lookups/insertions/deletions be? Will you be running an O(n) loop over the entire array/list?
The whole point of having a hash value is that it allows all keys to be transformed into a finite set of hash values. This allows us to store keys in slots of a finite array (enabling fast operations - instead of searching the whole list you only search those keys that have the same hash value) even though the set of possible keys may be extremely large or infinite (e.g. keys can be strings, very large numbers, etc.) With a good hash function, very few keys will ever have the same hash values, and all operations are effectively O(1).
This will probably not make much sense if you are not familiar with hashing and how hashtables work. The best thing to do in that case is to consult the relevant chapter of a good algorithms/data structures book (I recommend CLRS).
The idea of a hash table is to provide direct access to its items. That is why it calculates the "hash code" of the key and uses it to store the item, instead of the key itself.
The idea is to have only one hash code per key. Often the hash function divides by a prime number and uses the remainder as the hash code.
For example, suppose you have a table with 13 positions, and an integer as the key, so you can use the following hash function
f(x) = x % 13
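A quick demonstration of that function in C, including a collision (keys 5 and 18 land in the same slot):

#include <stdio.h>

static int f(int x) { return x % 13; } /* 13 slots: indices 0..12 */

int main(void)
{
    printf("%d %d %d\n", f(5), f(18), f(26)); /* prints: 5 5 0 */
    return 0;
}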
What I don't understand is why aren't the values stored with the key (string, number, whatever) as the, well, key, instead of making a hash of it and storing that.
Well, how do you propose to do that, with O(1) lookup?
The point of hashtables is basically to provide O(1) lookup by turning the key into an array index and then returning the content of the array at that index. To make that possible for arbitrary keys you need
A way to turn the key into an array index (this is the hash's purpose)
A way to deal with collisions (keys that have the same hash code)
A way to adjust the array size when it's too small (causing too many collisions) or too big (wasting space)
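Putting those pieces together, here is a minimal sketch of a chaining hash table for string keys in C (fixed bucket count; a real implementation would also grow the bucket array and rehash when the load factor gets too high, per the third point above):

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 16

typedef struct entry {
    char *key;
    int value;
    struct entry *next; /* collisions chain within a bucket */
} entry_t;

static entry_t *buckets[NBUCKETS];

/* 1. Turn the key into an array index (djb2 string hash). */
static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* 2. Collisions are handled by walking the bucket's list. */
void put(const char *key, int value)
{
    unsigned i = hash(key);
    for (entry_t *e = buckets[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }
    entry_t *e = malloc(sizeof *e);
    e->key = strdup(key);
    e->value = value;
    e->next = buckets[i];
    buckets[i] = e;
}

int *get(const char *key)
{
    for (entry_t *e = buckets[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) return &e->value;
    return NULL;
}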
Generally the point of a hash table is to store a sparse set of values - i.e. there is a large space of keys and a small number of things to store. Think about strings: there is a practically unlimited number of possible strings. If you are storing the variable names used in a program, there is a relatively small number of those possible strings that you are actually using, even though you don't know in advance which they are.
In some cases, it's possible that the key is very long or large, making it impractical to keep copies of these keys. Hashing them first allows for less memory usage as well as quicker lookup times.
A hashtable is used to store a set of values and their keys in a (for some amount of time) constant number of spots. In a simple case, let's say you wanted to save every integer from 0 to 10000 using the hash function i % 1000.
This would make a hashtable of 1000 buckets (often an array), each holding a list 10 elements deep. So if you were to search for 1234, it would immediately know to look in the bucket for 234, then start comparing to find the exact match. Granted, this isn't much better than just using an array of 10000 elements, but it's just to demonstrate.
Hashtables are very useful when you don't know exactly how many elements you'll have, but there will be many fewer collisions on the hash function than your total number of elements. (Which makes the hash function hash(x) = 0 very, very bad.) You may have empty spots in your table, but ideally a majority of them will have some data.
The main advantage of using a hash to find items in the table, as opposed to using the original key of the key-value pair (which, BTW, is typically stored in the table as well, since the hash is not reversible), is that...
...it allows mapping the whole namespace of the [original] keys to the relatively small namespace of the hash values, allowing the hash table to provide O(1) performance for retrieving items.
This O(1) performance gets a bit eroded by the extra time spent dealing with collisions and such, but on the whole the hash table is very fast for storing and retrieving items, as opposed to a system based solely on the [original] key value, which would typically be O(log N), for example with a binary tree (although such a tree is more efficient space-wise).
Also consider speed. If your key is a string and your values are stored in an array, your hash can access any element in 'near' constant time. Compare that to searching for the string and its value.
