What exactly are hashtables?

What are they and how do they work?
Where are they used?
When should I (not) use them?
I've heard the word over and over again, yet I don't know exactly what it means.
What I've heard is that they implement associative arrays by sending the array key through a hash function that converts it into an int, and then using a regular array. Am I right about that?
(Note: this is not my homework; I go to school, but they only teach us the basics in informatics.)

Wikipedia seems to have a pretty nice answer to what they are.
You should use them when you want to look up values by some index.
As for when you shouldn't use them... when you don't want to look up values by some index (for example, if all you want to ever do is iterate over them.)

You've about got it. They're a very good way of mapping from arbitrary things (keys) to arbitrary things (values). The idea is that you apply a function (a hash function) that translates the key to an index into the array where you store the values; the hash function's speed is typically linear in the size of the key, which is great when key sizes are much smaller than the number of entries (i.e., the typical case).
The tricky bit is that hash functions are usually imperfect. (Perfect hash functions exist, but tend to be very specific to particular applications and particular datasets; they're hardly ever worthwhile.) There are two approaches to dealing with this, and each requires storing the key alongside the value: one (open addressing) probes onward from the location given by the hash, following a predetermined pattern, until it finds a free slot; the other (chaining) hangs a linked list off each entry in the array, so a lookup does a linear search over what is hopefully a short list. Every production implementation whose source I've read has used chaining, with dynamic rebuilding of the hash table when the load factor gets too high.
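To make chaining concrete, here's a minimal Python sketch of a chained hash table with rebuild-on-load (Python lists stand in for the linked lists; in practice you'd just use the built-in dict):

class ChainedHashTable:
    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0

    def _bucket(self, key):
        # The hash function maps the key to an int; modulo picks the slot.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                    # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))         # chain the new entry
        self.size += 1
        if self.size > 2 * len(self.buckets):
            self._rebuild()                 # load factor too high: grow

    def get(self, key):
        for k, v in self._bucket(key):      # linear scan of a short chain
            if k == key:
                return v
        raise KeyError(key)

    def _rebuild(self):
        pairs = [p for bucket in self.buckets for p in bucket]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        self.size = 0
        for k, v in pairs:
            self.put(k, v)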

Good hash functions are one-way functions that produce a well-distributed value from any given input, so you get an effectively unique value for each input. They are also deterministic: a given input always generates the same output.
Examples of good hash functions are SHA-1 and SHA-256.
Let's say that you have a database table of users. The columns are id, last_name, first_name, telephone_number, and address.
While any of these columns could have duplicates, let's assume that no rows are exactly the same.
In this case, id is simply a unique primary key of our making (a surrogate key). The id field doesn't actually contain any user data because we couldn't find a natural key that was unique for users, but we use the id field for building foreign key relationships with other tables.
We could look up the user record like this from our database:
SELECT * FROM users
WHERE last_name = 'Adams'
AND first_name = 'Marcus'
AND address = '1234 Main St'
AND telephone_number = '555-1212';
We have to search through 4 different columns, using 4 different indexes, to find my record.
However, you could create a new "hash" column, and store the hash value of all four columns combined.
String myHash = myHashFunction("Marcus" + "Adams" + "1234 Main St" + "555-1212");
You might get a hash value like AE32ABC31234CAD984EA8.
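In Python, with SHA-256 from hashlib, computing that column might look like this sketch (myHashFunction above is pseudocode; the separator byte here is my own addition, so that ("ab", "c") and ("a", "bc") can't blur into the same input):

import hashlib

def user_hash(first_name, last_name, address, telephone_number):
    # Join with a separator so adjacent fields can't run together
    raw = "\x1f".join([first_name, last_name, address, telephone_number])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

h = user_hash("Marcus", "Adams", "1234 Main St", "555-1212")
# h is a 64-character hex digest; store it in the hash_value column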
You store this hash value as a column in the database and index on that. You now only have to search one index.
SELECT * FROM users
WHERE hash_value = 'AE32ABC31234CAD984EA8';
Once we have the id for the requested user, we can use that value to look up related data in other tables.
The idea is that the hash function offloads work from the database server.
Collisions are very unlikely with a cryptographic hash; if two users end up with the same hash, they almost certainly have identical data.

Related

Best way to model high score data in DynamoDB

I believe this would be easier with PostgreSQL or MongoDB, both of which I'm familiar with, but I'm using DynamoDB with my project for the sake of learning how to use it and getting comfortable with it. I've never used it before.
I want to use DynamoDB to store high scores for my typing test project. There are 4 data attributes to be stored:
name (doesn't need to be unique)
WPM
number of errors
test type (because I have 2 different kinds of typing tests)
At first, my partition key was testType, and my sort key was WPM. Then I realized that if anyone got the same WPM as a previous user, it would overwrite the previous user's data, because testType and WPM, the two key components, were identical. So ties did not work.
So, now, name is my partition key, and WPM is my sort key. In order to filter by testType, I just use JS array filter methods. This still doesn't seem optimal though for multiple reasons. For my small typing test project, I think it's ok, but I can see that it's possible for 2 people to input the same name and get the same WPM and overwrite each other.
What would be a better way to set this up with DynamoDB?
Assuming you want the top X many WPM results for a given test type:
Set the partition key to be the test type. Set the sort key to <WPM>#<username>. Make sure to zero-pad the WPM so it's always 3 digits even if the score is below 100; that keeps the lexicographic sort order consistent with numeric order.
With this key structure you have a sorted list (in the sort key) of all the scores for a given test type. You can Query against the test type and use ScanIndexForward=false to get descending high scores.
Notice how multiple identical scores by different usernames won’t overwrite each other. The username can be pulled from the returned sort key or from an attribute on the item, along with other metadata about the high score event.
If you have multiple users with the same username, well, that’s kinda weird. Presumably you have an internal identifier. You can use that as the suffix in the sort key instead of the username.
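A minimal boto3 sketch of that layout (the table name HighScores and the attribute names are illustrative, not from the question):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("HighScores")  # hypothetical table

def put_score(test_type, name, wpm, errors):
    table.put_item(Item={
        "testType": test_type,                # partition key
        "scoreUser": f"{wpm:03d}#{name}",     # sort key: zero-padded WPM + name
        "wpm": wpm,
        "errors": errors,
    })

def top_scores(test_type, limit=10):
    resp = table.query(
        KeyConditionExpression=Key("testType").eq(test_type),
        ScanIndexForward=False,   # descending sort key = highest WPM first
        Limit=limit,
    )
    return resp["Items"]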

Querying on Global Secondary indexes with a usage of contains operator

I've been reading the DynamoDB docs and couldn't work out whether it makes sense to query a Global Secondary Index using the 'contains' operator.
My problem is as follows: my DynamoDB document has a list of embedded objects, and every object has a 'code' field which is unique:
{
  "entities": [
    {"code": "entity1Code", "name": "entity1Name"},
    {"code": "entity2Code", "name": "entity2Name"}
  ]
}
I want to be able to get all documents that contain entities with entity.code = X.
For this purpose I'm considering adding a Global Secondary Index that would contain all the entity codes present in the current document, separated by commas. So the example above would look like:
{
  "entities": [
    {"code": "entity1Code", "name": "entity1Name"},
    {"code": "entity2Code", "name": "entity2Name"}
  ],
  "entitiesGlobalSecondaryIndex": "entityCode1,entityCode2"
}
And then I would like to apply a filter expression on entitiesGlobalSecondaryIndex, something like: entitiesGlobalSecondaryIndex contains entityCode1.
Would this be efficient, or does using a global secondary index this way not make sense, so that DynamoDB simply checks the condition against every document, much like a scan?
Any help is much appreciated, thanks.
The contains operator of a query cannot be run on a partition key. For a query to use any sort of operator (contains, begins_with, >, <, etc.) it must target a range attribute, a.k.a. your sort key.
You can very well set up a GSI with some value as your PK and this code as your SK. However, GSIs are replicas of the table, so there is a slight potential for the data in a GSI to lag behind the main table. If you don't query the GSI very often, you're probably safe from that.
However, if you are trying to do this across the entire table at once, it's no better than a scan.
If what you need is for a specific code to return all of its documents at once, you could create a GSI with that code as the PK. If you add a date field as the SK of this GSI, it will even be time-sorted. Query against that code in that index and you'll get every single one of them.
Since you may have multiple codes, and if there aren't too many per document, you could use a sparse index: if a document has an entity with code "AAAA", give it an attribute named AAAA (or AAAAflag or similar) that does not exist unless the entities list contains that code. A GSI on this AAAAflag attribute will then contain only the documents that have that entity code, ignoring every document where the attribute does not exist. This may work for you if you can also provide a good PK to keep the data well partitioned, and if you don't have too many codes.
Filter expressions, by the way, are different from all of the above. Filter expressions run on the data that would be returned, after it has already been read out of the table. This is useful if you have a multi-access-pattern setup but don't want a particular call to return every document associated with a particular PK, in the interest of keeping the data your code works with concise. A query with a filter expression still reads everything the query matches, but only returns what makes it past the filter.
If you are only querying against a particular PK at any given time and want to know whether it contains any entities with code X, a filter expression would work perfectly, as in the sketch below. Of course, this works per PK, not across your entire table.
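A hedged boto3 sketch of that per-PK filter (the table name and the PK attribute name pk are assumptions; entitiesGlobalSecondaryIndex is the comma-separated attribute from the question):

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("Documents")  # hypothetical table

resp = table.query(
    KeyConditionExpression=Key("pk").eq("some-partition"),  # assumed PK name
    FilterExpression=Attr("entitiesGlobalSecondaryIndex").contains("entityCode1"),
)
items = resp["Items"]  # capacity is still consumed for everything the query read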
If all you need is counts, you could keep a count attribute on the document, or a meta document on that partition containing these values that can be queried directly.
Lastly (and I have no idea whether this would work), if your entities attribute is a map type you may well be able to filter against the entity code, and maybe even use entities.code.contains(value) if it were an SK, but I do not know whether that is possible.

Optimal data struct for k-k-v mapping

I have a large mapping table with 1.4 billion records. The data structure is currently {<Key1, Key2>: List<Value>}.
Key1 and Value are from the same set, let's say A, with ~0.1 billion unique elements.
Key2 is from another set, let's say B, with only 32 unique elements.
List<Value> is a variable-length list with at most 200 elements.
Can someone recommend a better data structure or retrieval algorithm for quick online retrieval and reasonable space consumption?
You could use an extendible hash table for this:
https://en.wikipedia.org/wiki/Extendible_hashing
If you don't want to implement it yourself, then you could try using something like Redis or Memcached to serve as an external implementation of a persistable hash table.
To create the hashing key, just combine Key1 and Key2 (concatenate? xor?) and use that as a hash key.
If it fits in RAM, use a hash table with a dynamic array for your list. That should work well.
Unless you care about the order of the keys, hash tables should do the job.
If you want to get all Key1s associated with a Key2, you can do that as well by maintaining a separate hash table for it, as in the sketch below. Or, if you're actually implementing this, you could link the keys so that all keys containing a given Key2 form a linked list.
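A minimal in-RAM sketch in Python, with plain dicts as the hash tables and the separate Key2 index from the previous paragraph (at 1.4 billion records you would shard this or use an external store as suggested above, but the structure is the same):

from collections import defaultdict

table = {}                    # (key1, key2) -> list of values (dynamic array)
by_key2 = defaultdict(set)    # key2 -> set of key1s that appear with it

def add(key1, key2, value):
    table.setdefault((key1, key2), []).append(value)
    by_key2[key2].add(key1)

def lookup(key1, key2):
    return table.get((key1, key2), [])

def key1s_for(key2):
    return by_key2[key2]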

What is a hash table that doesn't know the keys?

For example, if I create a dictionary in Python I can use d.keys() to retrieve the keys.
What is a hash table/dictionary without this kind of access? Storage might be an issue, and the keys may be of least importance.
Edit (clarification): I want a data structure that can access values through the key but doesn't know the key, only the hash. For example:
Hash                                                             | Value
-----------------------------------------------------------------+-------
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae | hey!
c9fc5d06292274fd98bcb57882657bf71de1eda4df902c519d915fc585b10190 | hello!
If I try and access the data structure with the key "this is a key", it will hash that and get "hello!". If I try to access it with the key "foo", I will get "hey!".
We cannot retrieve the keys from this hash table, but we can access the data. This would be useful in cases where storage is important.
Normally, this would be the table:
Hash                                                             | Value  | Key
-----------------------------------------------------------------+--------+--------------
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae | hey!   | foo
c9fc5d06292274fd98bcb57882657bf71de1eda4df902c519d915fc585b10190 | hello! | this is a key
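In Python terms, the behaviour I'm after would be something like this sketch (note that the digest of "foo" matches the first row above):

import hashlib

def _h(key):
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

class HashedKeyMap:
    # Stores digest -> value; the original keys are never kept.
    def __init__(self):
        self._table = {}

    def put(self, key, value):
        self._table[_h(key)] = value

    def get(self, key):
        return self._table[_h(key)]   # re-hash the key to find the value

m = HashedKeyMap()
m.put("foo", "hey!")
m.put("this is a key", "hello!")
print(m.get("foo"))   # hey!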
This is called a Set: in this case the value is the key, and implementations generally use the hash code and equality operations on the items before adding them to the set.
Some implementations of Set can be sorted; generally those are referred to as SortedSet. Think of Set<T> as the equivalent of Dictionary<T,T> (and SortedSet<T> as the approximate equivalent of SortedDictionary<T,T>) in C# parlance.
Sorted variants are generally implemented using binary trees, whereas unsorted implementations use hash tables. As the key is the value, most implementations only store the value itself.
Which platform / language are you using? Java?
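In Python, for instance, a set behaves exactly this way: the stored item is both key and value, and lookup uses the item's own hash and equality.

s = {"hey!", "hello!"}
print("hey!" in s)    # True: hashed lookup, no separate key stored
print("nope" in s)    # False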

Delete a key-value pair in BerkeleyDB

Is there any way to delete a key-value pair in BerkeleyDB where the key starts with sub-string1 and ends with sub-string2, without iterating through all the keys in the DB?
For ex:
$sub1 = "B015";
$sub2 = "5646";
I want to delete
$key = "B015HGUJJ75646"
Note: It is guaranteed that there will be only one key for the combination of $sub1 and $sub2.
This can be done by taking an iterator over the DB and checking every key for the condition, but that will be very inefficient for large DBs. Is there any way to do it without iterating through the complete DB?
If you're using a RECNO database, you're probably out of luck. But, if you can use a BTREE, you have a couple of options.
First, and probably easiest is to iterate over only the portion of the database that makes sense. Assuming you're using the default key comparison function, you can use DB_SET_RANGE to position the starting cursor (iterator) at the start of your partial key string. In your example, this might be "B0150000000000". You then scan forwards with DB_NEXT, looking at each key in turn. When either you find the key you're looking for, or if the key you find doesn't start with "B015", you're done.
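As a sketch of that scan (in Python with the bsddb3 Berkeley DB bindings rather than Perl, and assuming the bsddb3 cursor API where navigation methods return None when no pair is found; the file name is hypothetical):

from bsddb3 import db

d = db.DB()
d.open("mydb.db", dbtype=db.DB_BTREE)   # hypothetical database file

cur = d.cursor()
pair = cur.set_range(b"B015")           # DB_SET_RANGE: first key >= "B015"
while pair is not None:
    key, _value = pair
    if not key.startswith(b"B015"):     # walked past the prefix: stop
        break
    if key.endswith(b"5646"):           # matches $sub1 ... $sub2
        cur.delete()                    # delete the pair under the cursor
        break                           # only one such key is guaranteed
    pair = cur.next()

cur.close()
d.close()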
Another technique that could be applicable to your situation is to redefine the key comparison function. If, as you state, there is only one combination of $sub1 and $sub2, then perhaps you only need to compare those sections of the keys to guarantee uniqueness? Here's an example of a full string comparison (I'm assuming you're using Perl, judging from the syntax you supplied above) from https://www2.informatik.hu-berlin.de/Themen/manuals/perl/DB_File.html :
sub Compare
{
    # DB_File passes the two keys to compare in @_
    my ($key1, $key2) = @_ ;
    # case-insensitive string comparison
    "\L$key1" cmp "\L$key2" ;
}

$DB_BTREE->{'compare'} = \&Compare ;
So, if you can rig things such that you're only comparing the starting and ending four characters, you should be able to drop the database iterator directly onto the key you're interested in.
