How to set DynamoDB range key, String or Map - amazon-dynamodb

I have a DynamoDB table with a primary hash key and a range key. The range key will be built from two attributes; say those attributes are named name1 and name2, with values value1 and value2.
Plan A: combine two attributes as string, use comma as delimiter
Primary hash key: id
Range key: value1,value2
Cons
1. the comma may break if some weird values themselves contain the delimiter
Plan B: convert map as String for range key
Primary hash key: id
Range key: "{\"name1\": \"value1\", \"name2\": \"value2\"}"
Cons
1. different SDKs may serialize the same map into different JSON strings (not sure), and I need to support reads and writes from multiple SDKs, e.g. Java and Ruby
So, which solution works better? Or are there any better suggestions?
Thanks!
Ray

You're on the right track. The AWS docs on key design promote your first suggestion, but they also warn about exactly the situation you referred to in your cons.
I don't think you would run into problems with different SDK parsers, but I also think a little precaution here would be a good idea. So instead of serializing a map to JSON with the SDK, I would manually concatenate the values with a custom function that generates a deterministic value like "name1-value1-name2-value2" or "name1:value1-name2:value2".
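For example, a minimal Python sketch of such a deterministic key builder; the percent-encoding step is my own assumption, added so a value containing the delimiter can't collide:

from urllib.parse import quote

def make_range_key(value1, value2):
    # Percent-encode each value so a literal ',' inside a value
    # cannot be confused with the delimiter itself.
    return ",".join(quote(v, safe="") for v in (value1, value2))

make_range_key("value1", "value2")   # 'value1,value2'
make_range_key("a,b", "c")           # 'a%2Cb,c' -- no ambiguity

Because percent-encoding is available and deterministic in essentially every language, Java and Ruby clients would produce the same key for the same values.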

Related

Choosing Primary key for DynamoDB

A bit of context: I am trying to build an inventory to list my AWS resources in various accounts and I am planning to use DynamoDB to store the data. These will be the columns for my table: ResourceARN, ResourceName, ResourceType, StandardTag, IsDeleted, LastUpdateTime and ResourceCreationDate ( this field is available only for a few resource types like Ec2).
Question: I want to query my DDB table using account ID, resource type, and tag name. I am stumped on choosing the primary key for the table. The full primary key has to be unique, so I cannot use a combination of resourceType and account ID, since many resources share both. Nor does using resourceArn alone as my primary key help, since that is a 1:1 relationship; likewise, using the resourceARN as the sort key does not make sense to me. I understand that I could use a simple Scan operation, but that is very costly and will only get slower as I add more data to my DDB.
I would appreciate any suggestions or guidance over the same.
Short answer
Partition key: Account ID
Sort key: <resource type>/<resource ID>
Rationale
It's a common pattern for a sort key to be a string concatenating multiple attributes. Since sort keys can be queried by prefix, you can leverage this in your queries:
Get all account resources: query all sort keys on the Account ID partition key
Get all EC2 instances of an account: query with partition key = <your account ID> and sort key begins_with('ec2-instance').
You may notice that ARNs follow such a hierarchy as well (which is probably not a coincidence). This would effectively be using a subset of the ARN as the sort key.
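For illustration, a minimal boto3 sketch of the two queries above; the table name (Resources) and attribute names (AccountId, TypeAndId) are assumptions, not from the question:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Resources")  # hypothetical table

# 1) All resources of one account: partition key only
resp = table.query(KeyConditionExpression=Key("AccountId").eq("123456789012"))

# 2) Only the EC2 instances of that account: sort-key prefix match
resp = table.query(
    KeyConditionExpression=Key("AccountId").eq("123456789012")
    & Key("TypeAndId").begins_with("ec2-instance/")
)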
Some notes:
DynamoDB deals in attributes rather than fixed columns. You don't need to include ResourceCreationDate in records that don't have it, and omitting it will save you space (see next point).
Attribute names count toward storage for every record, which impacts cost and also throughput. It's common to use shorthand names for this reason (rct instead of ResourceCreationTime, for example).
You can use LSIs (Local Secondary Indexes) to order by creation and update times if you need this.
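A sketch of what defining such an LSI could look like at table creation (boto3; all names here are hypothetical, and note that LSIs can only be created together with the table):

import boto3

boto3.client("dynamodb").create_table(
    TableName="Resources",
    AttributeDefinitions=[
        {"AttributeName": "AccountId", "AttributeType": "S"},
        {"AttributeName": "TypeAndId", "AttributeType": "S"},
        {"AttributeName": "LastUpdateTime", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "AccountId", "KeyType": "HASH"},
        {"AttributeName": "TypeAndId", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ByLastUpdateTime",
        "KeySchema": [
            {"AttributeName": "AccountId", "KeyType": "HASH"},
            {"AttributeName": "LastUpdateTime", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)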

Invalid KeySchema: The second KeySchemaElement is not a RANGE key type

In my CloudFormation script, I'm creating a DynamoDB table (Datasets) with two keys - let's call them CatalogId and DatasetId. They are both URIs that are outside of my control, but suffice it to say that together they make a unique ID.
I made both of them HASH keys in the primary KeySchema / index. When I did that, CF gave me the following error:
Invalid KeySchema: The second KeySchemaElement is not a RANGE key type
What am I doing wrong?
The answer is that only one of the keys can be a HASH key in the primary index. The second key must be of RANGE type, even if you never plan on comparing it with > or <. I'd love it if somebody could elaborate on why I can't have two HASH keys. Why doesn't Dynamo just concatenate the two keys internally and create one primary key?
As you mentioned, DynamoDB doesn't have that option. It expects the client to concatenate the values into a single String and store the result in one field (i.e. the hash key in the above case).
If you still need those attributes as separate fields, you can additionally store each of them as a non-key attribute.
Q: Are composite attribute indexes possible?
No. But you can concatenate attributes into a string and use this as a key.
Example:
First and last name as composite key
Concatenate first and last name and store that as hash key
Save the first name as a non-key attribute
Save the last name as a non-key attribute
I know it is a little redundant, but this is just a workaround to keep things clear.
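A minimal boto3 sketch of that workaround, using the first/last name example; the table name (People) and attribute names are assumptions:

import boto3

people = boto3.resource("dynamodb").Table("People")  # hypothetical table

first, last = "Marcus", "Adams"
people.put_item(
    Item={
        "NameKey": f"{first}#{last}",  # concatenated value used as the hash key
        "FirstName": first,            # also kept separately as
        "LastName": last,              # plain non-key attributes
    }
)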

What is a hash table that doesn't know the keys?

For example, if I create a dictionary in Python I can use d.keys() to retrieve the keys.
What is a hash table/dictionary without this kind of access? Storage might be an issue and the keys may be of least importance.
Edit (clarification): I want a data structure that can access values through the key but doesn't know the key, only the hash. For example:
Hash                                                             | Value
-----------------------------------------------------------------|-------
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae | hey!
c9fc5d06292274fd98bcb57882657bf71de1eda4df902c519d915fc585b10190 | hello!
If I try and access the data structure with the key "this is a key", it will hash that and get "hello!". If I try to access it with the key "foo", I will get "hey!".
We cannot retrieve the keys from this hash table, but we can access the data. This would be useful in cases where storage is important.
Normally, this would be the table:
Hash                                                             | Value  | Key
-----------------------------------------------------------------|--------|--------------
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae | hey!   | foo
c9fc5d06292274fd98bcb57882657bf71de1eda4df902c519d915fc585b10190 | hello! | this is a key
This is called a Set - in this case the value is the key, and implementations generally use the hashcode and equality operations on the items before adding them to the set.
Some implementations of Set can be sorted; generally those are referred to as SortedSet. Think of Set<T> as an equivalent of Dictionary<T,T> (and SortedSet<T> as the approximate equivalent of SortedDictionary<T,T>) in C# parlance.
Sorted variants are generally implemented using binary trees, whereas unsorted implementations use hash tables. As the key is the value, most implementations only store the value itself.
Which platform / language are you using? Java?
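For what it's worth, here is a minimal Python sketch of the structure the asker describes, keeping only sha256(key) and the value; the class and its API are my own invention for illustration:

import hashlib

class HashOnlyStore:
    """Maps sha256(key) -> value; the original keys are never stored."""
    def __init__(self):
        self._data = {}

    @staticmethod
    def _digest(key):
        return hashlib.sha256(key.encode()).hexdigest()

    def __setitem__(self, key, value):
        self._data[self._digest(key)] = value

    def __getitem__(self, key):
        return self._data[self._digest(key)]  # raises KeyError if absent

store = HashOnlyStore()
store["foo"] = "hey!"
store["this is a key"] = "hello!"
store["foo"]   # 'hey!' -- but the store has no way to list its keys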

DynamoDB ordered list

I'm trying to store a List as a DynamoDB attribute but I need to be able to retrieve the list order. At the moment the only solution I have come up with is to create a custom hash map by appending a key to the value and converting the complete value to a String and then store that as a list.
eg. key = position1, value = value1, String to be stored in the DB = "position1#value1"
To use the list I then need to filter, organise, substring, and reconvert to the original type. It seems like a long way round, but at the moment it's the only solution I can come up with.
Does anybody have any better solutions or ideas?
The List type in the newly added Document Types should help.
Document Data Types
DynamoDB supports List and Map data types, which can be nested to represent complex data structures.
A List type contains an ordered collection of values.
A Map type contains an unordered collection of name-value pairs.
Lists and maps are ideal for storing JSON documents. The List data type is similar to a JSON array, and the Map data type is similar to a JSON object. There are no restrictions on the data types that can be stored in List or Map elements, and the elements do not have to be of the same type.
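For example, with boto3 (the table and attribute names here are assumptions), a List attribute preserves insertion order, so no position prefix is needed:

import boto3

table = boto3.resource("dynamodb").Table("MyTable")  # hypothetical table

# Store the values directly as a List attribute.
table.put_item(Item={"id": "item-1", "values": ["value1", "value2", "value3"]})

item = table.get_item(Key={"id": "item-1"})["Item"]
item["values"]   # ['value1', 'value2', 'value3'], in the original order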
I don't believe it is possible to store an ordered list as an attribute, as DynamoDB only supports single-valued and (unordered) set attributes. However, the performance overhead of storing a string of comma-separated values (or some other separator scheme) is probably minimal, given that all the attributes for a row must together be under 64 KB.
(source: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html)
Add a range attribute to your primary key.
Composite Primary Key for Range Queries
A composite primary key enables you to specify two attributes in a table that collectively form a unique primary index. All items in the table must have both attributes. One serves as a “hash partition attribute” and the other as a “range attribute.” For example, you might have a “Status Updates” table with a composite primary key composed of “UserID” (hash attribute, used to partition the workload across multiple servers) and a “Time” (range attribute). You could then run a query to fetch either: 1) a particular item uniquely identified by the combination of UserID and Time values; 2) all of the items for a particular hash “bucket” – in this case UserID; or 3) all of the items for a particular UserID within a particular time range. Range queries against “Time” are only supported when the UserID hash bucket is specified.
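A boto3 sketch of query 3) from that quote; the table name, key names, and the ISO-8601 time encoding are assumptions:

import boto3
from boto3.dynamodb.conditions import Key

updates = boto3.resource("dynamodb").Table("StatusUpdates")  # hypothetical

# One user's items within a particular time range
resp = updates.query(
    KeyConditionExpression=Key("UserID").eq("user-42")
    & Key("Time").between("2015-01-01T00:00:00", "2015-01-31T23:59:59")
)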

What exactly are hashtables?

What are they and how do they work?
Where are they used?
When should I (not) use them?
I've heard the word over and over again, yet I don't know its exact meaning.
What I heard is that they allow associative arrays by sending the array key through a hash function that converts it into an int and then uses a regular array. Am I right with that?
(Notice: this is not my homework; I go to school, but they teach us only the basics in informatics)
Wikipedia seems to have a pretty nice answer to what they are.
You should use them when you want to look up values by some index.
As for when you shouldn't use them... when you don't want to look up values by some index (for example, if all you want to ever do is iterate over them.)
You've just about got it. They're a very good way of mapping from arbitrary things (keys) to arbitrary things (values). The idea is that you apply a function (a hash function) that translates the key into an index into the array where you store the values; the hash function's speed is typically linear in the size of the key, which is great when keys are much smaller than the number of entries (i.e., the typical case).
The tricky bit is that hash functions are usually imperfect. (Perfect hash functions exist, but tend to be very specific to particular applications and particular datasets; they're hardly ever worthwhile.) There are two approaches to dealing with the resulting collisions, and each requires storing the key alongside the value: one (open addressing) uses a predetermined pattern to probe onward from the hashed location in the array until it finds a free slot; the other (chaining) hangs a linked list off each entry in the array, so a lookup does a linear scan over what is hopefully a short list. The production code whose source I've read has all used chaining, with dynamic rebuilding of the hash table when the load factor is excessive.
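To make the chaining approach concrete, here is a toy Python implementation (no resizing; the bucket count and method names are arbitrary choices for illustration):

class ChainedHashTable:
    """Each bucket is a list of (key, value) pairs -- the 'chain'."""
    def __init__(self, buckets=16):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # The hash function maps an arbitrary key to an array index.
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key: extend the chain

    def get(self, key):
        for k, v in self._bucket(key):    # linear scan of one short chain
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.put("apple", 1)
t.get("apple")   # 1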
Good hash functions are one-way functions that produce a well-distributed value from any given input, so you get a more-or-less unique value for each input. They are also repeatable: a given input always generates the same output.
An example of a good hash function is SHA1 or SHA256.
Let's say that you have a database table of users. The columns are id, last_name, first_name, telephone_number, and address.
While any of these columns could have duplicates, let's assume that no rows are exactly the same.
In this case, id is simply a unique primary key of our making (a surrogate key). The id field doesn't actually contain any user data because we couldn't find a natural key that was unique for users, but we use the id field for building foreign key relationships with other tables.
We could look up the user record like this from our database:
SELECT * FROM users
WHERE last_name = 'Adams'
AND first_name = 'Marcus'
AND address = '1234 Main St'
AND telephone_number = '555-1212';
We have to search through 4 different columns, using 4 different indexes, to find the record.
However, you could create a new "hash" column, and store the hash value of all four columns combined.
String myHash = myHashFunction("Marcus" + "Adams" + "1234 Main St" + "555-1212");
You might get a hash value like AE32ABC31234CAD984EA8.
You store this hash value as a column in the database and index on that. You now only have to search one index.
SELECT * FROM users
WHERE hash_value = 'AE32ABC31234CAD984EA8';
Once we have the id for the requested user, we can use that value to look up related data in other tables.
The idea is that the hash function offloads work from the database server.
Collisions are not likely. If two users have the same hash, it's most likely that they have duplicate data.
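A small Python sketch of building such a hash column, using SHA-256 as the answer suggests; the delimiter is my own addition, so that adjacent fields can't run together and collide:

import hashlib

def row_hash(first_name, last_name, address, telephone_number):
    # '|' keeps field boundaries distinct, e.g. ('ab','c') != ('a','bc').
    joined = "|".join([first_name, last_name, address, telephone_number])
    return hashlib.sha256(joined.encode()).hexdigest()

row_hash("Marcus", "Adams", "1234 Main St", "555-1212")  # 64-char hex digest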
