Auto-increment integer in dynamodb - amazon-dynamodb

I'm modeling a DynamoDB schema for an invoice app and I'm looking to generate a unique invoice id that needs to be incremented from 1 to X. Is there, as of 2019, a solution to this kind of problem with AWS AppSync and DynamoDB as the data source?

Auto-incrementing integers are not a recommended pattern in DynamoDB, although it is possible to implement something similar using application-level logic. A DynamoDB table is distributed across many logical partitions according to the table's partition key, and items are then sorted within a partition according to their sort key. You will need to decide what structure makes sense for your app and what auto-incrementing means for it. The simplest case would be to omit a sort key and treat the auto-incremented id as the partition key. That guarantees uniqueness, but it also means each invoice gets its own partition key value, so listing all invoices would have to be a Scan, which does not preserve order; that may or may not make sense for your app.
As mentioned in this SO post (How to use auto increment for primary key id in dynamodb) you can use code like this:
const AWS = require('aws-sdk');                  // AWS SDK for JavaScript (v2)

const params = {
  TableName: 'CounterTable',
  Key: { HashKey: 'auto-incrementing-counter' },
  UpdateExpression: 'ADD #a :x',                 // atomically add 1 to the counter attribute
  ExpressionAttributeNames: { '#a': 'counter_value' },
  ExpressionAttributeValues: { ':x': 1 },
  ReturnValues: 'UPDATED_NEW'                    // return the newly incremented value
};
new AWS.DynamoDB.DocumentClient().update(params, function (err, data) {
  // data.Attributes.counter_value now holds the new id
});
to atomically increment the integer stored in the CounterTable row designated by the partition key "auto-incrementing-counter". After the atomic increment, you can use the returned id to create the new Invoice.
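As a minimal sketch of that last step (reusing params from the snippet above; the Invoices table name and InvoiceId attribute are assumptions for illustration, not from the original post), the promise-based form lets you chain the increment and the write:
// Hypothetical follow-up: write the invoice using the id returned by the atomic increment.
const docClient = new AWS.DynamoDB.DocumentClient();

async function createInvoice(invoiceData) {
  const { Attributes } = await docClient.update(params).promise(); // params from the snippet above
  const invoiceId = Attributes.counter_value;                      // the freshly incremented id

  await docClient.put({
    TableName: 'Invoices',                                         // assumed table name
    Item: { InvoiceId: invoiceId, ...invoiceData }
  }).promise();

  return invoiceId;
}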
You can implement this pattern using DynamoDB & AppSync, but the first thing to decide is whether it suits your use case. You may also be interested in the RDS integration via the RDS Data API, which would have more native support for auto-incrementing IDs but would lose out on the set-it-and-forget-it scaling of DynamoDB.

Related

Fetch last item of the aws dynamodb table

So I want to fetch the last item/row of my DynamoDB table but I am not finding resources. My primary key is id, holding a series of incremented numbers such as 1, 2, 3... for each row respectively.
This is my function.
async function readMessage() {
  const params = {
    TableName: table,
  };
  return dynamo.getItem(params).promise();
}
I am not sure what I should be adding in my params.
DynamoDB has two types of primary keys:
Partition key – A simple primary key, composed of one attribute known as the partition key.
Partition key and sort key – Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
When fetching an item by partition key, you need to specify the exact partition key. You cannot fetch the max/min partition key.
Instead, you may want to create a sort key with a timestamp (or the ID if it's a sequential number) and use the sort key to fetch the last item.
Check out the AWS docs on Choosing the Right Partition Key for more info.
The proper way to design a table in DynamoDB is based on its expected access patterns; if this is something you need, perhaps you should consider using this id as the sort key instead of the partition key, and then query the table in descending order while also limiting the number of items to 1.
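A minimal sketch of that query with the JavaScript DocumentClient, assuming the table has some partition key (here called pk) with the incrementing id as the sort key; the table and attribute names are assumptions, not from the question:
// Hypothetical sketch: fetch the most recent item by reading the sort key in descending order.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function readLastMessage(partitionKeyValue) {
  const result = await docClient.query({
    TableName: 'Messages',                       // assumed table name
    KeyConditionExpression: 'pk = :pk',          // assumed partition key name
    ExpressionAttributeValues: { ':pk': partitionKeyValue },
    ScanIndexForward: false,                     // descending order on the sort key (the id)
    Limit: 1                                     // only the last item
  }).promise();
  return result.Items[0];
}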
If instead you don't want to change the schema of your items and you don't mind making at least two operations to do this, you have two suboptimal options:
If none of your items ever gets deleted, just do a count first and use that information to know which is the latest item that was written.
Alternatively, you could consider keeping a "special" record in your DynamoDB table that is basically a counter that gets incremented/written every time one of your "other" items gets written. Upon retrieval, you first read the value of this special record and use it to retrieve the actual item.
The combination of the partition key and sort key makes up the primary key of your item in DynamoDB, so their combination must be unique; otherwise the item will be overwritten.
In almost all my use cases, I select an object attribute as the partition key, like a brand, an email, or a class, and then select the timestamp as the sort key. This way you always know the partition key, which you need in order to retrieve the values, and you can then query DynamoDB with conditions on the sort key. For more extensive examples using Python, check the AWS page https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.04.html, which shows how you can query your DynamoDB items.
There are also other ways to define the keys in DynamoDB; for those I advise you to check https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-sort-keys.html
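A rough sketch of that brand-plus-timestamp pattern with the JavaScript DocumentClient (the table name, attribute names, and range values below are made up for illustration):
// Hypothetical sketch: known partition key (Brand) plus a range condition on the timestamp sort key.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function queryBrandInRange(brand, from, to) {
  const result = await docClient.query({
    TableName: 'Products',                                    // assumed table name
    KeyConditionExpression: 'Brand = :b AND #ts BETWEEN :from AND :to',
    ExpressionAttributeNames: { '#ts': 'TimeStamp' },         // sort key; aliased since TIMESTAMP is reserved
    ExpressionAttributeValues: { ':b': brand, ':from': from, ':to': to }
  }).promise();
  return result.Items;
}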

How to future-proof these possible requirement changes (swapping primary key columns) with a DynamoDB table design?

I have the following data structure
item_id String
version String
_id String
data String
_id is simply a UUID to identify the item. There is no need to search for a row by this field yet.
As of now, item_id, an id generated by an external system, is the primary key, i.e. given the item_id, I want to be able to retrieve version, _id and data from the DynamoDB table.
item_id -> (version, _id, data)
Therefore I am setting item_id as the partition key.
I have two questions for future-proofing (evolution of) the above "schema":
In the future, if I want to incorporate version (version number of the item) into the primary key, can I just modify the table and add it to be the partition key?
If I also want to make the data searchable by _id, is it feasible to modify the table to assign _id to be the partition key (it is a unique value because it is a UUID) and reassign item_id to be a search key?
I want to avoid creating a new DynamoDB table and migrating data to create new key structures, because it may lead to downtime.
You cannot update primary keys in DynamoDB. From the docs:
You cannot use UpdateItem to update any primary key attributes. Instead, you will need to delete the item, and then use PutItem to create a new item with new attributes.
If you wanted to make data searchable by _id, you could introduce a secondary index with the _id field as the partition key of the index.
For example, if you defined a secondary index with _id as its partition key, the index would contain the same items as the base table, just presented as a different logical view keyed by _id.
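A minimal sketch of querying such an index with the JavaScript DocumentClient (the table name and the id-index GSI name are assumptions for illustration):
// Hypothetical sketch: look up an item by _id through a GSI whose partition key is _id.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getByUuid(uuid) {
  const result = await docClient.query({
    TableName: 'Items',                     // assumed table name
    IndexName: 'id-index',                  // assumed GSI with _id as its partition key
    KeyConditionExpression: '#id = :id',
    ExpressionAttributeNames: { '#id': '_id' },
    ExpressionAttributeValues: { ':id': uuid }
  }).promise();
  return result.Items[0];                   // _id is a UUID, so at most one match is expected
}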
DynamoDB doesn't currently have any native versioning functionality, so you'll have to incorporate that into your data model. Fortunately, there's lots of discussion about this use case on the web. AWS has a document of DynamoDB "Best Practices", including an example of versioning.

Choosing Primary key for DynamoDB

A bit of context: I am trying to build an inventory to list my AWS resources in various accounts and I am planning to use DynamoDB to store the data. These will be the columns for my table: ResourceARN, ResourceName, ResourceType, StandardTag, IsDeleted, LastUpdateTime and ResourceCreationDate (this field is available only for a few resource types, like EC2).
Question: I want to query my DDB table using account ID, resource type and tag name. I am stumped on choosing the primary key for the table, since the primary key should be unique and has to support a 1:many relationship. Hence, I cannot use a combination of resourceType and account Id, nor can I use resourceArn as my primary key since it is a 1:1 relationship. Also, using the resourceARN as the sort key does not make sense to me. I understand that I can use a simple scan operation, but that is very costly and will take time as I add more data to my DDB.
I would appreciate any suggestions or guidance over the same.
Short answer
Partition key: Account ID
Sort key: <resource type>/<resource ID>
Rationale
It's a common pattern for a sort key to be a string concatenating multiple attributes. Since sort keys can be queried by prefix, you can leverage this in your queries:
Get all account resources: query all sort keys on the Account ID partition key
Get all EC2 instances of an account: query with partition key = <your account ID> and sort key begins_with('ec2-instance').
You may notice that ARNs follow such a hierarchy as well (which is probably not a coincidence). This would effectively be using a subset of the ARN as the sort key.
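A rough sketch of the second query with the JavaScript DocumentClient (the table name and the AccountId/SK attribute names are assumptions for illustration):
// Hypothetical sketch: all EC2 instances of one account, via a sort-key prefix query.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function listEc2Instances(accountId) {
  const result = await docClient.query({
    TableName: 'Resources',                                    // assumed table name
    KeyConditionExpression: 'AccountId = :a AND begins_with(SK, :p)',
    ExpressionAttributeValues: {
      ':a': accountId,
      ':p': 'ec2-instance'                                     // the <resource type> prefix of the sort key
    }
  }).promise();
  return result.Items;
}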
Some notes:
DynamoDB deals in attributes rather than fixed columns: you don't need to include ResourceCreationDate on records that don't have it, and omitting it will save you space (see next point).
Attribute names count as storage for every record, which impacts cost and also throughput. It's common to use shorthand for names for this reason (rct instead of ResourceCreationTime for example).
You can use LSIs (Local Secondary Indexes) to order by creation and update times if you need this.

DynamoDB sub item filter using .Net Core API

First of all, I have a table structure like this:
Users: {
  UserId
  Name
  Email
  SubTable1: [{
    Column-111
    Column-112
  },
  {
    Column-121
    Column-122
  }]
  SubTable2: [{
    Column-211
    Column-212
  },
  {
    Column-221
    Column-222
  }]
}
As I am new to DynamoDB, I have a couple of questions regarding this:
1. Can I create a structure like this?
2. Can we set a primary key for the subtables?
3. Luckily, I found a DynamoDB helper class to do some operations on my DB:
https://www.gopiportal.in/2018/12/aws-dynamodb-helper-class-c-and-net-core.html
But I don't know how to fetch only a particular subtable.
4. Can we fetch only specific columns from my main table? I also need suggestions for the subtables.
Note: I am using .net core c# language to communicate with DynamoDB.
Can I create a structure like this?
Yes.
Can we set a primary key for the subtables?
No, the hash key can be set on top-level scalar attributes only (String, Number, etc.).
Luckily, I found a DynamoDB helper class to do some operations on my DB:
https://www.gopiportal.in/2018/12/aws-dynamodb-helper-class-c-and-net-core.html
But I don't know how to fetch only a particular subtable.
When you say subtables, I assume that you are referring to the list (array) attributes in the above sample table. In order to fetch data from a DynamoDB table, you need the hash key to use the Query API. If you don't have the hash key, you can use the Scan API, which scans the entire table and is a costly operation.
A GSI (Global Secondary Index) can be created to avoid the scan operation. However, it can be created on scalar attributes only; a GSI can't be created on a list attribute.
Another option is to redesign the table to match your query access patterns.
Can we fetch only specific columns from my main table? I also need suggestions for the subtables.
Yes, you can fetch specific columns using a ProjectionExpression. This way you get only the required attributes in the result set.
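Sketched here with the JavaScript DocumentClient rather than the .NET SDK (an equivalent ProjectionExpression parameter is available in the .NET API as well); the table and attribute names come from the question, the rest is assumed:
// Hypothetical sketch: return only selected attributes, including one of the nested lists.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getUserWithSubTable1(userId) {
  const result = await docClient.get({
    TableName: 'Users',
    Key: { UserId: userId },
    ProjectionExpression: 'UserId, Email, SubTable1'   // only these attributes are returned
  }).promise();
  return result.Item;
}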

Should I always create my DynamoDB tables using hash and range primary key type?

In the docs (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html) it states:
You can query only tables whose primary key is of hash-and-range type
and
we recommend that you design your applications so that you can use the Query operation mostly, and use Scan only where appropriate
It's not directly stated, but does this make it best practice to use hash-and-range primary keys?
EDIT:
Answer TL;DR: Use whichever primary key type that makes sense for your data model and use secondary indexes for better querying support.
References:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
http://www.allthingsdistributed.com/2013/12/dynamodb-global-secondary-indexes.html
https://forums.aws.amazon.com/thread.jspa?messageID=604862
In what situation do you use Simple Hash Keys on DynamoDB?
The choice of which key to use comes down to your Use Cases and Data Requirements for a particular scenario. For example, if you are storing User Session Data it might not make much sense to use the Range Key, since each record can be referenced by a GUID and accessed directly with no grouping requirements. In general terms, once you know the Session Id you just get the specific item by querying on the key. Another example could be storing User Account or Profile data: each user has his own and you will most likely access it directly (by User Id or something else).
However, if you are storing Order Items then the Range Key makes much more sense since you probably want to retrieve the items grouped by their Order.
In terms of the Data Model, the Hash Key allows you to uniquely identify a record from your table, and the Range Key can be optionally used to group and sort several records that are usually retrieved together. Example: If you are defining an Aggregate to store Order Items, the Order Id could be your Hash Key, and the OrderItemId the Range Key. Whenever you would like to search the Order Items from a particular Order, you just query by the Hash Key (Order Id), and you will get all your order items.
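A minimal sketch of that Order Items query with the JavaScript DocumentClient (the OrderItems table and attribute names are illustrative assumptions):
// Hypothetical sketch: fetch all items of one order by querying the hash key alone.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getOrderItems(orderId) {
  const result = await docClient.query({
    TableName: 'OrderItems',                      // assumed table name
    KeyConditionExpression: 'OrderId = :o',       // hash key; OrderItemId (range key) is left open
    ExpressionAttributeValues: { ':o': orderId }
  }).promise();
  return result.Items;
}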
You can find below a formal definition for the use of these two keys:
"Composite Hash Key with Range Key allows the developer to create a
primary key that is the composite of two attributes, a 'hash
attribute' and a 'range attribute.' When querying against a composite
key, the hash attribute needs to be uniquely matched but a range
operation can be specified for the range attribute: e.g. all orders
from Werner in the past 24 hours, or all games played by an individual
player in the past 24 hours." [VOGELS]
So the Range Key adds a grouping capability to the Data Model; however, the use of these two keys also has implications for the Storage Model:
"Dynamo uses consistent hashing to partition its key space across its
replicas and to ensure uniform load distribution. A uniform key
distribution can help us achieve uniform load distribution assuming
the access distribution of keys is not highly skewed."
[DDB-SOSP2007]
Not only does the Hash Key allow you to uniquely identify the record, it is also the mechanism that ensures load distribution. The Range Key (when used) helps indicate the records that will mostly be retrieved together, so the storage can also be optimized for such a need.
Choosing the correct keys to represent your data is one of the most critical aspects of your design process, and it directly impacts how well your application will perform and scale, and how much it will cost.
Footnotes:
The Data Model is the model through which we perceive and manipulate our data. It describes how we interact with the data in the database [FOWLER]. In other words, it is how you abstract your data: the way you group your entities, the attributes that you choose as primary keys, etc.
The Storage Model describes how the database stores and manipulates the data internally [FOWLER]. Although you cannot control this directly, you can certainly optimize how the data is retrieved or written by knowing how the database works internally.
Not necessarily. It is best to choose a primary key that supports the access patterns for your use case.
For example, let's say you want to have a table for Users. You will store the details for a single user (name, email, creator, etc.). Your access pattern might be that you are fetching the details for a specific User. In this case it makes more sense to use a primary key of type hash, with a hash key of userId.
Let's say you also want another table that stores Groups. Your access pattern might be that you want to get all members for a given group. Here, it makes more sense to use a primary key of type hash and range, with your hash and range keys respectively being groupId and userId.
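As a sketch of how those two key schemas would be declared with the JavaScript SDK's createTable (the table names, attribute names, and throughput values are illustrative assumptions):
// Hypothetical sketch: a hash-only Users table versus a hash-and-range Groups table.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();

// Users: primary key of type hash (userId only)
const usersTable = {
  TableName: 'Users',
  AttributeDefinitions: [{ AttributeName: 'userId', AttributeType: 'S' }],
  KeySchema: [{ AttributeName: 'userId', KeyType: 'HASH' }],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

// Groups: primary key of type hash and range (groupId + userId)
const groupsTable = {
  TableName: 'Groups',
  AttributeDefinitions: [
    { AttributeName: 'groupId', AttributeType: 'S' },
    { AttributeName: 'userId', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'groupId', KeyType: 'HASH' },
    { AttributeName: 'userId', KeyType: 'RANGE' }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

dynamodb.createTable(usersTable, console.log);
dynamodb.createTable(groupsTable, console.log);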
The important things to know are the differences between both key types (quote below) and the Guidelines for Working with Tables:
Hash Type Primary Key—The primary key is made of one attribute, a hash attribute. DynamoDB builds an unordered hash index on this primary key attribute. Each item in the table is uniquely identified by its hash key value.
Hash and Range Type Primary Key—The primary key is made of two attributes. The first attribute is the hash attribute and the second one is the range attribute. DynamoDB builds an unordered hash index on the hash primary key attribute, and a sorted range index on the range primary key attribute. Each item in the table is uniquely identified by the combination of its hash and range key values. It is possible for two items to have the same hash key value, but those two items must have different range key values.
You can see more about best practices in the DynamoDB Guidelines for Working with Tables documentation.
As others have already said: no, you should not.
The statement that confused you and caused you to ask this question in the first place is wrong:
You can query only tables whose primary key is of hash-and-range type
You can query tables whose primary key is of single-attribute (only partition) type.
Proof:
# Create single-attribute primary key table
aws dynamodb create-table --table-name testdb6 --attribute-definitions '[{"AttributeName": "Id", "AttributeType": "S"}]' --key-schema '[{"AttributeName": "Id", "KeyType": "HASH"}]' --provisioned-throughput '{"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}'
# Populate table
aws dynamodb put-item --table-name testdb6 --item '{ "Id": {"S": "1"}, "LastName": {"S": "Lopez"}, "FirstName": {"S": "Maria"}}'
aws dynamodb put-item --table-name testdb6 --item '{ "Id": {"S": "2"}, "LastName": {"S": "Fernandez"}, "FirstName": {"S": "Augusto"}}'
# Query table using only partition attribute
aws dynamodb query --table-name testdb6 --select ALL_ATTRIBUTES --key-conditions '{"Id": {"AttributeValueList": [{"S": "1"}], "ComparisonOperator": "EQ"}}'
Output of the last command (it works):
{
  "Count": 1,
  "Items": [
    {
      "LastName": {
        "S": "Lopez"
      },
      "Id": {
        "S": "1"
      },
      "FirstName": {
        "S": "Maria"
      }
    }
  ],
  "ScannedCount": 1,
  "ConsumedCapacity": null
}
