I have a partitioned collection with about 400k documents in a particular partition. Ideally this would be more distributed, but I need to deal with all the documents in the same partition for transaction considerations. I have a query which includes the partition key and the document id, which returns quickly with 2.58 RUs of usage.
This query is dynamic and could potentially be constructed with an IN clause to search for multiple document ids, so I added an ORDER BY to ensure the results come back in a consistent order. Adding that clause, however, caused the RU charge to skyrocket to almost 6000! Given that the WHERE clause should filter the results down to a handful before sorting, I was surprised by this. It almost seems like the ORDER BY is applied before the WHERE clause, which can't be right. Is there something under the covers with the ORDER BY clause that would explain this behavior?
Example document:
{ "DocumentType": "InventoryRecord", (PartitionKey, String) "id": "7867f600-c011-85c0-80f2-c44d1cf09f36", (DocDB assigned GUID, stored as string) "ItemNumber": "123345", (String) "ItemName": "Item1" (String) }
With a query looking like this:
SELECT * FROM c WHERE c.DocumentType = 'InventoryRecord' AND c.id = '7867f600-c011-85c0-80f2-c44d1cf09f36' ORDER BY c.ItemNumber
You should at least put a range index on ItemNumber. This should ensure ordering works as expected. In your indexing policy, the addition would look like:
{
    "path": "/ItemNumber/?",
    "indexes": [
        {
            "kind": "Range",
            "dataType": "String",
            "precision": -1
        }
    ]
}
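For context, a path entry like this sits inside the includedPaths section of the container's indexing policy. A minimal sketch of the surrounding policy, assuming the legacy kind/dataType/precision format used above (the catch-all /* path is an illustrative default, not taken from your container):

{
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [
        {
            "path": "/ItemNumber/?",
            "indexes": [
                { "kind": "Range", "dataType": "String", "precision": -1 }
            ]
        },
        {
            "path": "/*",
            "indexes": [
                { "kind": "Hash", "dataType": "String", "precision": 3 }
            ]
        }
    ],
    "excludedPaths": []
}

With a range index on /ItemNumber/?, the ORDER BY can be served from the index rather than falling back to a scan of the partition, which is likely what drives the RU charge so high.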
I have a DynamoDB table whose items have the structure below.
{
    "url": "some-url1",
    "dependencies": [
        "dependency-1",
        "dependency-2",
        "dependency-3",
        "dependency-4"
    ],
    "status": "active"
}
{
    "url": "some-url2",
    "dependencies": [
        "dependency-2"
    ],
    "status": "inactive"
}
{
    "url": "some-url3",
    "dependencies": [
        "dependency-1"
    ],
    "status": "active"
}
Here, url is defined as the partition key and there is no sort key.
The query needs to find all the records with a specific dependency and status.
For example: find all the records for which dependency-1 is present in the dependencies list and whose status is active.
So, of the records above, the 1st and 3rd should be returned.
Do I need to set a GSI on dependencies, or is this something which cannot be done in DynamoDB?
You cannot create a GSI on a nested value. You can, however, create a GSI on status, but you would need to be careful, as it has low cardinality: you could limit your throughput to 1,000 writes per second if all of the items being written to the table have the same status. Of course, if you never intend to scale that high, it's no issue.
Your other option is to use a Scan where you read your entire data set and use a FilterExpression to filter based on dependency and status.
Depending on the SDK you use, you can find some example operations here:
https://github.com/aws-samples/aws-dynamodb-examples/tree/master/DynamoDB-SDK-Examples
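As a rough sketch of the Scan approach with the AWS SDK for JavaScript v3 (the table name urls is an assumption; the attribute names come from your example items):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function findByDependencyAndStatus(dep: string, status: string) {
    // Scan reads the entire table; the filter is applied AFTER the read,
    // so you pay for every item scanned, not just the matches. Large
    // tables also need pagination via LastEvaluatedKey.
    const result = await client.send(new ScanCommand({
        TableName: "urls", // hypothetical table name
        FilterExpression: "contains(dependencies, :dep) AND #s = :status",
        ExpressionAttributeNames: { "#s": "status" }, // "status" is a reserved word
        ExpressionAttributeValues: { ":dep": dep, ":status": status },
    }));
    return result.Items;
}

// findByDependencyAndStatus("dependency-1", "active") would return records 1 and 3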
The Microsoft documentation says the RU charge is affected by document size. Is that the size of the stored document or of the retrieved document? I have a document with a lot of entries at a nested level. If I retrieve only a property at level 1, will it reduce the RU charge?
For example, the document is shown below. Consider that the associations array has more than 15,000 entries:
{
    "name": "hi",
    "data": "demo",
    "associations": [
        { "name": "assoc1" },
        { "name": "assoc2" },
        { "name": "assoc3" },
        { "name": "assoc4" },
        { "name": "assoc5" }
    ]
}
Will there be a difference in RU between the two Mongo queries below, considering the document size is 500 KB?
Query without projection:
db.getCollection("demo").find({"name":"hi"})
Query with projection:
db.getCollection("demo").find( {"name":"hi"} , {"data":true} )
I noticed a change in RU between these two queries, but I didn't see this mentioned in the documentation I searched.
If the query engine needs to traverse a large document to project the results, it will consume more RUs than when it doesn't.
The bigger issue, I think, is a document with an array of more than 15K items. Unbounded or very large arrays are generally not a good pattern for Cosmos DB, especially if they have asymmetric update patterns, because updates replace the entire document.
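If the array keeps growing, one remodeling worth considering (a sketch with illustrative field names, not the only option) is to store each association as its own small document sharing the parent's partition key:

{ "pk": "hi", "docType": "item", "name": "hi", "data": "demo" }
{ "pk": "hi", "docType": "association", "name": "assoc1" }
{ "pk": "hi", "docType": "association", "name": "assoc2" }

Reading just the parent then touches a tiny document, and adding or updating a single association no longer rewrites the full 500 KB document.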
This is my query:
db.requests
.where('userId', '==', uid)
.where('placeId', '==', placeId)
.where('endTime', '>=', Date.now())
.where('isFullfilled', '==', false);
So I manually wrote this index in firestore.indexes.json:
{
    "collectionGroup": "requests",
    "queryScope": "COLLECTION",
    "fields": [
        { "fieldPath": "userId", "order": "ASCENDING" },
        { "fieldPath": "placeId", "order": "ASCENDING" },
        { "fieldPath": "endTime", "order": "ASCENDING" },
        { "fieldPath": "isFullfilled", "order": "ASCENDING" }
    ]
},
When run, I get an error saying "This query requires an index", and the index it creates automatically orders the fields differently from my manually created one.
Why does it not accept my own index? Does the order of fields matter? I am not ordering query results. Is there any kind of pattern to index creation? This is really confusing and I can't find anything on this in the docs. It's really annoying to have to run every query against the cloud database to get the proper composite index field order.
Does the order of fields matter?
Yes, the order of the fields very much matters to the resulting index. I pretty much see such an index as:
Firestore creates a composite value for each document by concatenating the value of the fields in that index.
It can then only serve a query if the fields and their order exactly match (with some exceptions for subsets of fields, and cases where it can do a zig-zag merge join).
For such a query Firestore finds the first entry in the index that matches the conditions. From there it then returns contiguous results (sometimes called a slice). It does not skip any documents, nor jump to another point in the index, nor reverse the index.
The field that you order on, or do a range query on (your >=), must be last in the index.
Note that this is probably not how it really works, but the model holds up pretty well - and is in fact how we recommend implementing multi-field filtering on Firebase's other database as explained here: Query based on multiple where clauses in Firebase
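Applying that model to the index in the question: endTime is the range field (the >= condition), so it has to come last. A sketch of the reordered definition (untested against your exact project):

{
    "collectionGroup": "requests",
    "queryScope": "COLLECTION",
    "fields": [
        { "fieldPath": "isFullfilled", "order": "ASCENDING" },
        { "fieldPath": "placeId", "order": "ASCENDING" },
        { "fieldPath": "userId", "order": "ASCENDING" },
        { "fieldPath": "endTime", "order": "ASCENDING" }
    ]
}

The equality fields can generally sit in any order relative to each other; the constraint is that the range field comes after all of them.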
We are experiencing an issue when writing queries for Cosmos DB (document database): we want to add a new document property and use it in an ORDER BY clause.
If, for example, we had a set of documents like:
{
    "Name": "Geoff",
    "Company": "Acme"
},
{
    "Name": "Bob",
    "Company": "Bob Inc"
}
...and we write a query like SELECT * FROM c ORDER BY c.Name, this works fine and returns both documents.
However, if we were to add a new document with an additional property:
{
    "Name": "Geoff",
    "Company": "Acme"
},
{
    "Name": "Bob",
    "Company": "Bob Inc"
},
{
    "Name": "Sarah",
    "Company": "My Company Ltd",
    "Title": "President"
}
...and we write a query like SELECT * FROM c ORDER BY c.Title, it will return only the document for Sarah and exclude the two without a Title property.
This means that the ORDER BY clause is behaving like a filter rather than just a sort, which seems unexpected.
It seems likely that any document schema will gain properties over time. Unless we go back and add those properties to all existing documents in the container, we can never use them in an ORDER BY clause without excluding records.
Does anyone have a solution that allows the ORDER BY to only affect the sort order of the result set?
Currently, ORDER BY works off of indexed properties, and missing values are not included in the result of a query using ORDER BY.
As a workaround, you could do two queries and combine the results:
1. The current query you're doing, with ORDER BY, returning all documents containing the Title property, in order.
2. A second query, returning all documents that don't have Title defined.
The second query would look something like:
SELECT * FROM c
WHERE NOT IS_DEFINED(c.Title)
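A sketch of combining the two result sets client-side with the @azure/cosmos SDK (the connection string, database, and container names are placeholders):

import { CosmosClient } from "@azure/cosmos";

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!)
    .database("mydb")          // placeholder database id
    .container("mycontainer"); // placeholder container id

async function orderedByTitleIncludingMissing() {
    // Documents that have a Title, sorted by it.
    const { resources: withTitle } = await container.items
        .query("SELECT * FROM c WHERE IS_DEFINED(c.Title) ORDER BY c.Title")
        .fetchAll();
    // Documents without a Title, appended after the sorted ones.
    const { resources: withoutTitle } = await container.items
        .query("SELECT * FROM c WHERE NOT IS_DEFINED(c.Title)")
        .fetchAll();
    return [...withTitle, ...withoutTitle];
}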
Also note that, according to this note within the EF Core repo issue list, behavior is a bit different when using compound indexes (where documents with missing properties are returned).
Why can't I get consistent reads for global secondary indexes?
I have the following setup:
The table: tblUsers (id as hash)
Global Secondary Index: tblUsersEmailIndex (email as hash, id as attribute)
Global Secondary Index: tblUsersUsernameIndex (username as hash, id as attribute)
I query the indexes to check whether a given email or username is present, so I don't create a duplicate user.
Now, the problem is that I can't do consistent reads for queries on the indexes. But why not? This is one of the few occasions where I actually need up-to-date data.
According to AWS documentation:
Queries on global secondary indexes support eventual consistency only.
Changes to the table data are propagated to the global secondary indexes within a fraction of a second, under normal conditions. However, in some unlikely failure scenarios, longer propagation delays might occur. Because of this, your applications need to anticipate and handle situations where a query on a global secondary index returns results that are not up-to-date.
But how do I handle this situation? How can I make sure that a given email or username is not already present in the DB?
You probably already went through this:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
The short answer is that you cannot do what you want to do with global secondary indexes (i.e., they are always eventually consistent).
One solution would be to have a separate table with the attribute you're interested in as a key and do consistent reads there. You would need to ensure you update it whenever you insert new entities, and you would also have to worry about the edge case in which an insert succeeds there but not in the main table (i.e., you need to keep the two in sync).
Another solution would be to scan the whole table, but that would probably be overkill if the table is large.
Why do you care if somebody creates 2 accounts with the same email? You could just use the username as the primary hash key and just not enforce the email uniqueness.
When you call putItem, you can supply a ConditionExpression that must be satisfied for the write to succeed, which means you can check whether the email or username already exists.
ConditionExpression — (String)
A condition that must be satisfied in order for a conditional PutItem operation to succeed.
An expression can contain any of the following:
Functions: attribute_exists | attribute_not_exists | attribute_type | contains | begins_with | size
These function names are case-sensitive.
Comparison operators: = | <> | < | > | <= | >= | BETWEEN | IN
Logical operators: AND | OR | NOT
For more information on condition expressions, see Condition Expressions in the Amazon DynamoDB Developer Guide.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB.html#putItem-property
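One caveat: a ConditionExpression is evaluated only against the item with the same primary key, so to enforce email uniqueness this way the email itself must be the key, e.g. in a dedicated lookup table. A sketch with the AWS SDK for JavaScript v3 (userEmails is a hypothetical uniqueness table keyed by email):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function claimEmail(email: string, userId: string) {
    // Throws ConditionalCheckFailedException if the email is already taken.
    await client.send(new PutCommand({
        TableName: "userEmails", // hypothetical table with email as the hash key
        Item: { email, userId },
        ConditionExpression: "attribute_not_exists(email)",
    }));
}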
I ran across this recently and wanted to share an update. In 2018, DynamoDB added transactions. If you really need to keep two items (either in the same or different tables) in 100% sync with no eventual consistency to worry about, TransactWriteItems and TransactGetItems are what you need.
It's better to avoid the transaction altogether, if you can, as others have suggested.
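If you do go the transactional route, a sketch of pairing the user insert with the email claim so that both succeed or neither does (table and attribute names are assumptions carried over from the question and the sketch above):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function createUser(id: string, email: string) {
    // If either condition fails, neither item is written.
    await client.send(new TransactWriteCommand({
        TransactItems: [
            {
                Put: {
                    TableName: "tblUsers",
                    Item: { id, email },
                    ConditionExpression: "attribute_not_exists(id)",
                },
            },
            {
                Put: {
                    TableName: "userEmails", // hypothetical uniqueness table
                    Item: { email, userId: id },
                    ConditionExpression: "attribute_not_exists(email)",
                },
            },
        ],
    }));
}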
You can't have strongly consistent reads on a GSI.
What you can do is model your schema to have two rows per user, e.g.:
user#uId as pk
email#emailId as pk
with pk declared as a string attribute.
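Concretely, the two items might look like this (attribute names are illustrative):

{ "pk": "user#42", "email": "alice@example.com", "username": "alice" }
{ "pk": "email#alice@example.com", "userId": "42" }

Because the email row lives in the base table rather than in a GSI, a GetItem on pk = "email#alice@example.com" can use a strongly consistent read, and both writes can be guarded with attribute_not_exists(pk), ideally inside a single transaction.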
Depending on your situation and considering all of the alternatives, it may be acceptable to add an automatic retry when you don't find anything on the GSI the first time, to work around the lack of strongly consistent reads. I didn't even think of this until I hit roadblocks with other options, and then realized it was simple and didn't cause any issues for our particular use case.
{
    "TableName": "tokens",
    "ProvisionedThroughput": { "ReadCapacityUnits": 5, "WriteCapacityUnits": 5 },
    "AttributeDefinitions": [
        { "AttributeName": "data", "AttributeType": "S" },
        { "AttributeName": "type", "AttributeType": "S" },
        { "AttributeName": "token", "AttributeType": "S" }
    ],
    "KeySchema": [
        { "AttributeName": "data", "KeyType": "HASH" },
        { "AttributeName": "type", "KeyType": "RANGE" }
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "tokens-token",
            "KeySchema": [
                { "AttributeName": "token", "KeyType": "HASH" }
            ],
            "Projection": { "ProjectionType": "ALL" },
            "ProvisionedThroughput": { "ReadCapacityUnits": 2, "WriteCapacityUnits": 2 }
        }
    ],
    "SSESpecification": { "Enabled": true }
}
public async getByToken(token: string): Promise<TokenResponse | undefined> {
    // Can't perform a strongly consistent read on a GSI, so retry once
    // after a short delay to be reasonably sure the token doesn't exist.
    for (let tries = 1; tries <= 2; tries++) {
        const item = await this.getItemByToken(token);
        if (item) return new TokenResponse(item);
        if (tries === 1) await this.sleep(1000); // wait for GSI propagation
    }
    return undefined; // token genuinely not found
}
Since we don't care about performance for someone sending in a non-existent token (which should never happen anyway), we work around the problem without taking any performance hit (other than a possible 1 second delay one time after the token is created). If you just created the token, you wouldn't need to resolve it back to the data you just passed in. But if you happen to do that, we handle it transparently.