I have a DynamoDB table with animals, and I'm interacting with it using Dynamoose. My table has a 'UserId' attribute that indicates the user who registered that animal. I want to write a query that finds all the animals registered by the same user, i.e., gets all the items whose 'UserId' attribute matches the input string.
I'm trying to use Dynamoose's queries like this: MyModel.query('UserId').eq(user.id).exec();, but it always gives this error: Index can't be found for query. I imagine this is because it is not finding the index for the attribute 'UserId', but I do have an index 'UserId-index' on my table.
I also tried specifying the index that should be used on the query with the using() method, like this MyModel.query('UserId').eq(user.id).using('UserId-index').exec();, but it gave me this other error: Either the KeyConditions or KeyConditionExpression parameter must be specified in the request, which I don't get at all.
Note that I don't want to use scan(), as the official documentation strongly encourages developers to use query() instead.
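(For reference, Dynamoose can typically only resolve 'UserId-index' if the index is also declared in the model's schema; having it on the table alone is not enough. A minimal sketch, assuming Dynamoose v2-style syntax - the model name and hash key 'id' are hypothetical:)

const dynamoose = require('dynamoose');

// Hypothetical schema: the GSI must be declared on the attribute,
// otherwise Dynamoose cannot resolve 'UserId-index' at query time.
const AnimalSchema = new dynamoose.Schema({
  id: { type: String, hashKey: true }, // hypothetical table hash key
  UserId: {
    type: String,
    index: {
      name: 'UserId-index',
      global: true, // global secondary index
    },
  },
});

const MyModel = dynamoose.model('Animal', AnimalSchema);

// With the index declared, both of these should work:
MyModel.query('UserId').eq(user.id).exec();
MyModel.query('UserId').eq(user.id).using('UserId-index').exec();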
I've been reading the DynamoDB docs and was unable to understand whether it makes sense to query a Global Secondary Index using the 'contains' operator.
My problem is as follows: my DynamoDB document has a list of embedded objects, and every object has a 'code' field, which is unique:
{
"entities":[
{"code":"entity1Code", "name":"entity1Name"},
{"code":"entity2Code", "name":"entity2Name"}
]
}
I want to be able to get all documents that contain entities with entity.code = X.
For this purpose I'm considering adding a Global Secondary Index that would contain all the entity codes present in the current document, separated by commas. So the example above would look like:
{
"entities":[
{"code":"entity1Code", "name":"entity1Name"},
{"code":"entity2Code", "name":"entity2Name"}
],
"entitiesGlobalSecondaryIndex":"entityCode1,entityCode2"
}
And then I would like to apply a filter expression on entitiesGlobalSecondaryIndex, something like: entitiesGlobalSecondaryIndex contains entity1Code.
Would this be efficient, or does using a global secondary index this way make no sense, so that DynamoDB will simply check the condition against every document, which is similar to a scan?
Any help is much appreciated,
Thanks
The contains operator cannot be run against a partition key in a query. For a query to use any sort of comparison operator (begins_with, >, <, etc.) you must have a range attribute - aka your sort key. Note that contains itself is only allowed in filter expressions, not in key conditions.
You can very well set up a GSI with some value as your PK and this code as your SK. However, GSIs are replicas of the table - there is a slight potential for the data in a GSI to lag behind that of the master copy. If the query you're running against this GSI isn't very frequent, then you're probably safe from that.
However, if you are trying to do this against the entire table at once, then it's no better than a scan.
If what you need is for a specific code to return all its documents at once, then you could do a GSI with that as the PK. If you add a date field as the SK of this GSI, it would even be time sorted. If you query against that code in that index, you'll get every single one of them.
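A sketch of what that query might look like with the AWS SDK for JavaScript v2 DocumentClient - the table, index, and attribute names here are all hypothetical:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Query a hypothetical 'code-date-index' GSI: code as PK, date as SK.
// Returns every document for that code, sorted by date.
docClient.query({
  TableName: 'Documents',            // hypothetical table name
  IndexName: 'code-date-index',      // hypothetical GSI name
  KeyConditionExpression: 'code = :c',
  ExpressionAttributeValues: { ':c': 'entity1Code' },
  ScanIndexForward: false,           // newest first
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items);
});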
Since you may have multiple codes, if there aren't too many per document, you could maybe use a sparse index - if you have an entity with code "AAAA", then you also have an attribute named AAAA (or AAAAflag or something). It is always null/absent unless the entities list contains that code. If you create a GSI on this AAAAflag attribute, it will only contain documents that have that entity code, and ignore all documents where the attribute does not exist. This may work for you if you can also provide a good PK on this to keep the numbers well partitioned, and if you don't have too many codes.
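For illustration, two documents under this sparse-index scheme might look like the following (the AAAAflag attribute name is hypothetical); a GSI keyed on AAAAflag would contain only the first document:
{
"entities":[{"code":"AAAA", "name":"entity1Name"}],
"AAAAflag": 1
}
{
"entities":[{"code":"BBBB", "name":"entity2Name"}]
}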
Filter expressions, by the way, are different from all of the above. Filter expressions are run on the data that would be returned, after it is already read out of the table. This is useful if you have a multi-access-pattern setup, but don't want a particular call to get all the documents associated with a particular PK - in the interest of keeping the data your code works with concise. A query with a filter expression still reads everything the query matches, but only returns what makes it past the filter.
If you are only querying against a particular PK at any given time and you want to know if it contains any entities of x, then a filter expression would work perfectly. Of course, this is only per PK and not for your entire table.
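A sketch of a query with a filter expression, again with hypothetical names; note that, as described above, the filter runs after the items matching the key condition are read, so read capacity is consumed for all of them:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Query one partition, then filter; only items passing the filter are returned.
docClient.query({
  TableName: 'Documents',                           // hypothetical
  KeyConditionExpression: 'pk = :pk',
  FilterExpression: 'contains(entityCodes, :code)', // entityCodes as a string set
  ExpressionAttributeValues: { ':pk': 'user#123', ':code': 'entity1Code' },
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items);
});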
If all you need is counts, then you could keep a count attribute on the document, or a meta document on that partition that contains these values and can be queried directly.
Lastly, and I have no idea if this would work or not, if your entities attribute is a map type you might very well be able to filter against the entity code - and maybe even with entities.code.contains(value) if it were an SK - but I do not know whether that is possible.
Folks,
Given we have to store the following shopping cart data:
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
We need to run the following queries:
Give me all items (which is a list) for a specific user (easy).
Give me all users which have itemID3 (precisely my question).
How would you model this in DynamoDB?
Option 1, only have the Hash key? i.e.
HashKey(users) cartItems
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
Option 2, Hash and Range keys?
HashKey(users) RangeKey(cartItems)
userID1 ['itemID1','itemID2','itemID3']
userID2 ['itemID3','itemID2','itemID7']
userID3 ['itemID3','itemID2','itemID1']
But it seems that range keys can only be strings, numbers, or binary...
Should this be solved by having 2 tables? How would you model them?
Thanks!
Rule 1: The range key in a DynamoDB table must be scalar, which is why its type must be string, number, or binary. You can't use a list, a set, or a map.
Rule 2: You cannot (currently) create a secondary index on a nested attribute (see the Improving Data Access with Secondary Indexes in DynamoDB documentation). That means you cannot index the individual items inside cartItems, since they are not top-level attributes. You may need another table for this.
So, the simple answer to your question is another question: how do you use your data?
If you query for the users with a given item (say itemID3 in your case) infrequently, perhaps a Scan operation with a filter expression will work just fine. To model your data, you may use the user id as the HASH key and cartItems as a string set (SS type) attribute. For the query, you need to provide a filter expression for the Scan operation like this:
contains(cartItems, :expectedItem)
and provide the value itemID3 for the placeholder :expectedItem in the expression attribute values (e.g. the valueMap parameter in the Java SDK).
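A sketch of that Scan with the AWS SDK for JavaScript v2 DocumentClient (the table and key names are hypothetical; note the Scan still reads every item in the table):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Scan the whole cart table, keeping only users whose cartItems set
// contains the requested item.
docClient.scan({
  TableName: 'Carts',                                     // hypothetical
  FilterExpression: 'contains(cartItems, :expectedItem)',
  ExpressionAttributeValues: { ':expectedItem': 'itemID3' },
}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items.map((item) => item.userId)); // 'userId' is the hypothetical HASH key
});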
If you run queries like this frequently, perhaps you can create another table taking the item id as the HASH key, and the set of users having that item as a string set attribute. In this case, the 2nd query in your question turns out to be the 1st query on the other table.
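For illustration, the inverted table for the sample data above might look like this:
HashKey(items) users
itemID1 ['userID1','userID3']
itemID2 ['userID1','userID2','userID3']
itemID3 ['userID1','userID2','userID3']
itemID7 ['userID2']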
Be aware that you then need to maintain the data in both tables on each CRUD action, which may be trivial with DynamoDB Streams.
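A sketch of a DynamoDB Streams handler (Node.js Lambda, stream view type NEW_AND_OLD_IMAGES) that keeps such an inverted table in sync; all table and attribute names here are assumptions:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Triggered by the stream on the carts table. For each changed cart,
// add/remove the user from the per-item user sets in the inverted table.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const oldImage = record.dynamodb.OldImage || {};
    const newImage = record.dynamodb.NewImage || {};
    const userId = (newImage.userId || oldImage.userId).S;
    const oldItems = oldImage.cartItems ? oldImage.cartItems.SS : [];
    const newItems = newImage.cartItems ? newImage.cartItems.SS : [];

    const added = newItems.filter((i) => !oldItems.includes(i));
    const removed = oldItems.filter((i) => !newItems.includes(i));

    for (const itemId of added) {
      await docClient.update({
        TableName: 'ItemUsers', // hypothetical inverted table
        Key: { itemId },
        UpdateExpression: 'ADD #u :user',
        ExpressionAttributeNames: { '#u': 'users' },
        ExpressionAttributeValues: { ':user': docClient.createSet([userId]) },
      }).promise();
    }
    for (const itemId of removed) {
      await docClient.update({
        TableName: 'ItemUsers',
        Key: { itemId },
        UpdateExpression: 'DELETE #u :user',
        ExpressionAttributeNames: { '#u': 'users' },
        ExpressionAttributeValues: { ':user': docClient.createSet([userId]) },
      }).promise();
    }
  }
};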
I have a few objects created in my database and I need to delete some of the repeating attributes related to them.
The query I'm trying to run is:
UPDATE gemp1_product objects REMOVE ingredients[1] WHERE (r_object_id = '08015abd8002cd68')
But all I get is the following error message:
Error querying database.
[DM_QUERY_E_UPDATE_INDEX]error: "UPDATE: Unable to REMOVE the attribute ingredients at index 1."
[DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR]warning: "attempt to delete
non-existent attribute 88"
Object 08015abd8002cd68 exists and I can see it in the database. Queries like SELECT and DELETE work fine, but I do not want to delete the whole object.
There is no easy way to do this. The reason is that repeating attributes are ordered, to enable multiple repeating attributes to be synchronized for a given object.
Either
set the attribute value to be empty for the given position, and change your code to discard empty attributes, or
use multiple DQL statements to shuffle the order so that the last one becomes empty, or
change your data model, e.g. use a single attribute as a property bag with pre-defined delimiters.
Details (1)
UPDATE gemp1_product OBJECTS SET ingredients[1] = '' WHERE ...
Details (2)
For each index, first find the value at index+1:
SELECT ingredients
FROM gemp1_product
WHERE (i_position*-1)-1 = <index+1>
ENABLE (ROW_BASED)
Use the value in a new query:
UPDATE gemp1_product OBJECTS SET ingredients[1] = '<value_from_above>' WHERE ...
It should also be possible to do this by nesting DQL somehow, but it might not be worth the effort.
Something is either wrong with your query or with your repository. I think you are mistyping your attribute name or using the wrong index in your UPDATE query.
If you google for DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR you'll find a bit more detailed explanation:
CAUSE: Program executed a DeleteAttr operation that specified a non-existent attribute position (either a negative number or a number larger than the number of attributes in the object).
From this you could guess that the type isn't in a consistent state, or that you are trying to remove too big an index from your repeating attribute, etc. Did you check your repository with the Consistency Checker job and other similar jobs?
As for removing a repeating property (attribute) value with a DQL query, this is unachievable with a single query, since you need to specify the index position, which you don't know up front. Writing a simple script, or doing it manually if there aren't many values to delete, is the way you want to go.
I'm concerned about read performance; I want to know if setting an indexed field's value to null is faster than giving it a value.
I have lots of items with a status field. The status can be "pending", "invalid", "banned", etc...
My typical request is to find the status "ok" (or null). Since null fields are not saved to the datastore, it is already a win to avoid a "useless" default value that I can replace with null. So I already use less disk space.
But I was wondering: since the datastore is NoSQL, it doesn't know about the data structure, and it doesn't know there is a missing status column. So how does it perform the status = null check in a request?
Does it have to check all columns of each row trying to find my column, or is there some smarter mechanism?
For example, is there an index entry (null = entity key) when we pass a column explicitly saying it is null? (If this is the case, does Objectify respect that and keep the field in the list when passing it to the native API if it's null?)
And mainly, which request is more efficient?
The low-level API (and Objectify) stores and indexes nulls if you specify that a field/property should be indexed. For Objectify, you can specify @IgnoreSave(IfNull.class) or @Unindex(IfNull.class) if you want to alter this behavior. You are probably confusing this with documentation for other data access APIs.
Since GAE only allows you to query for indexed fields, your question is really: Is it better to index nulls and query for them, or to query for everything and filter out non-null values?
This is purely a question of sparsity. If the overwhelming majority of your records contain null values, then you're probably better off querying for everything and filtering out the ones you don't want manually. A handful of extra entity reads are probably cheaper than updating and storing an extra index. On the other hand, if null records are a small percentage of your data, then you will certainly want the index.
This indexing dilemma is not unique to GAE. All databases present this question with respect to low-cardinality fields; it's just that they'll do the table scan (testing and skipping rows) for you.
If you really want to fine-tune this behavior, read Objectify's documentation on Partial Indexes.
null is also treated as a value in the datastore, and there will be entries for null values in indexes. The Datastore docs say: "Datastore distinguishes between an entity that does not possess a property and one that possesses the property with a null value".
Datastore will never check all columns or all records. If you have this property indexed, it will get records from the index only. If not indexed, you cannot query by that property.
In terms of query performance, it should be the same, but you can always profile and check.
I am trying to perform queries using the OR operator as following:
MapReduceResult result = riakClient.
mapReduce("some_bucket", "Name:c1 OR c2").
addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true).
execute();
I only get the 1st object in the query (where name='c1').
If I change the order of the query (i.e. Name:c2 OR c1) again I get only the first object in query (where name='c2').
Is the OR operator (and other query operators) supported in the Java client?
I got this answer from Basho engineer Sean C.:
You either need to group the terms or qualify both of them. Without a field identifier, the search query assumes that the default field is being searched. You can determine how the query will be interpreted by using the 'search-cmd explain' command. Here are two alternate ways to express your query:
Name:c1 OR Name:c2
Name:(c1 OR c2)
Both options worked for me!