DynamoDB/Amplify non-negative field and field validation on mutations - amazon-dynamodb

I am new to AWS in general; I am building a relatively simple application with Amplify, but I've used Google Firebase before. My question is: is there a way to set a constraint for a field to be non-negative? I have an application that does transactions and I don't want my balance to be negative. I just need a simple error/exception. Is it possible to set a field constraint in DynamoDB that says "this field should be >= 0"?
I also checked whether it was possible to do it in the Amplify-generated VTL resolver of my GraphQL mutation, and it is indeed possible to set some constraints. But somehow it allows the operation and only fails on the next one (when the balance in the DB is already < 0), as if it checks the value before the update. I tried expressing something like "current_balance - transaction >= 0" but I couldn't get it to work.
So it seems that the only way is to create a custom Lambda resolver that does the various checks before submitting the mutation to DynamoDB. I haven't tried it yet, but I don't understand how I can check the current balance (stored in the DB) without doing a query.
More generally, is it even possible to validate fields (even with simple assertions like non-negative) on Amplify/DynamoDB? Would moving to another DB like Aurora help?
Thanks for your help.

DynamoDB supports conditional updates, which apply an update only when the given condition is met. You can set the condition current_balance >= cost on your update.
However, the negative balance is not the main problem. What you should address is how to prevent other requests from updating the same current_balance at the same time; in short, race conditions on current_balance. To deal with that, you also need a conditional update whose condition is "current_balance = initial_balance". The initial_balance is, I guess, what you get from DynamoDB at the very beginning of the purchase process.
Sample VTL code:
#set( $remaining_balance = $initial_balance - $transaction_cost )
#if( $remaining_balance < 0 )
  $util.error("Insufficient balance")
#end
{
  "version" : "2018-05-29",
  "operation" : "UpdateItem",
  "key" : { <your-dynamodb-key> },
  "update" : {
    "expression" : "SET current_balance = :remaining_balance",
    "expressionValues" : {
      ":remaining_balance" : $util.dynamodb.toNumberJson($remaining_balance)
    }
  },
  "condition" : {
    "expression" : "current_balance = :initial_balance",
    "expressionValues" : {
      ":initial_balance" : $util.dynamodb.toNumberJson($initial_balance)
    }
  }
}
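If the check ends up in a custom Lambda resolver instead, the same conditional update can be expressed through the AWS SDK. Below is a minimal sketch with the AWS SDK for JavaScript v3; the table name, key shape and function name are assumptions for illustration, not something from the question:

// Sketch: conditional balance update with the AWS SDK for JavaScript v3.
// Table name, key shape and attribute names are assumed for illustration.
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, UpdateCommand } = require("@aws-sdk/lib-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function debitBalance(accountId, initialBalance, transactionCost) {
  const remaining = initialBalance - transactionCost;
  if (remaining < 0) throw new Error("Insufficient balance");

  try {
    await ddb.send(new UpdateCommand({
      TableName: "Accounts",            // hypothetical table
      Key: { id: accountId },           // hypothetical key
      UpdateExpression: "SET current_balance = :remaining",
      // Reject the write if someone else changed the balance in the meantime.
      ConditionExpression: "current_balance = :initial",
      ExpressionAttributeValues: {
        ":remaining": remaining,
        ":initial": initialBalance,
      },
    }));
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") {
      // The balance changed since it was read: treat it as a retryable conflict.
      throw new Error("Balance changed, retry the transaction");
    }
    throw err;
  }
}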

Related

Querying wordpress with meta queries from Gatsby

I'm trying to fetch data from my WordPress backend using a meta query. I'm using this plugin:
https://www.wpgraphql.com/extenstion-plugins/wpgraphql-meta-query/
I can run my query in the GraphiQL IDE in WordPress, but not in Gatsby's GraphiQL tool.
I get this error:
Unknown argument "where" on field "Query.allWpPage"
Query:
query test {
  allWpPage(
    where: {
      metaQuery: {
        relation: OR,
        metaArray: [
          {
            key: "some_value",
            value: null,
            compare: EQUAL_TO
          },
          {
            key: "some_value",
            value: "536",
            compare: EQUAL_TO
          }
        ]
      }
    }
  ) {
    edges {
      node {
        id
        uri
      }
    }
  }
}
I've tried deleting the cache directory and rebuilding, but it didn't help.
And just to clarify, I have no problem running other queries and getting ACL data and so on. The only problem I have (right now) is exposing the where argument to Gatsby.
The where filter is restricted in Gatsby. There is a detailed list of the available comparators in the Gatsby documentation; they are:
eq (equals)
ne (not equals)
in (includes)
nin (not includes)
lt, lte, gt, gte (less than, less than or equal, greater than, greater than or equal, respectively)
regex, glob (regular expression, glob pattern)
elemMatch (element matches)
On the other hand, there is a list of filters available. In your case, filter is what you are looking for. Your final query should look like:
query test {
  allWpPage(
    filter: { uri: { ne: "" } }
  ) {
    edges {
      node {
        id
        uri
      }
    }
  }
}
Of course, adapt the filter to your needs; elemMatch may also work for you.
You will need to add a condition for each property of the object you're trying to match, as in the sketch below.
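For example, if the page metadata is exposed as an array of key/value objects on WpPage (the metaFields field name here is hypothetical; check your own schema in Gatsby's GraphiQL), an elemMatch-based filter could look like this:

query test {
  allWpPage(
    filter: {
      uri: { ne: "" }
      metaFields: { elemMatch: { key: { eq: "some_value" }, value: { eq: "536" } } }
    }
  ) {
    edges {
      node {
        id
        uri
      }
    }
  }
}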
Why is where restricted?
Because it belonged to Sift, a library Gatsby used to support MongoDB-style queries, where where is available. Since Gatsby 2.23.0 (June 2020) this library is no longer used. More details at History and Sift:
For a long time Gatsby used the Sift library through which you can use
MongoDB queries in JavaScript.
Unfortunately Sift did not align with how Gatsby used it and so a
custom system was written to slowly replace it. This system was called
“fast filters” and as of gatsby#2.23.0 (June 2020) the Sift library is
no longer used.

Queries in Realtime-database (using LimitToLast) are very very slow

I'm using the Realtime Database (Firebase 7.3.2) and Unity.
When I use the LimitToLast() method, the query takes a long time (1.5 to 2 minutes) to return a response.
But when I load all the data, or execute this query without the LimitToLast method, it does not take long.
I want to ask whether anyone else has this problem when developing with the Firebase Realtime Database.
My database contains 1700 rooms.
This is the query:
var result = await FirebaseDatabase.DefaultInstance.GetReference("Rooms")
.OrderByChild("CreationDate").LimitToLast(10).GetValueAsync();
And this is the structure of the Rooms collection in the database:
{
  "Rooms" : {
    "-Lp860kFH8TjdAsPpar1" : {
      "CreationDate" : -14400,
      "Title" : "Room 1",
      ...,
    },
    "-Lp860kFH8TjdAsPpbr2" : {
      "CreationDate" : -14402,
      "Title" : "Room 2",
      ...,
    },
    ...
    "-Lp860kFH8TjdAsPpar3" : {
      "CreationDate" : -14404,
      "Title" : "Room 1700",
      ...,
    }
  }
}
Are you sure you have indexing configured in your Firebase Realtime Database security rules? If it's not, then the query is executed as follows:
1. Download all the data from the "Rooms" branch to the Unity client.
2. Sort the data according to your ordering criteria on the Unity client.
3. Discard all except the last 10 children in this sorted data.
Nobody would want that when they only need the last 10 children. The ordering and the limiting to the last 10 children should happen on the database server itself, which ensures it is fast enough to give you the result in milliseconds. For that, you'll have to index your data and then run your query, as shown below.
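A sketch of the corresponding index rule, assuming the Rooms structure from the question; it goes in your Realtime Database security rules:

{
  "rules": {
    "Rooms": {
      ".indexOn": "CreationDate"
    }
  }
}

With this index in place, the OrderByChild("CreationDate").LimitToLast(10) query is evaluated on the server and only the last 10 rooms are downloaded to the client.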

Firebase Rules: Read restriction for dynamic child nodes

I'm trying to implement a Firebase rules read restriction in a data model that has a few nested dynamic child nodes.
I have the following data model:
/groupMessages/<groupId>/<messageId>/
{
  "senderId": "<senderId>",
  "recipientId": "<recipientId>",
  "body": "..."
}
groupId, messageId, senderId and recipientId are dynamic IDs. I would like to attach a listener to the /groupId node to listen for new messages. At the same time, I only want users to read messages where the senderId or recipientId matches a corresponding auth.token value.
Due to Firebase cascading rules, if I allow the read at the groupId level without restrictions, I can't deny them on the message level.
{
  "rules": {
    "groupMessages": {
      "$groupId": {
        ".read": "auth != null"
      }
    }
  }
}
I also haven't found a way to restrict the read rule on the groupId level to check for sender/recipientId of a message.
Any suggestions greatly appreciated.
As you've found, security rules cannot be used to filter data. But they can be used to restrict what queries can be performed on the data.
For example, you can query for all messages where the current user is the sender with:
var query = ref.child("groupMessages").child(groupId).orderByChild("senderId").equalTo(uid);
And you can secure access to the group's messages to only allow this query with:
{
  "rules": {
    "groupMessages": {
      "$groupId": {
        ".read": "auth.uid != null &&
                  query.orderByChild == 'senderId' &&
                  query.equalTo == auth.uid"
      }
    }
  }
}
The query and the rules now match exactly, so the security rules will allow the query, while they'd reject a broader read operation. For more on this, see query-based rules in the Firebase documentation.
You'll note that this only works for a single field. Firebase Database queries can only filter on a single field. While there are workarounds by combining multiple values into a single property, I don't think those apply to your scenario, since they only work for AND queries, where you seem to want an OR.
You also seem to want to query on /groupMessages instead of on messages for a specific group. That also isn't possible: Firebase Database orders/filters on a property that is at a fixed path under each child of the node where you run the query. You cannot query across two dynamic levels, as you seem to be trying. For more on this see: Firebase Query Double Nested and Firebase query if child of child contains a value.
The common solution for your problem is to create a list of IDs for each user, which contains just the IDs of all messages (and/or the groups) they have access to.
userGroups: {
  uid1: {
    groupId1: true,
    groupId2: true
  },
  uid2: {
    groupId2: true,
    groupId3: true
  }
}
With this additional data structure (which you can much more easily secure), each user can simply read the groups they have access to, and your code then reads/queries the messages in each group. If necessary you can add a similar structure for the messages themselves too.
Finally: this type of recursive loading is not nearly as inefficient as many developers initially think, since Firebase pipelines the requests over an existing connection.
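A rough sketch of that fan-out pattern with the namespaced (v8-style) Firebase web SDK; the paths follow the userGroups and groupMessages structures above, the firebase app is assumed to be initialized, and the rules are assumed to grant group members read access to their groups' messages:

// Read the user's group list, then listen to messages in each group.
const db = firebase.database();
const uid = firebase.auth().currentUser.uid;

db.ref("userGroups/" + uid).once("value").then((groupsSnap) => {
  groupsSnap.forEach((groupSnap) => {
    const groupId = groupSnap.key;
    // Attach a listener per group; the requests are pipelined over one connection.
    db.ref("groupMessages/" + groupId).on("child_added", (msgSnap) => {
      console.log("new message in", groupId, msgSnap.val());
    });
  });
});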

DynamoDB how to Update a Map if an attribute exists, else silently ignore

I have a table called Products, whose key is a composite (hash + range): orgzviceid + productid. It has a map attribute called "checkout" and a quantity-storing attribute called "prod_stk_qty_i_i".
Say that initially, for a product with product ID 34, the total available quantity is 10. As soon as a cart checkout happens, assuming the checkout ID is 5 and it has checked out 2 units of product ID 34, then the product's "checkout" map entry and "prod_stk_qty_i_i" in DynamoDB would look something like this:
"checkout" : { "5" : 2 },
"prod_stk_qty_i_i" : 8
If another checkout happens for the same product (say 1 unit), and that checkout ID is 7, then the checkout looks like this:
"checkout" : { "5" : 2, "7" : 1 },
"prod_stk_qty_i_i" : 7
If payment is made, the checkout entry is removed, and quantity is increased.
Now, my requirement is to periodically (after some timeout, say 30 minutes) release the product quantities which have been checked out but not released. I do this by:
Increasing the quantity by the value of checkout.<checkoutID>
Removing the checkout.<checkoutID> map entry
It is important that this operation not fail even if it is attempted multiple times (idempotent), so it's necessary that it only updates if the checkout.<checkoutID> field exists. If not, it should simply be ignored.
I tried the following:
[
  "UpdateItem",
  [
    {
      "TableName": "Products",
      "Key": {
        "orgzviceid": { "N": "3000161710" },
        "productid": { "N": "11" }
      },
      "UpdateExpression": "REMOVE #checkout.#checkoutID SET #prod_stk_qty_i_i = #prod_stk_qty_i_i + #checkout.#checkoutID",
      "ExpressionAttributeNames": {
        "#checkout": "checkout",
        "#checkoutID": "235",
        "#prod_stk_qty_i_i": "prod_stk_qty_i_i"
      },
      "ConditionExpression": "attribute_exists(#checkout.#checkoutID)",
      "ReturnValues": "ALL_NEW"
    }
  ]
]
However, it gives me an error when no checkout entry is found for checkout ID 235. Note that I've written the ConditionExpression to do the update only if the attribute "checkout.235" exists.
Error Logs:
com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException","message":"The
conditional request failed ..."
So, how do I write a query such that if the map entry exists, it does the above operation, and otherwise does not fail?
Obviously, one bad hack is to first check with a GetItem query whether the checkout entry exists for the provided checkout ID, and only then do this, but that just does not seem right.
I believe you're using conditional expressions incorrectly. The point of the condition is to fail if certain criteria are not met. Why do you have the condition at all? Without it, the update expression would just execute, and if the item does not exist I would not expect you to get an error, just like querying for an item that does not exist: you should simply get an empty result back, not an error.
Your approach will not work because you are mixing "Attribute" and "AttributeValue" together in your conditional expression. Let me explain:
"ConditionExpression": "attribute_exists(#checkout.#checkoutID)"
In your table, checkout is an attribute in DynamoDB, whereas checkoutID is in no way related to the table schema. So for DynamoDB, checkoutID is part of the attribute's value and not an attribute itself.
Therefore, the condition that you have will not work.
A conditional expression for your use case would be one that says the attribute checkout exists and its value is the map you expect. However, in order to do that, you'd need to pass the expected map, which boils down to reading the record before updating.
I do think that reading the record, updating the value and persisting it should be the way to go in this case (and it is not necessarily a bad idea).
Do consider using some kind of optimistic locking in this case to protect against dirty reads and writes.
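A sketch of that read-then-conditionally-write flow with the AWS SDK for JavaScript v3; attribute and table names follow the question, and the condition on the previously read value is what provides the optimistic locking:

// Sketch: release a checkout with a read followed by a conditional update.
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, GetCommand, UpdateCommand } = require("@aws-sdk/lib-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function releaseCheckout(orgzviceid, productid, checkoutId) {
  // 1. Read the current item.
  const { Item } = await ddb.send(new GetCommand({
    TableName: "Products",
    Key: { orgzviceid, productid },
  }));

  const reserved = Item && Item.checkout ? Item.checkout[checkoutId] : undefined;
  if (reserved === undefined) {
    return; // Nothing to release: keeps the operation idempotent.
  }

  // 2. Write back, but only if the entry still holds the value we just read.
  try {
    await ddb.send(new UpdateCommand({
      TableName: "Products",
      Key: { orgzviceid, productid },
      UpdateExpression: "REMOVE checkout.#cid SET prod_stk_qty_i_i = prod_stk_qty_i_i + :qty",
      ConditionExpression: "checkout.#cid = :expected",
      ExpressionAttributeNames: { "#cid": String(checkoutId) },
      ExpressionAttributeValues: { ":qty": reserved, ":expected": reserved },
    }));
  } catch (err) {
    if (err.name === "ConditionalCheckFailedException") {
      return; // Released by someone else between the read and the write: ignore.
    }
    throw err;
  }
}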

Prevent double voting with AngularFire, Firebase

I'm running into trouble saving and retrieving values in Firebase using AngularFire v2 (http://angularfire.com/documentationv2.html ; v2 is necessary because v1 seemed unable to order items by priority).
It's a simple vote-up-down app using Persona login for authorization. The problem comes when the user votes on a 'story': I want to add their user ID to 'story/users', but it seems the only way to do that without erasing all the other values is to use '$add' (push), which throws in a random key name that I have no idea how to query against.
So to prevent double voting I have to somehow ask if 'story/users' already has that user's id in there.
story
  users
    -J9LrtJwXnCI3ZYqsIz1: user-1
    -J9Lriauhfdoaiuhfafd: user-2
Does anybody have any idea how to find out if user-1 is in 'story/users'?
Keep in mind Angularfire v2 has some major changes.
And if anybody has a better idea I'd be happy to hear it (but I'm really trying to avoid saving it as user-1/story-id, for other reasons)
Store the records by the user's ID, rather than a random push id. For example, you could create a votes path, and keep track of who has voted there:
/votes/user-1
/votes/user-2
Then you can write a security rule to prevent double voting as follows:
{
  "rules": {
    "votes": {
      "$record_id": {
        // something like this
        ".write": "auth.id === $record_id && data.val() === true"
      }
    }
  }
}
You can perform the set in AngularFire v2 by using the $set and $child commands:
dataRef.$child( user.id ).$set( true ); // for example
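For completeness, a sketch of the same write with the plain (legacy) Firebase JavaScript SDK of that era; the URL is a placeholder, currentUserId is a hypothetical variable, and the security rules are assumed to reject writes that would amount to a double vote:

// Write the vote under the user's own ID; a disallowed second write
// surfaces as a permission error in the completion callback.
var votesRef = new Firebase("https://<your-firebase>.firebaseio.com/votes");

votesRef.child(currentUserId).set(true, function (error) {
  if (error) {
    console.log("Vote rejected (already voted?):", error);
  } else {
    console.log("Vote recorded");
  }
});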
