How to create DynamoDB tables in on-demand capacity mode on Amplify?

DynamoDB has two pricing models:
provisioned capacity mode and on-demand capacity mode.
Amplify always creates tables in provisioned capacity mode.
Is there an option to have tables created in on-demand capacity mode by default?
C:\user\samadhan\ampplify_project>amplify add storage
? Select from one of the below mentioned services: NoSQL Database
Welcome to the NoSQL DynamoDB database wizard
This wizard asks you a series of questions to help determine how to set up your NoSQL database table.
√ Provide a friendly name · DynamoDB
√ Provide table name · AuthorazationsTable
You can now add columns to the table.
√ What would you like to name this column · id
√ Choose the data type · string
√ Would you like to add another column? (Y/n) · yes
√ What would you like to name this column · name
√ Choose the data type · string
√ Would you like to add another column? (Y/n) · no
Before you create the database, you must specify how items in your table are uniquely organized. You do this by specifying a primary key. The primary key uniquely identifies each item in the table so that no two items can have the same key. This can be an individual column, or a combination that includes a primary key and a sort key.
To learn more about primary keys, see:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey
√ Choose partition key for the table · id
√ Do you want to add a sort key to your table? (Y/n) · yes
√ Choose sort key for the table · name
You can optionally add global secondary indexes for this table. These are useful when you run queries defined in a different column than the primary key.
To learn more about indexes, see:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.SecondaryIndexes
√ Do you want to add global secondary indexes to your table? (Y/n) · no
√ Do you want to add a Lambda Trigger for your Table? (y/N) · no
✅ Successfully added resource DynamoDB locally
C:\user\samadhan\amplify\backend\storage\DynamoDB\cli-inputs.json
{
  "resourceName": "DynamoDB",
  "tableName": "AuthorazationsTable",
  "partitionKey": {
    "fieldName": "id",
    "fieldType": "string"
  },
  "sortKey": {
    "fieldName": "name",
    "fieldType": "string"
  },
  "gsi": [],
  "triggerFunctions": []
}
C:\user\samadhan\amplify\backend\storage\DynamoDB\build\DynamoDB-cloudformation-template.json
{
  "Description": "DDB Resource for AWS Amplify CLI",
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "DynamoDBTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "KeySchema": [
          {
            "AttributeName": "id",
            "KeyType": "HASH"
          },
          {
            "AttributeName": "name",
            "KeyType": "RANGE"
          }
        ],
        "AttributeDefinitions": [
          {
            "AttributeName": "id",
            "AttributeType": "S"
          },
          {
            "AttributeName": "name",
            "AttributeType": "S"
          }
        ],
        "GlobalSecondaryIndexes": [],
        "ProvisionedThroughput": {
          "ReadCapacityUnits": 5,
          "WriteCapacityUnits": 5
        },
        "StreamSpecification": {
          "StreamViewType": "NEW_IMAGE"
        }
      }
    }
  }
}
C:\user\samadhan\amplify\backend\storage\DynamoDB\build\parameters.json
{
  "tableName": "SefieAuthorizations",
  "partitionKeyName": "id",
  "partitionKeyType": "S",
  "sortKeyName": "name",
  "sortKeyType": "S"
}

It's a bit confusing, but I finally managed to find a way to configure this. You have to use the "override" feature of the Amplify CLI.
Documentation: Customize Amplify-generated DynamoDB tables
Run the command amplify override storage.
When it asks "Which resource would you like to override?", select the table you want to customize.
This generates an override.ts file under the resource folder you selected.
"Do you want to edit override.ts file now? (Y/n)" -> Yes
"Choose your default editor" -> select your editor.
Edit override.ts to look something like this:
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Drop the provisioned throughput settings and switch the table to on-demand billing.
  delete resources.dynamoDBTable.provisionedThroughput;
  resources.dynamoDBTable.billingMode = "PAY_PER_REQUEST";
}
After saving the file, run amplify push.
It'll take a few minutes to complete, but I hope it succeeds.
If you have one GSI, override.ts looks like this:
import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  // Remove provisioned throughput from both the table and the GSI, then switch to on-demand billing.
  delete resources.dynamoDBTable.provisionedThroughput;
  delete resources.dynamoDBTable.globalSecondaryIndexes[0].provisionedThroughput;
  resources.dynamoDBTable.billingMode = "PAY_PER_REQUEST";
}
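If the table has several GSIs, you could strip the throughput settings from each of them in a loop rather than one index at a time. This is only a sketch under the same assumptions as above (the cast is there because the generated template types don't guarantee an array):

import { AmplifyDDBResourceTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyDDBResourceTemplate) {
  const table = resources.dynamoDBTable;
  // Switch the table to on-demand billing and drop its provisioned throughput.
  table.billingMode = "PAY_PER_REQUEST";
  delete table.provisionedThroughput;
  // On-demand tables must not define provisioned throughput on their GSIs either.
  const gsis = (table.globalSecondaryIndexes as any[]) || [];
  for (const gsi of gsis) {
    delete gsi.provisionedThroughput;
  }
}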

Related

How to force AWS Console Item Explorer to show all columns?

When I query one of my DynamoDB "tables" in the AWS web console's Item explorer, the resulting document does not show one of the "columns" that exists in the "record" (or one of the "properties" that exists in the "document", if you prefer the document store terminology).
How do I make it show all the columns?
E.g. the following is the result of querying DynamoDB via the AWS CLI, but in the Item explorer of the AWS console (on the web) the thisThingsMissingInConsole "column" isn't shown (and it isn't available in the Select visible columns preference either):
{
  "Items": [
    {
      "email": {
        "S": "my#e.mail"
      },
      "thisThingsMissingInConsole": {
        "SS": [
          "a",
          "b",
          "c"
        ]
      },
      ...
    },
    ...
  ]
}
In my case, when confirming via the CLI, I had accidentally checked the record in a different environment than the one I was browsing in the web console 🤦‍♂️. So it turns out that the property that isn't shown is actually missing 🤬.
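If you hit the same confusion, one way to rule out an environment mix-up is to read the item programmatically against the exact table and region you are browsing. A minimal sketch using the AWS SDK for JavaScript v3 (the table name, region, and key below are illustrative):

import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

async function listAttributes() {
  // Use the same region (and credentials/profile) that the web console is pointed at.
  const client = new DynamoDBClient({ region: 'us-east-1' });
  const { Item } = await client.send(new GetItemCommand({
    TableName: 'my-table',              // illustrative table name
    Key: { email: { S: 'my#e.mail' } }, // illustrative key
  }));
  // Print every attribute name on the item, including string sets like the one above.
  console.log(Object.keys(Item ?? {}));
}

listAttributes();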

Is there a way to differentiate between query and data conditions

I have a feathersjs service called documentations. When patching a documentation, the user should only be able to set the editing field to his own user._id or to null. Also, the user can only patch documentations of his own company.
I have stored my permissions in a MongoDB database:
...
{
  "action": "patch",
  "subject": "documentation",
  "fields": ["editing", "sections", "title", "published"],
  "conditions": {
    "company": "${user.belongsTo}"
  }
}
...
Is there a way to implement the editing field logic with CASL?
Is there some way to differentiate between query and data conditions?
Frankly speaking, in my opinion this kind of logic is a business-logic concern, not a permissions concern. But you can do something similar with CASL too: create a separate rule for the editing field:
can("update", "documentation", ["editing"], { company: ..., editing: { $in: [null, user.id] } })
So, each time you update editing, just check the permissions.
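A minimal sketch of what that check could look like with @casl/ability v6 (the user object, field names, and the choice to check against the document as it would look after the patch are assumptions taken from the question):

import { AbilityBuilder, createMongoAbility, subject, ForbiddenError } from '@casl/ability';

const user = { _id: 'user-1', belongsTo: 'company-1' };

const { can, build } = new AbilityBuilder(createMongoAbility);
// General patch rule, limited to documentations of the user's own company.
can('patch', 'documentation', ['sections', 'title', 'published'], { company: user.belongsTo });
// Separate rule for the editing field: it may only end up as null or the user's own id.
can('patch', 'documentation', ['editing'], {
  company: user.belongsTo,
  editing: { $in: [null, user._id] },
});
const ability = build();

// In the service hook, check the field against the patched document before applying it.
const patched = { company: 'company-1', editing: user._id };
ForbiddenError.from(ability).throwUnlessCan('patch', subject('documentation', patched), 'editing');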

AppSync BatchResolver AssumeRole Error

I’m trying to use the new DynamoDB BatchResolvers to write to two DynamoDB tables in an AppSync resolver (currently using a Lambda function to do this). However, I’m getting the following permission error when looking at the CloudWatch logs:
“User: arn:aws:sts::111111111111:assumed-role/appsync-datasource-ddb-xxxxxx-TABLE-ONE/APPSYNC_ASSUME_ROLE is not authorized to perform: dynamodb:BatchWriteItem on resource: arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
I’m using TABLE-ONE as my data source in my resolver.
I added "dynamodb:BatchWriteItem" and "dynamodb:BatchGetItem" to TABLE-ONE’s permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-ONE/*",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO",
        "arn:aws:dynamodb:us-east-1:111111111111:table/TABLE-TWO/*"
      ]
    }
  ]
}
I have another resolver that uses the BatchGetItem operation and was getting null values in my response; changing the table’s policy access level fixed the null values.
However, checking the box for BatchWriteItem doesn’t seem to solve the issue, and neither does adding the permissions to the data source table’s policy.
I also tested my resolver with the test feature in AppSync; the evaluated request and response work as intended.
Where else could I set the permissions for a BatchWriteItem operation between two tables? It seems like it's invoking the user's assumed-role instead of the table's role - can I 'force' it to use the table's role?
It is using the role that you have configured for the table in the AppSync console. Note that that particular role should have AppSync as a trusted entity.
Or, if you use the new role tick box when creating the data source in the console, it should take care of this for you.
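For reference, the trust relationship on that data source role has to let AppSync assume it. A sketch of the trust policy document, shown here as a plain object (the role additionally needs the DynamoDB permissions on both tables, as in the policy above):

// Trust policy allowing the AppSync service to assume the data source role.
const appsyncTrustPolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Principal: { Service: 'appsync.amazonaws.com' },
      Action: 'sts:AssumeRole',
    },
  ],
};

console.log(JSON.stringify(appsyncTrustPolicy, null, 2));

If you manage the data source role with Terraform instead, the access policy covering all of your tables can be built like this: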
variable "dynamodb_table" {
description = "Name of DynamoDB table"
type = any
}
# value of var
dynamodb_table = {
"dyn-notification-inbox" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable
}
"dyn-notification-count" = {
type = "AMAZON_DYNAMODB"
table = data.aws_dynamodb_table.dyntable2
}
}
locals {
roles_arns = {
dynamodb = var.dynamodb_table
kms = var.kms_keys
}
}
data "aws_iam_policy_document" "invoke_dynamodb_document" {
statement {
actions = [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:UpdateItem",
"dynamodb:Query"
]
# dynamic dynamodb table
# for dynamodb table v.table.arn and v.table.arn/*
resources = flatten([
for k, v in local.roles_arns.dynamodb : [
v.table.arn,
"${v.table.arn}/*"
]
])
}
}
# make policy
resource "aws_iam_policy" "iam_invoke_dynamodb" {
name = "policy-${var.name}"
policy = data.aws_iam_policy_document.invoke_dynamodb_document.json
}
# attach role
resource "aws_iam_role_policy_attachment" "invoke_dynamodb" {
role = aws_iam_role.iam_appsync_role.name
policy_arn = aws_iam_policy.iam_invoke_dynamodb.arn
}
Result:
resources: [
'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table',
'arn:aws:dynamodb:eu-west-2:xxxxxxxx:table/my-table/*'
]

API Structure for an application

I would like some help building something a la https://pokeapi.co/.
I have a problem when I try to make the following structure:
"forms": [
{
"url": "https://pokeapi.co/api/v2/pokemon-form/1/",
"name": "bulbasaur"
}
],
"stats": [
{
"stat": {
"url": "https://pokeapi.co/api/v2/stat/6/",
"name": "speed"
},
"effort": 0,
"base_stat": 45
},
]
Directus works fine when I have a single relation field such as forms (make a new relation field to forms, get Bulbasaur, done).
I would build the monster and stat tables, and I need to give the relation field stat (in this case, speed) a value of 45.
I tried to fiddle around in Directus with no success.
Hey André – it seems like this is more of a database architecture question. But here is a schema I would use:
monsters
  id
  name
  stats (ALIAS: Many-to-Many interface relationship)
monster_stats (Junction table for the many-to-many)
  id
  monster_id
  stat_id
stats
  id
  name
  effort
  base_stat
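To make the relationships concrete, here is the same schema sketched as TypeScript types (purely illustrative; in Directus the tables themselves are created through the admin app):

// Illustrative types mirroring the suggested schema.
interface Monster {
  id: number;
  name: string;
  stats: MonsterStat[]; // resolved through the junction table
}

interface MonsterStat {
  id: number;
  monster_id: number; // references monsters.id
  stat_id: number;    // references stats.id
}

interface Stat {
  id: number;
  name: string;       // e.g. "speed"
  effort: number;
  base_stat: number;  // e.g. 45
}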

Google Cloud Datastore runQuery returning 412 "no matching index found"

** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API, from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not officially appear to be a way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore outside of App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
  "query": {
    "kinds": [
      {
        "name": "CodeProse.Pogo.Tests.TestPerson"
      }
    ],
    "filter": {
      "compositeFilter": {
        "operator": "and",
        "filters": [
          {
            "propertyFilter": {
              "operator": "equal",
              "property": {
                "name": "DepartmentCode"
              },
              "value": {
                "integerValue": "123"
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "greaterThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 50
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "lessThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 100
              }
            }
          }
        ]
      }
    }
  }
}
with the following response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "FAILED_PRECONDITION",
        "message": "no matching index found.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "no matching index found."
  }
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the xml definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
For gcloud-node I fixed it with these 3 links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and, most importantly, this link on how to write your index.yaml file:
https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml
As explained in the last link, an index is what allows complex queries to run faster, by storing the result set of the query in an index. When you get "no matching index found", it means you tried to run a complex query involving order or filter clauses. To make your query work, you need to create the index in the Google Datastore indexes by manually writing a config file that defines the indexes representing the queries you want to run. Here is how to fix it:
Create an index.yaml file in a folder named, for example, indexes in your app directory, following the directives for the Python conf file: https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml, or get inspiration from the gcloud-node tests in https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
Create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
See https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes
Wait for the indexes to serve in your developer console under Cloud Datastore/Indexes; the interface should display "serving" once the index is built.
Once it is serving, your query should work.
For example for this query:
var q = ds.createQuery('project')
  .filter('tags =', category)
  .order('-date');
index.yaml looks like:
indexes:
- kind: project
  ancestor: no
  properties:
  - name: tags
  - name: date
    direction: desc
Try not ordering the result. After removing the order() call, it worked for me.
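For example, a sketch of a similar query without the ordering, written against the current @google-cloud/datastore client and sorting in application code instead (a single equality filter is served by the built-in single-property indexes, so no composite index entry is required):

import { Datastore } from '@google-cloud/datastore';

async function listProjects(category: string) {
  const ds = new Datastore();
  // Only an equality filter on one property: no composite index needed.
  const query = ds.createQuery('project').filter('tags', '=', category);
  const [projects] = await ds.runQuery(query);
  // Sort by date in application code instead of using .order('-date').
  return projects.sort((a, b) => (a.date < b.date ? 1 : -1));
}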
