DynamoDB: How to specify version attribute through schema

@DynamoDBVersionAttribute
private long version;
We are not using annotations in our project, so for every model we have a schema definition. I am not sure how to specify a version attribute field through the schema definition.
StaticTableSchema.builder(User.class)
    .newItemSupplier(User::new)
    .addAttribute(String.class, a -> a.name("userName")
        .getter(User::getUserName)
        .setter(User::setUserName)
        .tags(secondaryPartitionKey("userName-index"))) // GSI
    .addAttribute(Integer.class, a -> a.name("id")
        .getter(User::getId)
        .setter(User::setId)
        .tags(primaryPartitionKey())) // primary key
    /*.addAttribute(Long.class, a -> a.name("version")
        .getter(User::getVersion)
        .setter(User::setVersion)
        .tags(VersionRecordAttributeTags.attributeTagFor(null))
    )*/
    .build();

Based on the AWS docs for optimistic locking (e.g., C# and Java), this is a feature of the mapping SDK, not a feature of DynamoDB or the table attribute itself. If you look at the CreateTable API, there is no way to add an "automatic" versioning column. Looking at the other DDB table APIs, there's no way to add it after the fact either (as you can with TTL).
To accomplish this yourself, you would have to use Condition Expressions to perform a conditional update, setting an expectation for the pre-update value of the version attribute.
This answer has more information with a code example.
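For illustration, here is a minimal sketch of that manual approach with the SDK v2 low-level client; the table name is assumed, and the key/attribute names are borrowed from the question:
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class VersionedUpdate {
    public static void renameUser(DynamoDbClient ddb, int id, long expectedVersion, String newName) {
        UpdateItemRequest request = UpdateItemRequest.builder()
                .tableName("User") // assumed table name
                .key(Map.of("id", AttributeValue.builder().n(Integer.toString(id)).build()))
                // Reject the write unless the stored version is the one we read.
                .conditionExpression("#v = :expected")
                .updateExpression("SET #n = :name, #v = :next")
                .expressionAttributeNames(Map.of("#v", "version", "#n", "userName"))
                .expressionAttributeValues(Map.of(
                        ":expected", AttributeValue.builder().n(Long.toString(expectedVersion)).build(),
                        ":next", AttributeValue.builder().n(Long.toString(expectedVersion + 1)).build(),
                        ":name", AttributeValue.builder().s(newName).build()))
                .build();
        try {
            ddb.updateItem(request);
        } catch (ConditionalCheckFailedException e) {
            // Another writer updated the item since we read it: re-read and retry.
        }
    }
}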

Use the tag "VersionedRecordExtension.AttributeTags.versionAttribute()".
See an example below:
.addAttribute(Long.class, a -> a.name("version")
    .getter(Foo::getVersion)
    .setter(Foo::setVersion)
    .tags(VersionedRecordExtension.AttributeTags.versionAttribute()))
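Note that the tag only takes effect when the VersionedRecordExtension is active on the enhanced client. It is loaded by default, but wiring it up explicitly looks roughly like this (the client construction here is illustrative):
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.extensions.VersionedRecordExtension;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

// The extension increments the version on every update and adds the
// optimistic-locking condition expression for you.
DynamoDbEnhancedClient enhanced = DynamoDbEnhancedClient.builder()
        .dynamoDbClient(DynamoDbClient.create())
        .extensions(VersionedRecordExtension.builder().build())
        .build();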

Related

Azure Mobile Apps - Overriding the QueryAsync method with custom code in Table Controller

I would like to override the QueryAsync method with some custom code, so I can access an external API, get some data, and then use this data to query the db tables to get the results. This should support all the default sync and paging features provided by the base method. I'm using the latest 5.0 Datasync library. Old versions returned an IQueryable, so this was easy to do, but now it returns an action result. Is there any solution? I could not find any docs.
e.g. I get a set of guids from api. I query the db tables with these guids and get all the matching rows and return back to client.
I know how to override the method, call external APIs, get data, and query the db tables, but this gives me a custom data format, not the one that QueryAsync returns by default with paging support.
The way to accomplish this is to create a new repository, swapping out the various CRUD elements with your own implementation. You can use the EntityTableRepository as an example. You will note that the "list" operation just returns an IQueryable, so you can do what you need to do and then return the updated IQueryable.

OData v4 - ordering outer entity on property in related one-to-many entities

I have an OData model with a couple of one-to-many relationships, say person->addresses and person->driving-licences. I would like to be able to sort the result set based on properties in the address entity and driving licence entity. As there could be more than one address, I would initially select a single item from the addresses set, based on a property called IsPrimary. As there could be more than one driving licence, I would select the 'UK' driving licence. Is this possible?
I was hoping I could do something like:
/people?$expand=addresses($filter=isPrimary eq true),drivinglicences($filter=country eq 'UK')&$orderby=addresses/postcode,drivinglicences/active
Unfortunately I get the following error:
"The query specified in the URI is not valid. The parent value for a property access of a property 'isPrimary' is not a single value. Property access can only be applied to a single value."
Does anyone know if what I'm trying to do is supported by the spec? Or whether it is an issue with my query? Or whether it is an issue with the .NET library.
I'm using:
Microsoft.AspNet.OData - 7.2.3
Many Thanks.
What you see here is by design, or rather not supported by the specification; the error message even highlights the only type of expression supported:
The query specified in the URI is not valid. The parent value for a property access of a property 'isPrimary' is not a single value. Property access can only be applied to a single value.
So the simplest solution is to modify the API to include either a Function bound to the people collection that applies the $filter or $orderby directly, or a Function that returns the data in a new shape, one that only has perhaps a singleton PrimaryAddress property. How you include the driving licence in this result is up to you; it could even be a parameter to the function. Perhaps your people controller has a queryable function with this signature:
[EnableQuery]
public IHttpActionResult WithLicences(string countryCode)
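Because [EnableQuery] keeps the result composable, a consumer could then invoke the bound function with the standard URL conventions and apply $orderby against the reshaped singleton properties. The Default namespace and property names here are illustrative:
GET /people/Default.WithLicences(countryCode='UK')?$orderby=primaryAddress/postcode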
However, that is out of the scope of the OP's question about specific syntax support.
Although it seems like an important feature, we must remember that $select (projection) and $filter are evaluated at different points in time. OData queries follow a similar execution sequence to SQL; however, the filter criteria and $orderby are evaluated separately from each other, and the projection of the result set is the last evaluation to be applied.
Because $filter and $orderby are applied independently, neither concept is even aware of the other, and as such neither can reference or assume to be applied before the other.
You can prove this by specifying a field in the $orderby and/or $filter that is not included in the $select; you can even reference singleton navigation fields that are not included in an $expand, and the query will evaluate correctly.
The OData spec is similar to a law document: to properly understand and apply it, we need to understand the original intent of the authors. We can get an initial understanding from the early listing of Addressing Entities:
Addressing Entities describes functions that can be bound to collections or entities that return either a single entity or a collection of entities
By allowing special provision of custom functions to be applied the authors are encouraging API designers to provide natural extensions to their resource endpoints that can facilitate the execution of pre-determined queries that may be otherwise complex or problematic to express in pure OData query syntax.
In other words, we are encouraged to customise our APIs to make them easier for the end process to consume, and to guide the consuming developer to make the best use of the API; they shouldn't have to discover everything from first principles.
To achieve the OP's type of query in pure SQL would still require either a nested lookup, a CTE, or a self join... advanced syntax. In OData v4, the specification does not provide a syntax for targeting specific items within a collection for path expressions (from which $orderby derives):
5.1.1.15 Path Expressions
Properties and navigation properties of the entity type of the set of resources that are addressed by the request URL can be used as operands or function parameters, as shown in the preceding examples.
Properties of complex properties can be used via the same syntax as in resource paths, i.e. by specifying the name of a complex property, followed by a forward slash (/) and the name of a property of the complex property, and so on,
Properties and navigation properties of entities related with a target cardinality 0..1 or 1 can be used by specifying the navigation property, followed by a forward slash (/) and the name of a property of the related entity, and so on.
If a complex property is null, or no entity is related (in case of target cardinality 0..1), its value, and the values of its components, are treated as null.
RE: I couldn't find anything explicit in the spec. :)
That is the very thing about the OData specification: it does not list what is not supported, only what should be supported. So, by omission, if you cannot find a reference to how to do something, then that something is not required to be supported.
From the Introduction (http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_Introduction): "This specification defines a set of recommended (but not required) rules for constructing URLs to identify the data and metadata exposed by an OData service as well as a set of reserved URL query string operators, which if accepted by an OData service, MUST be implemented as required by this document."
This has been an ongoing discussion held in many threads, most recently https://stackoverflow.com/a/55324393/1690217.
Many people complain that this is surely a fundamental feature of a data access platform; however, it is important to respect the original intent of the OData platform and keep our APIs simple by providing customised endpoints to suit our business domain.

AWS Amplify + AppSync - Is it possible to use the @connection transform to cascade delete related data?

I am developing a web application using AWS Amplify and AppSync to read and write my data to DynamoDB tables. Using Amplify's GraphQL Transforms, it is easy enough to establish a connection between data types using the @connection transform. I wish to know if it is possible to delete related data in a simplified or semi-automated way.
Supposing a simple blog example, where a user has a blog, which has posts, which in turn have comments owned by other users: if a post is deleted, I would like to delete the comments associated with that post. If a user is deleted, I would like to delete their blog(s), the posts and comments related to those posts, and any comment the user has left on other posts. This example is contrived in that it may be desirable to retain some of this data in some form; in some cases, however, this behaviour is exactly what I am looking for.
When working with Prisma in the past, I used their @relation directive to make a relationship similar to using Amplify's @connection.
However, in cases where I wanted cascading deletion, I would write something along the lines of:
type Post {
  id: ID! @unique
  title: String!
  body: String!
  owner: ID!
  comments: [Comment!] @relation(name: "PostComments", onDelete: CASCADE)
}
I could set the onDelete parameter to CASCADE or SET_NULL depending on how I wanted deletion to be handled.
Is there a way to do something similar through Amplify? Of course I can write a bunch of VTL or Lambda resolvers to handle each case, but I wanted to check first if there is a faster / easier way to implement this.
This is not yet supported natively by Amplify. As you said, you are able to replicate this behavior using pipeline resolvers & some VTL and then deploy that via the Amplify CLI or on your own. There are plans to allow you to write your own transformers to encode reproducible logic like this as a resolver (see https://github.com/aws-amplify/amplify-cli/issues/1060) as well as plans to move towards pipeline resolvers for all Amplify CLI projects (see https://github.com/aws-amplify/amplify-cli/issues/1055).
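For illustration, one workaround available today is to expose an explicit cascade mutation wired to a Lambda through the @function directive, with the Lambda querying the post's comments (e.g. via their @connection index) and deleting them along with the post. The mutation and function names below are hypothetical:
# Hypothetical cascade mutation; not a built-in Amplify cascade feature.
type Mutation {
  deletePostAndComments(postId: ID!): ID
    @function(name: "cascadeDeletePost-${env}")
}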

Creating blank dummy Components which contain mandatory Fields with the SDL Tridion 2011 Core Service

I wanted to create a blank Component in SDL Tridion 2011 using the Core Service. The only information I have at the start of the process is the Schema URI. The Schema may contain any kind of field (text, RTF, number, date, embedded, etc.), some of which may be mandatory.
I understand that for the mandatory fields, I will need to save some dummy value in them, and this is acceptable as they will be changed manually later.
How can I achieve this?
First - you make sure all fields are set to optional in the schema; otherwise this will never work.
Second - you save.
When an optional field has no value, it will have no XML representation. If you have a schema that defines a component like this:
Field1
Field2
Field3
When all fields are optional and you save a value in Field2, Tridion will store the following:
<Content xmlns="yourNamespace"><Field2>SomeValue</Field2></Content>
If one of your fields is not optional (i.e., it is mandatory), then you'll have to provide a value. If you're using the Core Service, you can use the ReadSchemaFields class to get the fields and some information about them - what type, mandatory/optional, etc.
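For example, a rough sketch of that lookup (the TCM URI is illustrative, and the member names are based on my recollection of the 2011 Core Service contract, so verify them against your client proxy):
// Read the schema's field definitions and check which ones are mandatory.
var schemaFields = client.ReadSchemaFields("tcm:5-123-8", true, new ReadOptions());
foreach (var field in schemaFields.Fields)
{
    // MinOccurs >= 1 means the field is required and needs a dummy value.
    bool isMandatory = field.MinOccurs >= 1;
}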
We're looking at your question/requirement to understand what exactly you're looking for, so we can give the best possible and most relevant answer.
Are you asking "How can you write generic code for component creation using the Core Service?" instead of creating a component for a specific schema, knowing all the fields upfront?
If that is what you are looking for, here is what you need to do:
You need to read the schema fields with the Core Service (since you know the schema URI)
Now you know what types of fields (embedded/component link, etc.) you need to create content for
Use the links pointed to by "Puf" in his answer.
Please note that if a field is marked as required in the Tridion schema, you must fill in a value, and it has to match the field type defined in the schema.
Sample code for reading schema fields via the Core Service can be found here
Updating a Component's field through the Core Service is already answered here: Updating Components using the Core Service in SDL Tridion 2011
That post points to a helper class you can find here: Updating Components using the Core Service in SDL Tridion 2011
If those don't help you in creating a Component, I suggest you post your code instead of asking us to write it for you.
We ask about the use case because code that fills in specific fields for a specific schema only works in one environment; code that can automatically determine the fields is re-usable.
If the use case is for a Tridion setup that has inline editing (Experience Manager or SiteEdit), then the correct approach is content/component types. These define a reference component with "junk defaults," instructions to the author, and even save-location context.
If the use case is to allow authors the ability to create dummy components, this is out-of-the-box with:
CTRL+C
CTRL+V
One-time setup is required to create a "reference component." Of course, we can mimic this behavior (in case "Copy of Untitled" isn't an appropriate name) by copying items with the Core Service.
In that case, I'd also do a copy; see a general solution for creating Tridion items using the Core Service.
Fields that require a default can have an actual default in the schema.
"Junk values" don't help authors much, always consider good defaults such as an appropriate selection or instructions in the case of fields (maybe). A 10 second change costs development practically nothing, but impacts all future components and the authors that create them.

SQLAlchemy, reflection, different backends and table/column case insensitivity

Intro: I'm writing a web interface with SQLAlchemy reflection that supports multiple databases. It turns out that the authors of the application defined PostgreSQL with lowercase tables/columns, e.g. job.jobstatus, while SQLite has mixed case, e.g. Job.JobStatus. I'm using DeclarativeReflectedBase from the examples to combine reflection and the declarative style.
The issue: configure SQLAlchemy, with reflection, to treat table/column names case-insensitively.
What I have done so far:
I have changed the DeclarativeReflectedBase.prepare() method to pass quote=False into Table.__init__.
What is left to be solved:
relationship definitions still have to obey case when configuring joins, like primaryjoin="Job.JobStatus==Status.JobStatus".
configure __tablename__ based on engine type
The question: Are my assumptions correct, or is there a more straightforward way? Maybe I could tell reflection to reflect everything as lowercase and all the problems would be gone.
You'd probably want to look into defining a .key on each Column that's in lower case; that way you can refer to columns as lower case within application code. You should use the column_reflect event (see http://docs.sqlalchemy.org/en/latest/core/events.html#schema-events) to define this key as a lower-case version of the .name.
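A minimal sketch of that listener, assuming a plain reflection setup:
from sqlalchemy import Table, event

@event.listens_for(Table, "column_reflect")
def set_lowercase_key(inspector, table, column_info):
    # Let application code address every reflected column by a
    # lower-case .key, whatever its case in the database.
    column_info['key'] = column_info['name'].lower()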
Then, when you reflect the table, I'd just do something like this:
from sqlalchemy import MetaData, Table

metadata = MetaData()

def reflect_table(name, engine):
    if engine.dialect.name == 'postgresql':
        name = name.lower()  # the PostgreSQL schema is all lowercase
    return Table(name, metadata, autoload=True, autoload_with=engine)

my_table = reflect_table("MyTable", engine)
I think that might cover it.
