Does an entry to DynamoDB depend on the model class of your application? - amazon-dynamodb

I have a model class which has four member variables like this:
A.java
1. Attribute-1
2. Attribute-2
3. Attribute-3
4. Attribute-4
Now I make entries to DynamoDB; the writes succeed and I can see the items in the table.
Then I make a few changes to the model and add two more attributes:
5. Attribute-5
6. Attribute-6
In the application I set values for these two attributes as well.
Now when I try to insert an entry, I get this error:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
What is causing this error?

I believe you are using the DynamoDBMapper class from the AWS SDK for Java with the save() method.
1) Firstly, yes, you need to set the key values in the model object when you update an existing item; save() either creates or updates the item.
2) Secondly, the save method takes a save behavior configuration. You need to set the behavior according to your use case:
public void save(T object,
                 DynamoDBSaveExpression saveExpression,
                 DynamoDBMapperConfig config)
UPDATE (default): UPDATE will not affect unmodeled attributes on a save operation, and a null value for a modeled attribute will remove it from that item in DynamoDB. Because of the limitations of the updateItem request, the implementation of UPDATE will send a putItem request when a key-only object is being saved, and it will send another updateItem request if the given key(s) already exist in the table.
UPDATE_SKIP_NULL_ATTRIBUTES: Similar to UPDATE, except that it ignores any null-valued attributes and will NOT remove them from that item in DynamoDB. It also guarantees to send only a single updateItem request, whether or not the object is key-only.
CLOBBER: CLOBBER will clear and replace all attributes, including unmodeled ones, (delete and recreate) on save. Versioned field constraints will also be disregarded. Any options specified in the saveExpression parameter will be overlaid on any constraints due to versioned attributes.
Sample code:
Movies movies = new Movies();
// Set the partition and sort key; save() needs them to locate the existing item
movies.setYearKey(1999);
movies.setTitle("MyMovie BOOL");
// Newly added attribute
movies.setProducts(Stream.of("prod1").collect(Collectors.toSet()));
DynamoDBMapper dynamoDBMapper = new DynamoDBMapper(dynamoDBClient);
// Set the save behavior so the update doesn't touch existing attributes; see the full definitions above
dynamoDBMapper.save(movies, SaveBehavior.UPDATE_SKIP_NULL_ATTRIBUTES.config());
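The "key element does not match the schema" error typically means the key attributes in the request don't line up with the table's key schema, so it is also worth checking the model's key annotations. A minimal sketch of a matching model (table, attribute, and property names are assumptions):

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import java.util.Set;

@DynamoDBTable(tableName = "Movies")
public class Movies {
    private int yearKey;
    private String title;
    private Set<String> products;

    // Partition key: must match the table's hash key attribute exactly
    @DynamoDBHashKey(attributeName = "yearKey")
    public int getYearKey() { return yearKey; }
    public void setYearKey(int yearKey) { this.yearKey = yearKey; }

    // Sort key: must match the table's range key attribute exactly
    @DynamoDBRangeKey(attributeName = "title")
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    // Plain attributes can be added or removed freely; the table schema
    // only fixes the key attributes
    @DynamoDBAttribute(attributeName = "products")
    public Set<String> getProducts() { return products; }
    public void setProducts(Set<String> products) { this.products = products; }
}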


Why does Container.CreateItemAsync have a partitionKey parameter?

With regards to the partitionKey parameter, the docs for Container.CreateItemAsync say:
PartitionKey for the item. If not specified will be populated by extracting from {T}
It is true that when left as null the API works and uses the partition key in the provided document model. However, the reverse does not work: if I do not provide a partition key in the model (no property at all, so the field is not even serialized) but do provide the parameter, I get this error:
PartitionKey extracted from document doesn't match the one specified in the header
So there is never a case where I am not required to provide the partition key in the model, but I can optionally provide it as a parameter.
Then why have the parameter at all?
It may be helpful to go through the source code.
When you call the public abstract Task<ItemResponse<T>> CreateItemAsync<T>() method, the concrete implementation's public async Task<ItemResponse<T>> CreateItemAsync<T>() is what actually executes.
Inside that method, you can see it uses a private helper, ExtractPartitionKeyAndProcessItemStreamAsync, to process the partitionKey like below:
If the caller specifies a partitionKey, the code checks whether the value is valid and then decides whether to use it.
If the caller does not specify a partitionKey, the code automatically extracts it from the model.
So, in short, the parameter is for callers who want to specify the partition key explicitly (it must still be valid; the source code checks it) rather than have it extracted from the model automatically.
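A minimal sketch of the two call styles (the container instance, item type, and partition key path are assumptions):

// Hypothetical item type; assume the container was created with
// partition key path "/category".
public class Order
{
    public string id { get; set; }
    public string category { get; set; }
}

var order = new Order { id = "1", category = "books" };

// Option 1: omit the parameter and let the SDK extract the partition
// key from the document.
await container.CreateItemAsync(order);

// Option 2: pass the key explicitly; it must match the value in the
// document, otherwise the request fails with the header-mismatch error
// quoted above.
await container.CreateItemAsync(order, new PartitionKey(order.category));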

VueFire: Resolving references after setting on the client

According to the docs, "[...] VueFire will automatically bind up to one nested references.". That works well if I retrieve an object (map) from the database with a property that is a ref: the ref gets resolved automatically on the client (ref_property will not hold the path to the object (e.g. users/123) but the actual data ({username: 'john', hometown: 'autumn'})).
The question is: how do I update a ref property (e.g. a last_edit_by_ref) on the client in a way that (a) VueFire is able to resolve it to valid JSON for the UI and (b) it is still stored as a ref in the database?
I tried to fetch the referenced object (again) from the collection, as explained here ("To write a reference to a document, you pass the actual reference object"). The issue with this, however, is that VueFire does not resolve it, leading to empty values in the UI:
post.last_edit_by_ref = db.collection('users').doc('123')
Background: if I set plain JSON, the property is no longer stored as a reference in the database. This is bad, since the linked object is likely to change (and the linking object would then hold copied, outdated data).
This is not related to VueFire; it is how Firestore parses the object it receives in the set/update methods.
If you focus on this part:
const data = {
    age: 18,
    name: "John",
    carRef: db.collection('cars').doc('john-car'),
}
await db.collection('users').doc('john').set(data);
you will have the ref in Firestore, and in turn VueFire will automatically bind the object.
For your case, I think you need to get db.collection('users').doc(last_edit_user_id) and write that as the ref on the post, roughly as sketched below.
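A minimal sketch, assuming the post lives in a posts collection and postId/lastEditUserId are available on the client:

// Build a real DocumentReference so Firestore stores the field as a ref,
// which VueFire can then resolve again on the next binding.
const userRef = db.collection('users').doc(lastEditUserId)
await db.collection('posts').doc(postId).update({
    last_edit_by_ref: userRef
})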

Symfony Workflow - Is it possible to use a Doctrine relation as the state holder?

I am trying to adopt the Symfony workflow component in my app.
As the documentation says, marking_store points to a string. From googling around, it can be a string or a json_array field on a Doctrine entity.
But what if I have an entity BlogPost with a relation BlogPostStatus that has two fields: a primary id and a statusName? Can I configure the workflow component to change the status of my BlogPost (i.e. set a new BlogPostStatus on the BlogPost entity) and persist it to the database?
Right now I have only one solution: add a non-mapped field to my BlogPost entity and, whenever it changes, update the entity's status.
Do you have a better solution?
For all built-in marking_store implementations the following is true:
If the functions setMarking or getMarking exist on the object holding the state, they will be used to set or get the marking, respectively.
There are 3 built-in marking stores: the SingleStateMarkingStore (using the property accessor, hence setMarking/getMarking), the MultiStateMarkingStore (same), and the MethodMarkingStore (explicitly calling those functions; you can change the function via the property setting of your marking_store config).
The difference lies in the argument provided to the setMarking call. For single state (this is the state_machine type, and by default NOT the workflow type), the argument is the place (or state) where the mark is placed. For multi state (the workflow type by default), the argument is an array where the keys are places and the values are marks; usually the marks are 1, and empty places are omitted.
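For illustration, with hypothetical places, the two argument shapes passed to setMarking look like this:

// state_machine (single state): the place name itself
$post->setMarking('review');
// workflow (multi state): places as keys, marks as values
$post->setMarking(['draft' => 1, 'review' => 1]);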
So, I'll assume that your BlogPost (currently) has only a single state at any given time, and what you have to do now is transform the given marking into the status entity. I will assume your workflow has type state_machine:
/** in class BlogPost */
public function setMarking(string $marking/*, array $context*/)
{
    $this->status->statusName = $marking;
}

public function getMarking()
{
    return $this->status->statusName;
}
Special cases:
If the BlogPostStatus should be a different object (for example, a constant object), then you'd have to use the new interface that dbrumann linked and hook into the event to add that to the context.
If the BlogPostStatus may not exist at the time of the setMarking/getMarking call, you have to create it on the fly in the setter and check for it in the getter, as sketched below. But I'm sure you're capable of doing that ;o)
Also, if you're not using single state workflows but multi state instead, you have to find a way to transform the array of (places -> marks) into your status object and vice versa.
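A minimal sketch of the create-on-the-fly case (assuming BlogPostStatus has a public statusName and a default constructor):

/** in class BlogPost */
public function setMarking(string $marking/*, array $context*/)
{
    // Create the related status entity if it doesn't exist yet
    if (null === $this->status) {
        $this->status = new BlogPostStatus();
    }
    $this->status->statusName = $marking;
}

public function getMarking()
{
    // Guard against a missing status relation
    return null !== $this->status ? $this->status->statusName : null;
}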

How do you use optimistic concurrency with a WebAPI OData controller?

I've got a WebAPI OData controller which uses Delta<Job> to do partial updates of my entity.
In my Entity Framework model I've got a Version field. This is a rowversion in the SQL Server database and is mapped to a byte array in Entity Framework, with its concurrency mode set to Fixed (it's using database first).
I'm using Fiddler to send back a partial update with a stale value for the Version field. I load the current record from my context and then patch my changed fields over the top, which changes the values in the Version column without throwing an error; when I then save changes on my context, everything is saved without error. Obviously this is expected, since the entity being saved was never detached from the context. So how can I implement optimistic concurrency with a Delta?
I'm using the very latest versions of everything (or was just before Christmas), so Entity Framework 6.0.1 and OData 5.6.0.
public IHttpActionResult Put([FromODataUri]int key, [FromBody]Delta<Job> delta)
{
    using (var tran = new TransactionScope())
    {
        Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);
        delta.Patch(j);
        this._context.SaveChanges();
        tran.Complete();
        return Ok(j);
    }
}
Thanks
I've just come across this too using Entity Framework 6 and Web API 2 OData controllers.
For the concurrency check on the subsequent update, the EF DbContext seems to use the original value of the timestamp obtained when the entity was loaded at the start of the PUT/PATCH method.
Updating the current value of the timestamp to a value different from that in the database before saving changes does not result in a concurrency error.
I've found you can "fix" this behaviour by forcing the original value of the timestamp to be the current value in the context.
For example, you can do this by overriding SaveChanges on the context, e.g.:
public partial class DataContext
{
    public override int SaveChanges()
    {
        foreach (DbEntityEntry<Job> entry in ChangeTracker.Entries<Job>()
            .Where(u => u.State == EntityState.Modified))
        {
            // Overwrite the tracked original timestamp with the value the
            // client sent, so EF checks it against the database value
            entry.Property("Timestamp").OriginalValue = entry.Property("Timestamp").CurrentValue;
        }
        return base.SaveChanges();
    }
}
(Assuming the concurrency column is named "Timestamp" and the concurrency mode for this column is set to "Fixed" in the EDMX)
A further improvement would be to write a custom interface, apply it to all your models that need this fix, and replace "Job" with the interface in the code above, roughly as sketched below.
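A minimal sketch of that interface approach (the interface name is an assumption; "Timestamp" is assumed to be the concurrency column on every implementing model):

// Hypothetical marker interface for models with a fixed-concurrency Timestamp column
public interface IHasTimestamp { }

public partial class DataContext
{
    public override int SaveChanges()
    {
        foreach (var entry in ChangeTracker.Entries<IHasTimestamp>()
            .Where(e => e.State == EntityState.Modified))
        {
            entry.Property("Timestamp").OriginalValue = entry.Property("Timestamp").CurrentValue;
        }
        return base.SaveChanges();
    }
}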
Feedback from Rowan on the Entity Framework team (4th August 2015):
This is by design. In some cases it is perfectly valid to update a concurrency token, in which case we need the current value to hold the value it should be set to and the original value to contain the value we should check against. For example, you could configure Person.LastName as a concurrency token. This is one of the downsides of the "query and update" pattern being used in this action. The logic you added to set the correct original value is the right approach to use in this scenario.
When you're posting the data to the server, you need to send the RowVersion field as well. If you're testing with Fiddler, get the latest RowVersion value from your database and add it to your request body.
It should look something like:
RowVersion: "AAAAAAAAB9E="
If it's a web page, then while loading the data from the server, also fetch the RowVersion field, keep it in a hidden field, and send it back to the server along with the other changes.
Basically, when you call the PATCH method, the RowVersion field needs to be in your patch object.
Then update your code like this:
Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);

// Concurrency check: compare the stored row version against the one the client sent
if (!j.RowVersion.SequenceEqual(delta.GetEntity().RowVersion))
{
    return Conflict();
}

delta.Patch(j);
this._context.Entry(j).State = EntityState.Modified; // Probably you need this line as well?
this._context.SaveChanges();
Simple, the way you always do it with Entity Framework: you add a Timestamp field and set that field's Concurrency Mode to Fixed. That makes sure EF knows this timestamp field is not part of any queries but is used to determine versioning.
See also http://blogs.msdn.com/b/alexj/archive/2009/05/20/tip-19-how-to-use-optimistic-concurrency-in-the-entity-framework.aspx

Handling client-side domain object state in a presentation model

I'm currently building the client side of a Flex/PHP project using the Presentation Model pattern.
What I'm trying to achieve:
I currently have a view displaying non-editable information about a domain object called Node. Depending on whether the Node is editable and the user has the right privileges, an additional view becomes available where it's possible to make changes to this object. Any changes made are only committed to the server once the user chooses to "Save Changes". If changes are made to a NodeA and the user navigates away to a different NodeB without saving them, NodeA is reverted to its original state.
Design:
I have a PM for the info view holding a reference to the current Node. The PM for the edit view extends this info PM, adding methods to make changes to the wrapped Node object. Both PMs have the same Node reference injected into them, and all fields in the info/edit views are bound to the Node via their PMs.
The problem:
When the user makes changes to NodeA but doesn't commit them, I can't seem to think of an elegant way to revert to the original state. What I've thought of so far is to hold separate value copies on the edit PM, either by cloning a new Node reference or by keeping an identical set of Node properties. Of the two, the former seems like the better idea because the Node already houses domain logic, but I wonder whether creating clones of unique domain objects is bad practice, even in a limited scope.
I handle similar cases by storing the original data in an XML property of the Value Object ("VO"), and resetting all of the other property values from it when needed.
So, when it is first needed to be viewed, I go get the XML:
<Node>
    <prop1>value</prop1>
    <prop2>value</prop2>
    <prop3>value</prop3>
    <prop4>value</prop4>
</Node>
When I retrieve the XML, in my result handler, the first thing I do is create an instance of my VO, and set the XML property, and then call a public function in a separate class to set the VO's properties:
private function getNodeResultHandler(event:ResultEvent):void
{
    var myNode:Node = new Node();
    myNode.xmlData = new XML(event.result);
    nodeUtils.setNodeProperties(myNode);
}

public class nodeUtils
{
    // Static so it can be called as nodeUtils.setNodeProperties(...)
    public static function setNodeProperties(node:Node):void
    {
        var nodeXmlData:XML = node.xmlData;
        node.prop1 = nodeXmlData.prop1;
        node.prop2 = nodeXmlData.prop2;
        node.prop3 = nodeXmlData.prop3;
        node.prop4 = nodeXmlData.prop4;
    }
}
Then, any time you switch your view to edit mode, you call that same function to reset the properties to the values stored in the XML.
The only other thing you need to do is reset that XML any time the user commits changes to the VO. I usually handle this by passing back the VO's data in the same format on a Save and Get, and then saving the XML just as above.
I usually do this in a Cairngorm MVC application, so I have event/command chains to handle all of this, but you can put this functionality in any number of classes, or in the VO class itself, whichever is easiest for you to maintain.
Each view should have its own instance of your Presentation Model class. Just keep it in memory if the user has not saved it when moving to another view. Cloning accomplishes basically the same thing through a more convoluted process.
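If you do go the cloning route from the question, a minimal sketch (assuming Node exposes the four properties directly):

/** in class Node */
public function clone():Node
{
    // Copy the current state so the edit PM can revert to it later
    var copy:Node = new Node();
    copy.prop1 = prop1;
    copy.prop2 = prop2;
    copy.prop3 = prop3;
    copy.prop4 = prop4;
    return copy;
}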
