Regarding the partitionKey parameter, the docs for Container.CreateItemAsync say:
PartitionKey for the item. If not specified will be populated by extracting from {T}
This is true: when the parameter is left null, the API works and uses the partition key from the provided document model. However, the reverse does not work. If I do not provide a partition key in the model (no property at all, so the field is not even serialized) but do provide the parameter, I get this error:
PartitionKey extracted from document doesn't match the one specified in the header
So I am always required to provide the partition key in the model, and on top of that I can optionally provide it as a parameter.
Then why have the parameter at all?
It may be helpful to go through the source code.
When you call the public abstract Task<ItemResponse> CreateItemAsync() method, the implementation public async Task<ItemResponse> CreateItemAsync() is what actually executes.
Inside that method, you can see it uses the private async Task ExtractPartitionKeyAndProcessItemStreamAsync() method to process the partitionKey:
If the user specifies a partitionKey, it checks whether the value is valid and then decides whether to use it.
If the user does not specify a partitionKey, a different code path executes to automatically extract it from the model.
So in short, the parameter is for someone who wants to specify the partitionKey explicitly (it must still be valid; the source code checks it) rather than have it extracted automatically from the model.
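To make the two call shapes concrete, here is a minimal C# sketch; the Item model, its category partition key, and the container variable are illustrative assumptions, not from the original post.

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Item
{
    public string id { get; set; }
    public string category { get; set; } // assumed partition key path: /category
}

public static class Demo
{
    public static async Task CreateAsync(Container container)
    {
        // Option 1: omit the parameter; the SDK extracts the partition key from the model.
        await container.CreateItemAsync(new Item { id = "1", category = "books" });

        // Option 2: pass the parameter explicitly; it must match the value in the model,
        // otherwise the SDK throws "PartitionKey extracted from document doesn't match
        // the one specified in the header".
        await container.CreateItemAsync(
            new Item { id = "2", category = "books" },
            new PartitionKey("books"));
    }
}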
I read data from a Firebase database into a Kotlin/Android program. The key names in Firebase are different from those of the corresponding Kotlin variables. I test this code with flat JSON files (for good reasons) where I retain the same key names used in Firebase, so I need to translate them there too.
Firebase wants variables annotated with @PropertyName; but Gson, which I use to read the flat files, wants @SerializedName (which Firebase doesn't understand, unfortunately).
Through trial and error I found that this happens to work:
@SerializedName("seq")
var id: Int? = null
    @PropertyName("seq")
    get
    @PropertyName("seq")
    set
Both Firebase and Gson do their thing and my class gets its data. Am I hanging by a thin thread here? Is there a better way to do this?
Thank you!
You can probably solve this by using Kotlin's #JvmField to suppress generation of getters and setters. This should allow you to place the #PropertyName annotation directly on the property. You can then implement a Gson FieldNamingStrategy which checks if a #PropertyName annotation is present on the field and in that case uses its value; otherwise it could return the field name. The FieldNamingStrategy has to be set on a GsonBuilder which you then use to create the Gson instance.
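A minimal Kotlin sketch of that approach (the Item class and the "seq" key are illustrative; it assumes the Firebase Realtime Database and Gson dependencies):

import com.google.firebase.database.PropertyName
import com.google.gson.FieldNamingStrategy
import com.google.gson.GsonBuilder

// @JvmField suppresses getter/setter generation, so @PropertyName sits directly
// on the backing field, where both Firebase and reflection-based code can see it.
class Item {
    @JvmField
    @PropertyName("seq")
    var id: Int? = null
}

// A FieldNamingStrategy that prefers @PropertyName when present, else the field name.
val strategy = FieldNamingStrategy { field ->
    field.getAnnotation(PropertyName::class.java)?.value ?: field.name
}

val gson = GsonBuilder()
    .setFieldNamingStrategy(strategy)
    .create()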
In Firestore, request.resource.data.size() is equivalent to the size of the document in its final form. My question is, how can I get the parameters that are being sent from the client?
Meaning, if the client tries to update the property name, then I want to check that the client has updated name and that the request contains just that one parameter. I would have used hasExact() if it existed, but the problem is that I'm not sure there is an object that specifies the requested parameters.
With the current request.resource.data.size(), I'm not sure how I can do the following operations:
Deny writing the updatedAt property (which is set to the server timestamp on each update) unless at least one additional property is written.
Deny updating a property to a value that is already equivalent to the stored one.
It's difficult to tell from your question exactly what you want to do. It doesn't sound like the size of the update is the only thing you need to be looking at. Without a more concrete example, I am just going to guess what you need.
But you should know that request.resource.data is a Map type object. Check the API documentation for Map to see what you can do with it. That map will contain all the fields of the document that the client is updating. If you want the value of one of those fields, you can say request.resource.data.f, where f is the name of the field. This should help you express your logic.
If you want the value of an existing field of a document, before it's written, use the map resource.data, which works the same way.
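For example, your second point (rejecting a no-op update of name) could be expressed with a comparison between the two maps; a minimal sketch in the security rules language, with illustrative collection and field names:

service cloud.firestore {
  match /databases/{database}/documents {
    match /items/{itemId} {
      // Allow the update only if `name` actually differs from its stored value.
      allow update: if request.resource.data.name != resource.data.name;
    }
  }
}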
We have an aggregate root as follows.
@AggregateRoot
class Document {
    DocumentId id;
}
The problem statement given by the client is "A document can have multiple documents as attachments".
So refactoring the model will lead to
// Design One
@AggregateRoot
class Document {
    DocumentId id;
    // Since Document is an aggregate root, it is referenced by its id only
    Set<DocumentId> attachments;

    attach(Document doc);
    detach(Document doc);
}
But this model alone won't be sufficient, as the client wants to store some meta information about the attachment, like who attached it and when. This leads to the creation of another class.
class Attachment {
    DocumentId mainDocument;
    DocumentId attachedDocument;
    Date attachedOn;
    UserId attachedBy;
    // no operations
}
and we could again refactor the Document model as below
// Design Two
@AggregateRoot
class Document {
    DocumentId id;
    Set<Attachment> attachments;

    attach(Document doc);
    detach(Document doc);
}
The different possibilities of modeling that I could think of are given below.
If I go with design one, then I could model the Attachment class as an aggregate root and use events to create one whenever a document is attached. But it doesn't look like an aggregate root.
If I choose design two, then the Attachment class could be modeled as a value object or an entity.
Or, if I use CQRS, I could go with design one and model Attachment as a query model populated using events.
So, which is the right way to model this scenario? Is there any other way to model it besides what I have mentioned?
You might find in the long term that passing values, rather than entities, makes your code easier to manage. If attach/detach don't care about the entire document, then just pass in the bits they do care about (aka Interface Segregation Principle).
attach(DocumentId);
detach(DocumentId);
this model alone won't be sufficient as the client wants to store some meta information about the attachment, like who attached it and when it was attached.
Yes, that makes a lot of sense.
which is the right way to model this scenario?
Not enough information provided (the polite way of saying "it depends").
Aggregate boundaries are usually discovered by looking at behaviors, rather than at structures or relationships. Is the attachment relationship just an immutable value that you can add/remove, or is it an entity with an internal state that changes over time? If you change an attachment, what other information do you need, and so on.
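If the relationship does turn out to be an immutable value, Design Two might look like the following Java sketch. The record-based styling, the id-only attach/detach signatures, and the stub identifier types are illustrative assumptions, not a prescription:

import java.time.Instant;
import java.util.HashSet;
import java.util.Set;

// Stub identifier types; records supply the value semantics the ids need.
record DocumentId(String value) {}
record UserId(String value) {}

// Attachment as an immutable value object: its identity is its whole state.
record Attachment(DocumentId attachedDocument, Instant attachedOn, UserId attachedBy) {}

class Document {
    private final DocumentId id;
    private final Set<Attachment> attachments = new HashSet<>();

    Document(DocumentId id) {
        this.id = id;
    }

    // Passes only the id, per the Interface Segregation point above.
    void attach(DocumentId other, UserId who, Instant when) {
        attachments.add(new Attachment(other, when, who));
    }

    void detach(DocumentId other) {
        attachments.removeIf(a -> a.attachedDocument().equals(other));
    }
}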
I have a model class which has four member variables like this:
A.java
1. Attribute-1
2. Attribute-2
3. Attribute-3
4. Attribute-4
Now I make entries in DynamoDB; the writes succeed and I can see the entries.
Then I make a few changes to the model and add two more attributes:
5. Attribute-5
6. Attribute-6
In the application I set the values of these two attributes.
Now when I try to insert an entry, I get this error:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
What is the error?
I believe you are using the DynamoDBMapper class in the AWS SDK for Java with the save() method.
1) Firstly, yes, you need to set the key values in the model object when you perform an update on an existing item. save() either creates or updates the item.
2) Secondly, the save method takes a save behavior config. You need to set the behavior according to your use case:
public void save(T object,
                 DynamoDBSaveExpression saveExpression,
                 DynamoDBMapperConfig config)
UPDATE (default): UPDATE will not affect unmodeled attributes on a save operation, and a null value for the modeled attribute will remove it from that item in DynamoDB. Because of the limitation of the updateItem request, the implementation of UPDATE will send a putItem request when a key-only object is being saved, and it will send another updateItem request if the given key(s) already exists in the table.

UPDATE_SKIP_NULL_ATTRIBUTES: Similar to UPDATE, except that it ignores any null value attribute(s) and will NOT remove them from that item in DynamoDB. It also guarantees to send only one single updateItem request, no matter whether the object is key-only or not.

CLOBBER: CLOBBER will clear and replace all attributes, including unmodeled ones, (delete and recreate) on save. Versioned field constraints will also be disregarded. Any options specified in the saveExpression parameter will be overlaid on any constraints due to versioned attributes.
Sample code:
Movies movies = new Movies();
// Set the partition and sort key
movies.setYearKey(1999);
movies.setTitle("MyMovie BOOL");
// New attribute being added
movies.setProducts(Stream.of("prod1").collect(Collectors.toSet()));

DynamoDBMapper dynamoDBMapper = new DynamoDBMapper(dynamoDBClient);
// Set the save behavior so the existing attributes are not impacted (see the definitions above)
dynamoDBMapper.save(movies, SaveBehavior.UPDATE_SKIP_NULL_ATTRIBUTES.config());
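For completeness, here is a minimal sketch of what the Movies model in that sample might look like; the table name and attribute names are illustrative assumptions. The point for the error above is that the mapped key attributes must match the table's key schema exactly:

import java.util.Set;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName = "Movies")
public class Movies {
    private Integer yearKey;
    private String title;
    private Set<String> products;

    // Hash/range keys must match the table's key schema; a mismatch here
    // produces the ValidationException quoted in the question.
    @DynamoDBHashKey(attributeName = "yearKey")
    public Integer getYearKey() { return yearKey; }
    public void setYearKey(Integer yearKey) { this.yearKey = yearKey; }

    @DynamoDBRangeKey(attributeName = "title")
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    // Newly added attribute; existing items simply won't have it until written.
    @DynamoDBAttribute(attributeName = "products")
    public Set<String> getProducts() { return products; }
    public void setProducts(Set<String> products) { this.products = products; }
}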
Is it possible to invoke a method on each object that is being copied from a source to a destination collection using AutoMapper? The destination object has a method called
Decrypt() and I would like it to be called for each CustomerDTO element that is created. The only thing that I can figure out is to perform the mapping conversion and then loop again to invoke the Decrypt() method. I'd appreciate your help with this question.
Thanks,
Mike
IQueryable<CustomerDTO> dtos = AutoMapper.Mapper.Map<IQueryable<CustomerEntity>, IQueryable<CustomerDTO>>(BaseRepository.List);
foreach (var item in dtos)
{
    item.Decrypt(Seed);
}
It depends on whether you are decrypting just a property or the whole object. I wasn't sure based on your question.
If you are just decrypting properties, then I suggest that you look into AutoMapper's Custom Value Resolvers. They allow you to take control when resolving a destination property.
If you need to decrypt the whole object, then I suggest you look into AutoMapper's Custom Type Converters. That gives you complete control over the conversion, though it does sort of take the auto out of AutoMapper.
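As a rough illustration of the whole-object route, here is a sketch against the classic static Mapper API your snippet uses (pre-5.x AutoMapper is an assumption, as are the stub types and the single Name property; Decrypt and Seed come from your question):

using AutoMapper;

// Stub types standing in for the ones in the question.
public class CustomerEntity { public string Name; }
public class CustomerDTO
{
    public string Name;
    public void Decrypt(string seed) { /* assumed from the question */ }
}

// Pre-5.x AutoMapper: ITypeConverter exposes the source via ResolutionContext.
public class CustomerTypeConverter : ITypeConverter<CustomerEntity, CustomerDTO>
{
    private readonly string _seed;

    public CustomerTypeConverter(string seed)
    {
        _seed = seed;
    }

    public CustomerDTO Convert(ResolutionContext context)
    {
        var source = (CustomerEntity)context.SourceValue;
        var dto = new CustomerDTO
        {
            // map the remaining properties by hand (illustrative)
            Name = source.Name
        };
        dto.Decrypt(_seed); // decrypt as part of the conversion itself
        return dto;
    }
}

public static class MappingConfig
{
    // Registration: this is where the "auto" comes out of AutoMapper for this pair.
    public static void Register(string seed)
    {
        Mapper.CreateMap<CustomerEntity, CustomerDTO>()
              .ConvertUsing(new CustomerTypeConverter(seed));
    }
}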