I am using version 1.11.185 of the DynamoDB SDK. I have a Client entity mapped to the Clients DynamoDB table through DynamoDBMapper annotations. One of the attributes contains nested values (see the code examples below). I want to add a local secondary index on the zone attribute of the Options class. When I try to save an object, I get a NullPointerException from com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperTableModel, line 415. It looks like a bug in the library.
P.S. If I generate the index over the clientId attribute of the Client entity, everything works fine.
@DynamoDBTable(tableName="Clients")
public class Client {

    private String id;
    private String clientId;
    private Date created;
    private Options options;

    public Client() {
    }

    public Client(String id, String clientId, Options options) {
        this.id = id;
        this.clientId = clientId;
        this.options = options;
        this.created = new Date();
    }

    @DynamoDBHashKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @DynamoDBAttribute
    public String getClientId() { return clientId; }
    public void setClientId(String clientId) { this.clientId = clientId; }

    @DynamoDBAttribute
    public Options getOptions() { return options; }
    public void setOptions(Options options) { this.options = options; }

    @DynamoDBRangeKey
    public Date getCreated() { return created; }
    public void setCreated(Date created) { this.created = created; }
}
@DynamoDBDocument
public class Options {

    private String zone;

    public Options() {
    }

    public Options(String zone) {
        this.zone = zone;
    }

    @DynamoDBIndexRangeKey(localSecondaryIndexName = "zone-index")
    public String getZone() { return zone; }
    public void setZone(String zone) { this.zone = zone; }
}
**************************** EDITED ****************************
Correct answer by @Raniz; see also Indexing on nested field.
It can be done using JSON attributes, though: DynamoDB create index on map or list type.
You can't use nested attributes in the key schema for an index.
I assume you've created the index with options.zone in the schema, which means that DynamoDB is expecting a top-level attribute with that exact name, i.e. an attribute named options.zone, not an attribute named zone nested under the options attribute.
Excerpt from here:
The key schema for the index. Every attribute in the index key schema must be a top-level attribute of type String, Number, or Binary. Other data types, including documents and sets, are not allowed. Other requirements for the key schema depend on the type of index:
For a global secondary index, the partition key can be any scalar attribute of the base table. A sort key is optional, and it too can be any scalar attribute of the base table.
For a local secondary index, the partition key must be the same as the base table's partition key, and the sort key must be a non-key base table attribute.
To use zone in your index schema you'll need to either move or duplicate it so that it's available at the top level. The easiest way of accomplishing this would probably be to add a getter to Client that returns options.zone:
@DynamoDBIndexRangeKey(localSecondaryIndexName = "zone-index")
public String getZone() {
    if (options != null) {
        return options.getZone();
    }
    return null;
}
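Note that DynamoDBMapper generally expects a setter for every mapped attribute when it loads items back. A minimal sketch of a delegating setter, assuming the Options document stays the source of truth:

public void setZone(String zone) {
    // Hypothetical delegating setter so DynamoDBMapper can populate the
    // attribute when reading items; create the Options holder if absent.
    if (options == null) {
        options = new Options();
    }
    options.setZone(zone);
}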
I have the following code that validates an interface's input parameters, and I now want to use hibernate-validator to do this:
public class Order
{
    private String orderNo;
    private String orderId;
    private String status;
    private String startTime;
    private String endTime;

    // getters and setters...
}
public class OrderService
{
    public Object search(Order order) throws Exception
    {
        String message = "";
        if (order.getOrderId().isEmpty() && order.getOrderNo().isEmpty() && order.getStatus().isEmpty())
        {
            if (order.getStartTime().isEmpty() && order.getEndTime().isEmpty())
                message = "xxx";
        }
        if (!message.isEmpty())
            throw new Exception(message);

        Object result = null;
        // splice SQL according to the attributes of order and get the result
        // result = sql query result
        return result;
    }
}
I tried to use Hibernate-validator's groups to achieve this, but if there are more parameters I need to write a lot of groups, which seems clumsy. I have more than 100 interfaces, with more to be added later; would class-level constraints be a good choice?
Below is my attempt at using Hibernate-validator groups:
public class Order
{
    @Empty(groups = One.class)
    @NotEmpty(groups = Two.class)
    private String orderNo;

    @Empty(groups = One.class)
    @NotEmpty(groups = Three.class)
    private String orderId;

    @Empty(groups = One.class)
    @NotEmpty(groups = Four.class)
    private String status;

    @NotEmpty(groups = One.class)
    private String startTime;

    @NotEmpty(groups = One.class)
    private String endTime;
}
public class BeanValidatorUtils
{
    static Validator validator;

    static
    {
        HibernateValidatorConfiguration configuration = Validation.byProvider(HibernateValidator.class).configure();
        ValidatorFactory factory = configuration.failFast(true).buildValidatorFactory();
        validator = factory.getValidator();
    }

    public static <T> void validation(T beanParam) throws AppException
    {
        if (!containsGroup(beanParam, One.class))
            return;
        Set<ConstraintViolation<T>> validate = validator.validate(beanParam, One.class);
        // Guard before iterator().next(), otherwise a valid bean throws NoSuchElementException.
        if (validate.isEmpty())
            return;
        String firstViolationMessage = validate.iterator().next().getMessage();
        if (!validate.isEmpty() && containsGroup(beanParam, Two.class))
        {
            validate = validator.validate(beanParam, Two.class);
        }
        if (!validate.isEmpty() && containsGroup(beanParam, Three.class))
        {
            validate = validator.validate(beanParam, Three.class);
        }
        if (!validate.isEmpty())
            throw new AppException(firstViolationMessage);
    }

    private static boolean containsGroup(Object bean, Class<?> groupClazz)
    {
        // ...
    }
}
Is there any other way to use Hibernate-validator to verify the Order in the search method?
As you are trying to make a validation decision based on the state of multiple properties of the Order you might want to explore these 3 options:
Class-level constraint
This would mean that you have to create your own constraint annotation (let's say @ValidOrder) and a corresponding ValidOrderValidator:
@Target({ METHOD, FIELD, ANNOTATION_TYPE, CONSTRUCTOR, PARAMETER, TYPE_USE })
@Retention(RUNTIME)
@Documented
@Constraint(validatedBy = { ValidOrderValidator.class })
@interface ValidOrder {
    String message() default "{message.key}";
    Class<?>[] groups() default { };
    Class<? extends Payload>[] payload() default { };
}
public class ValidOrderValidator implements ConstraintValidator<ValidOrder, Order> {

    @Override
    public boolean isValid(Order order, ConstraintValidatorContext constraintValidatorContext) {
        // null values are valid
        if (order == null) {
            return true;
        }
        if (order.getOrderId().isEmpty() && order.getOrderNo().isEmpty() && order.getStatus().isEmpty()) {
            if (order.getStartTime().isEmpty() && order.getEndTime().isEmpty()) {
                return false;
            }
        }
        return true;
    }
}
You can also check this post for more detailed info on how to add new constraints using ServiceLoader.
@ScriptAssert constraint
If your validation logic is relatively simple, and you either already have a dependency on a scripting engine or are willing to add one, you can consider using the @ScriptAssert constraint. This is similar to the previous option, but you don't need to create annotations and validator implementations; you just put the script logic into the constraint:
@ScriptAssert(lang = "groovy", script = "your validation script logic")
class Order {
    //...
}
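For example, the script could mirror the original check. A sketch, assuming a Groovy engine is on the classpath (_this is the constraint's default alias for the validated bean):

// Hypothetical script: fail only when all five fields are empty.
@ScriptAssert(lang = "groovy", script =
    "!(_this.orderNo.isEmpty() && _this.orderId.isEmpty() && _this.status.isEmpty()"
    + " && _this.startTime.isEmpty() && _this.endTime.isEmpty())")
class Order {
    //...
}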
@AssertTrue constraint
Last but not least, one of the easiest ways to address such validation is to use the @AssertTrue constraint on a getter with the validation logic inside the Order class:
class Order {
    //...

    @AssertTrue
    public boolean isValidOrder() {
        // your validation logic
    }
}
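Filling in the placeholder with the original rule might look like the following sketch (the Bean Validation spec expects such boolean getters to be named isSomething so they are picked up as properties):

class Order {
    //...

    @AssertTrue
    public boolean isValidOrder() {
        // Hypothetical body mirroring the question's check: invalid only
        // when all five fields are empty.
        return !(orderNo.isEmpty() && orderId.isEmpty() && status.isEmpty()
                && startTime.isEmpty() && endTime.isEmpty());
    }
}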
Using any of these 3 approaches, you'd be able to make a validation decision based on multiple properties of the Order class.
As for validation groups: you can use them if you need to pass the same Order object into several different methods/interfaces where a different set of validation rules must be applied in each of them. Say in one case you create an order and half of the fields can be null, but in another you update it and everything must be present.
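A sketch of that group-based split (Create and Update are hypothetical marker interfaces):

public interface Create { }
public interface Update { }

public class Order {
    // Required only when updating; free to be empty on creation.
    @NotEmpty(groups = Update.class)
    private String orderNo;
    //...
}

// At the call site, validate against the group matching the operation:
Set<ConstraintViolation<Order>> violations = validator.validate(order, Update.class);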
I have a User entity and an EmailRecipient entity with a one-to-one relationship. Email recipients can also be created without a related User entity.
My EmailRecipient entity takes the Name, Surname and EmailAddress from the User entity if one is attached/included. I have set up the properties to return the backing-field value, if it isn't null, before falling back to the User properties, so I can specify an alternative name if I want.
When saving the EmailRecipient, EF looks at the Name, Surname and EmailAddress properties and populates the EmailRecipient table with the values from the User entity instead of using the backing-field values. I would like these fields to remain NULL in the database provided I haven't set a value explicitly, i.e. recipient.Name = "new name";
My Question: How can I make EF populate selected database fields based on the backing-field value instead of the property values? Or is there a completely different approach to solving this problem?
EmailRecipient Entity:
public class EmailRecipient
{
    private string _name;
    public virtual string Name
    {
        get { return User == null ? _name : _name ?? User.Name; }
        set { _name = value; }
    }

    private string _surname;
    public virtual string Surname
    {
        get { return User == null ? _surname : _surname ?? User.Surname; }
        set { _surname = value; }
    }

    private string _emailAddress;
    public virtual string EmailAddress
    {
        get { return User == null ? _emailAddress : _emailAddress ?? User.EmailAddress; }
        set { _emailAddress = value; }
    }

    [Key, ForeignKey(nameof(UserId))]
    public User User { get; protected set; }

    public long? UserId { get; set; }
}
After some further digging, I think I have found the solution. Carefully reading Microsoft's EF Core backing fields documentation, it seems that we can explicitly ask EF to read from and write to the backing field instead of going through the property.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<EmailRecipient>().Property(x => x.Name)
        .UsePropertyAccessMode(PropertyAccessMode.Field);
}
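Presumably the Surname and EmailAddress properties need the same treatment; a sketch extending the configuration above:

// Route the remaining mapped properties through their backing fields
// so EF persists the raw field values rather than the property getters.
modelBuilder.Entity<EmailRecipient>().Property(x => x.Surname)
    .UsePropertyAccessMode(PropertyAccessMode.Field);
modelBuilder.Entity<EmailRecipient>().Property(x => x.EmailAddress)
    .UsePropertyAccessMode(PropertyAccessMode.Field);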
The backing field remains null until a new Name is set, i.e. recipient.Name = "new name";. Because of this, I can use the recipient.Name property to return recipient._name (as stored in the database) or, if that is null, recipient.User.Name.
Say I have a collection of 'Users', and am happy for their ID to be the generated Firestore documentId, something like:
Users Collection
    GENERATED_FIRESTORE_ID1:
        name: "User 1 name"
        ...: etc.
    GENERATED_FIRESTORE_ID2:
        name: "User 2 name"
        ...: etc.
and I am adding them, and retrieving them, with a custom object (I'm using Android at the moment, but the question is more general). I don't want an extra "id" field in the document; I just use the document.getId() method to get the generated Firestore ID.
Is there a correct way to map a POJO so that it has no individual ID field in the document, but has it set for application usage when querying? I am doing it using the @Exclude annotation as follows:
public class User {

    // as a side question, do I need @Exclude on the field or just the getter?
    @Exclude
    String id;

    String name;
    String email;
    //... additional fields as normal

    public User() {
    }

    @Exclude
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    //... etc. etc.
}
and then I create the User object and set its ID as follows:
for (DocumentSnapshot doc : documentSnapshots) {
    User user = doc.toObject(User.class);
    user.setId(doc.getId());
    users.add(user);
}
This works fine, and apologies if this is indeed the way, but I'm new to Firestore (and loving it) and want to make sure I'm doing it right. I just wondered whether there is a way to make this automatic, without @Exclude and without manually setting the ID after doc.toObject(MyCustomObject.class).
There is now an annotation for this. You can simply use:

@DocumentId
String id;

https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/DocumentId.html
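A minimal sketch of the POJO using it (assuming a recent Firestore Android SDK, where the annotated field is filled in automatically during toObject() and is not written back to the document):

public class User {

    // Populated with doc.getId() during doc.toObject(User.class);
    // excluded when the object is written to Firestore.
    @DocumentId
    String id;

    String name;
    String email;

    public User() {
    }
}

With this, the manual user.setId(doc.getId()) step is no longer needed.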
I have a JSON document resulting from a mix of system data and user entries, like this:
{
    "Properties": [{
        "Type": "A",
        "Name": "aaa",
        "lorem ipsum": 7.1
    }, {
        "Type": "B",
        "Name": "bbb",
        "sit amet": "XYZ"
    }, {
        "Type": "C",
        "Name": "ccc",
        "abcd": false
    }]
}
I need to load it, process it, and save it to MongoDB. I deserialize it to this class:
public class EntityProperty {
    public string Name { get; set; }

    [JsonExtensionData]
    public IDictionary<string, JToken> OtherProperties { get; set; }

    public string Type { get; set; }
}
The problem is that MongoDB does not allow dots in key names, but the users can do whatever they want.
So I need a way to save this additional JSON data, but I also need to change the key names as they are being processed.
I tried adding [JsonConverter(typeof(CustomValuesConverter))] to the OtherProperties attribute, but it seems to be ignored.
Update/Clarification: since the serialization is done by Mongo (I send the objects to the library), I need the extension data names to be fixed during deserialization.
Update
Since the fixing of names must be done during deserialization, you could generalize the LowerCasePropertyNameJsonReader from How to change all keys to lowercase when parsing JSON to a JToken by Brian Rogers to perform the necessary transformation.
First, define the following:
public class PropertyNameMappingJsonReader : JsonTextReader
{
    readonly Func<string, string> nameMapper;

    public PropertyNameMappingJsonReader(TextReader textReader, Func<string, string> nameMapper)
        : base(textReader)
    {
        if (nameMapper == null)
            throw new ArgumentNullException();
        this.nameMapper = nameMapper;
    }

    public override object Value
    {
        get
        {
            // Remap property names on the fly; all other tokens pass through.
            if (TokenType == JsonToken.PropertyName)
                return nameMapper((string)base.Value);
            return base.Value;
        }
    }
}
public static class JsonExtensions
{
    public static T DeserializeObject<T>(string json, Func<string, string> nameMapper, JsonSerializerSettings settings = null)
    {
        using (var textReader = new StringReader(json))
        using (var jsonReader = new PropertyNameMappingJsonReader(textReader, nameMapper))
        {
            return JsonSerializer.CreateDefault(settings).Deserialize<T>(jsonReader);
        }
    }
}
Then deserialize as follows:
var root = JsonExtensions.DeserializeObject<RootObject>(json, (s) => s.Replace(".", ""));
Or, if you are deserializing from a Stream via a StreamReader, you can construct your PropertyNameMappingJsonReader directly from it.
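For example, a sketch of the stream-based variant (stream and RootObject stand in for your own source and type):

// Hypothetical stream-based use of the reader defined above.
using (var streamReader = new StreamReader(stream))
using (var jsonReader = new PropertyNameMappingJsonReader(streamReader, s => s.Replace(".", "")))
{
    var root = JsonSerializer.CreateDefault().Deserialize<RootObject>(jsonReader);
}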
Sample fiddle.
Alternatively, you could also fix the extension data in an [OnDeserialized] callback, but I think this solution is neater because it avoids adding logic to the objects themselves.
Original Answer
Assuming you are using Json.NET 10.0.1 or later, you can create your own custom NamingStrategy, override NamingStrategy.GetExtensionDataName(), and implement the necessary fix.
First, define MongoExtensionDataSettingsNamingStrategy as follows:
public class MongoExtensionDataSettingsNamingStrategy : DefaultNamingStrategy
{
    public MongoExtensionDataSettingsNamingStrategy()
        : base()
    {
        this.ProcessExtensionDataNames = true;
    }

    protected string FixName(string name)
    {
        return name.Replace(".", "");
    }

    public override string GetExtensionDataName(string name)
    {
        if (!ProcessExtensionDataNames)
        {
            return name;
        }
        return FixName(name);
    }
}
Then serialize your root object as follows:
var settings = new JsonSerializerSettings
{
    ContractResolver = new DefaultContractResolver { NamingStrategy = new MongoExtensionDataSettingsNamingStrategy() },
};
var outputJson = JsonConvert.SerializeObject(root, settings);
Notes:
Here I am inheriting from DefaultNamingStrategy but you could inherit from CamelCaseNamingStrategy if you prefer.
The naming strategy is only invoked to remap extension data names (and dictionary keys) during serialization, not deserialization.
You may want to cache the contract resolver for best performance; a sketch of one way to do that follows these notes.
There is no built-in attribute to specify a converter for dictionary keys, as noted in this question. And in any event Json.NET would not use the JsonConverter applied to OtherProperties since the presence of the JsonExtensionData attribute supersedes the converter property.
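A possible caching pattern, as a sketch (MongoJson is a hypothetical helper class; statics shown for brevity):

public static class MongoJson
{
    // Reuse one settings/resolver instance so Json.NET's per-resolver
    // contract cache is shared across serializations.
    static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        ContractResolver = new DefaultContractResolver
        {
            NamingStrategy = new MongoExtensionDataSettingsNamingStrategy()
        },
    };

    public static string Serialize(object root) => JsonConvert.SerializeObject(root, Settings);
}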
Alternatively, if it would be more convenient to specify the naming strategy using Json.NET serialization attributes, you will need a slightly different naming strategy. First create:
public class MongoExtensionDataAttributeNamingStrategy : MongoExtensionDataSettingsNamingStrategy
{
    public MongoExtensionDataAttributeNamingStrategy()
        : base()
    {
        this.ProcessDictionaryKeys = true;
    }

    public override string GetDictionaryKey(string key)
    {
        if (!ProcessDictionaryKeys)
        {
            return key;
        }
        return FixName(key);
    }
}
And modify EntityProperty as follows:
[JsonObject(NamingStrategyType = typeof(MongoExtensionDataAttributeNamingStrategy))]
public class EntityProperty
{
    public string Name { get; set; }

    [JsonExtensionData]
    public IDictionary<string, JToken> OtherProperties { get; set; }

    public string Type { get; set; }
}
The reason for the inconsistency is that, as of Json.NET 10.0.3, DefaultContractResolver uses GetDictionaryKey() when remapping extension data names using a naming strategy that is set via attributes here, but uses GetExtensionDataName() when the naming strategy is set via settings here. I have no explanation for the inconsistency; it feels like a bug.
By default convention, string properties in an entity model that are not explicitly given a max length are created as nvarchar(max) in the database. We wish to override this convention and give strings a max length of 100 (nvarchar(100)) if it is not already explicitly set otherwise.
I discovered the PropertyMaxLengthConvention built-in convention, which by its description and documentation would seem to be what I am looking for. However, it either doesn't work or I'm using it wrong or it just doesn't do what I think it does.
I've tried simply adding the convention:
modelBuilder.Conventions.Add(new PropertyMaxLengthConvention(100));
Then I thought maybe the default one is already being used, so I tried removing it first:
modelBuilder.Conventions.Remove<PropertyMaxLengthConvention>();
modelBuilder.Conventions.Add(new PropertyMaxLengthConvention(100));
I even tried explicitly adding the convention before and after the default one:
modelBuilder.Conventions.AddBefore<PropertyMaxLengthConvention>(new PropertyMaxLengthConvention(100));
modelBuilder.Conventions.AddAfter<PropertyMaxLengthConvention>(new PropertyMaxLengthConvention(100));
No joy. When I add migrations, the columns are still created as nvarchar(max).
Is there a way to use that convention to do what I want? If not, can I write a custom convention that will default string properties to nvarchar(100) but will still allow me to explicitly set them to a different value including maxlength?
After tracking down the source code for the aforementioned convention, I discovered that it only sets the default max length for properties that are specified to have fixed length. (Bizarre!)
So I took the source code and modified it to create my own convention. Now string properties with an unspecified max length get a default max length instead of nvarchar(max). The only downside is that there doesn't appear to be a way to detect when the IsMaxLength() configuration is explicitly applied, so if I do want a column created as nvarchar(max) I can't use IsMaxLength() to get it.
To address this, I created an extension method for StringPropertyConfiguration called ForceMaxLength() that configures the property with HasMaxLength(int.MaxValue), ordinarily an invalid value, but one I can easily test for in my custom convention. When I detect it, I simply set MaxLength back to null, set IsMaxLength to true, and let the property configuration continue as normal.
Here's the custom convention:
using System;
using System.Collections.Generic;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;

namespace MyProject.CustomConventions
{
    public class CustomPropertyMaxLengthConvention : IConceptualModelConvention<EntityType>, IConceptualModelConvention<ComplexType>
    {
        private const int DefaultLength = 128;
        private readonly int length;

        public CustomPropertyMaxLengthConvention()
            : this(DefaultLength)
        {
        }

        public CustomPropertyMaxLengthConvention(int length)
        {
            if (length <= 0)
            {
                throw new ArgumentOutOfRangeException("length", "Invalid Max Length Size");
            }
            this.length = length;
        }

        public virtual void Apply(EntityType item, DbModel model)
        {
            SetLength(item.DeclaredProperties);
        }

        public virtual void Apply(ComplexType item, DbModel model)
        {
            SetLength(item.Properties);
        }

        private void SetLength(IEnumerable<EdmProperty> properties)
        {
            foreach (EdmProperty current in properties)
            {
                if (current.IsPrimitiveType)
                {
                    if (current.PrimitiveType == PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.String))
                    {
                        SetStringDefaults(current);
                    }
                    if (current.PrimitiveType == PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.Binary))
                    {
                        SetBinaryDefaults(current);
                    }
                }
            }
        }

        private void SetStringDefaults(EdmProperty property)
        {
            if (property.IsUnicode == null)
            {
                property.IsUnicode = true;
            }
            SetBinaryDefaults(property);
        }

        private void SetBinaryDefaults(EdmProperty property)
        {
            // int.MaxValue is the sentinel set by ForceMaxLength(): reset it
            // so the column is created as nvarchar(max).
            if (property.MaxLength == int.MaxValue)
            {
                property.MaxLength = null;
                property.IsMaxLength = true;
            }
            else if (property.MaxLength == null || !property.IsMaxLength)
            {
                property.MaxLength = length;
            }
        }
    }
}
Here's the extension method:
using System.Data.Entity.ModelConfiguration.Configuration;

namespace MyProject.Model.Mapping
{
    public static class MappingExtensions
    {
        public static void ForceMaxLength(this StringPropertyConfiguration obj)
        {
            obj.HasMaxLength(int.MaxValue);
        }
    }
}
Here's how it's used:
using System.Data.Entity.ModelConfiguration;

namespace MyProject.Model.Mapping
{
    public class MyEntityMap : EntityTypeConfiguration<MyEntity>
    {
        public MyEntityMap()
        {
            Property(v => v.StringValue).ForceMaxLength();
        }
    }
}
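For the custom convention to take effect it presumably also has to be registered in OnModelCreating, replacing the built-in one; a sketch:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Swap the built-in convention for the custom one with a 100-character default.
    modelBuilder.Conventions.Remove<PropertyMaxLengthConvention>();
    modelBuilder.Conventions.Add(new CustomPropertyMaxLengthConvention(100));
}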
Or just
public class StringConventions : Convention
{
    public StringConventions()
    {
        this.Properties<string>().Configure(x => x.HasMaxLength(100));
    }
}
In EF6 you can use a custom code first convention, but you will also need a way to specify the nvarchar(max) data type for a string property. So I came up with the following solution.
/// <summary>
/// Set this attribute on a string property to get the nvarchar(max) type for the table column.
/// </summary>
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
public sealed class TextAttribute : Attribute
{
}

/// <summary>
/// Changes all string properties without a System.ComponentModel.DataAnnotations.StringLength or
/// Text attribute to use string length 16 (i.e. nvarchar(16) instead of the default nvarchar(max)).
/// Apply TextAttribute to a property to keep the nvarchar(max) data type.
/// </summary>
public class StringLength16Convention : Convention
{
    public StringLength16Convention()
    {
        Properties<string>()
            .Where(p => !p.GetCustomAttributes(false).OfType<StringLengthAttribute>().Any())
            .Configure(p => p.HasMaxLength(16));
        Properties()
            .Where(p => p.GetCustomAttributes(false).OfType<TextAttribute>().Any())
            .Configure(p => p.IsMaxLength());
    }
}
public class CoreContext : DbContext, ICoreContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Change the default string length behavior.
        modelBuilder.Conventions.Add(new StringLength16Convention());
    }
}
public class LogMessage
{
    [Key]
    public Guid Id { get; set; }

    [StringLength(25)] // Explicit data length. Resulting data type is nvarchar(25)
    public string Computer { get; set; }

    //[StringLength(25)] // Implicit data length. Resulting data type is nvarchar(16)
    public string AgencyName { get; set; }

    [Text] // Explicit max data length. Resulting data type is nvarchar(max)
    public string Message { get; set; }
}