Iterate ModelBindingContext.ValueProvider - asp.net

I have more than one property I need to grab, all starting with the same prefix, but ModelBindingContext.ValueProvider only lets me get an exact value by key. Is there a way to grab multiple value providers or iterate the System.Web.Mvc.DictionaryValueProvider<object>?
var value = bindingContext.ValueProvider.GetValue(propertyDescriptor.Name);
The reason for doing this is a dynamic property called Settings, which should bind to the JSON properties below. Right now there is no property called "Enable" on Settings, so it doesn't bind normally.
public class Integration
{
public dynamic Settings {get;set;}
}
"Integrations[0].Settings.Enable": "true"
"Integrations[0].Settings.Name": "Will"

Got it
public class DynamicPropertyBinder : PropertyBinderAttribute
{
public override bool BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor)
{
if (propertyDescriptor.PropertyType == typeof(Object))
{
foreach(var valueProvider in bindingContext.ValueProvider as System.Collections.IList)
{
var dictionary = valueProvider as DictionaryValueProvider<object>;
if (dictionary != null)
{
var keys = dictionary.GetKeysFromPrefix($"{bindingContext.ModelName}.{propertyDescriptor.Name}");
if (keys.Any())
{
var expando = new ExpandoObject();
foreach (var key in keys)
{
var keyValue = dictionary.GetValue(key.Value);
if (keyValue != null)
{
AddProperty(expando, key.Key, keyValue.RawValue);
}
}
propertyDescriptor.SetValue(bindingContext.Model, expando);
return true;
}
}
}
}
return false;
}
public static void AddProperty(ExpandoObject expando, string propertyName, object propertyValue)
{
var expandoDict = expando as IDictionary<string, object>;
if (expandoDict.ContainsKey(propertyName))
expandoDict[propertyName] = propertyValue;
else
expandoDict.Add(propertyName, propertyValue);
}
}
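For context, PropertyBinderAttribute is not part of MVC; it is a custom base attribute that a model binder consults per property. A minimal sketch of what the attribute and its application might look like (the exact wiring into a custom DefaultModelBinder is an assumption here):
[AttributeUsage(AttributeTargets.Property)]
public abstract class PropertyBinderAttribute : Attribute
{
    // Return true if the property was fully handled and default binding should be skipped.
    public abstract bool BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor);
}

public class Integration
{
    [DynamicPropertyBinder]
    public dynamic Settings { get; set; }
}
A custom DefaultModelBinder override would then look for this attribute on each property and call BindProperty before falling back to the default behaviour.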

This is an old question, but I will post the solution that I've found.
You can get all submitted keys from the request object, and then iterate over them to get the actual values:
var keys = controllerContext.RequestContext.HttpContext.Request.Form.AllKeys.ToList();
foreach (var key in keys)
{
var value = bindingContext.ValueProvider.GetValue(key).AttemptedValue;
}
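To pick up only the keys that share a prefix (the Settings.* values from the question), the same idea can be narrowed with a filter; a small sketch, where the prefix string and the expando handling are assumptions:
var prefix = $"{bindingContext.ModelName}.Settings.";
var settingsKeys = controllerContext.HttpContext.Request.Form.AllKeys
    .Where(k => k.StartsWith(prefix, StringComparison.OrdinalIgnoreCase));
foreach (var key in settingsKeys)
{
    var name = key.Substring(prefix.Length); // e.g. "Enable", "Name"
    var value = bindingContext.ValueProvider.GetValue(key).AttemptedValue;
    // add (name, value) to the expando backing the dynamic Settings property here
}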

Related

Validate Modified Model Using Annotations in EntityFramework and ASP.NET

I have this class as part of my EF model:
class Person {
public int Id { get; set; }
[MaxLength(100, ErrorMessage="Name cannot be more than 100 characters")]
public string Name { get; set; }
}
And I have this method in my controller:
public IActionResult ChangeName(int id, string name) {
var person = db.Persons.Find(id);
if(person == null) return NotFound();
person.Name = name;
db.SaveChanges();
return Json(new {result = "Saved Successfully"});
}
Is there any way to validate person after changing the Name property using the MaxLength annotation rather than manually checking for it? Because sometimes I might have more than one validation and I don't want to examine each one of them. Also, I might change these parameters in the future (e.g. make the max length 200), and that would mean I have to change it everywhere else.
So is it possible?
Your method only works as long as there is at most one validation error per property. Also, it's quite elaborate. You can use db.GetValidationErrors() to get the same result. One difference is that the errors are collected in a collection per property name:
var errors = db.GetValidationErrors()
.SelectMany(devr => devr.ValidationErrors)
.GroupBy(ve => ve.PropertyName)
.ToDictionary(ve => ve.Key, ve => ve.Select(v => v.ErrorMessage));
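Wired into the action from the question, that could look roughly like this (a sketch, assuming EF6's DbContext.GetValidationErrors() and the BadRequest helper available on the controller):
person.Name = name;
var errors = db.GetValidationErrors()
    .SelectMany(devr => devr.ValidationErrors)
    .GroupBy(ve => ve.PropertyName)
    .ToDictionary(g => g.Key, g => g.Select(v => v.ErrorMessage));
if (errors.Any())
    return BadRequest(errors);
db.SaveChanges();
return Json(new { result = "Saved Successfully" });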
Okay, I found a solution to my problem: I created a method that takes the model and checks for errors:
private IDictionary<string, string> ValidateModel(Person model)
{
var errors = new Dictionary<string, string>();
foreach (var property in model.GetType().GetProperties())
{
foreach (var attribute in property.GetCustomAttributes())
{
var validationAttribute = attribute as ValidationAttribute;
if(validationAttribute == null) continue;
var value = property.GetValue(model);
if (!validationAttribute.IsValid(value))
{
errors.Add(property.Name, validationAttribute.ErrorMessage);
}
}
}
return errors;
}
UPDATE:
As noted by @Gert Arnold, the method above records only one validation error per property. Below is the fixed version, which returns a list of errors for each property:
public static IDictionary<string, IList<string>> ValidateModel(Person model)
{
var errors = new Dictionary<string, IList<string>>();
foreach (var property in model.GetType().GetProperties())
{
foreach (var attribute in property.GetCustomAttributes())
{
var validationAttribute = attribute as ValidationAttribute;
if (validationAttribute == null) continue;
var value = property.GetValue(model);
if (validationAttribute.IsValid(value)) continue;
if (!errors.ContainsKey(property.Name))
errors[property.Name] = new List<string>();
errors[property.Name].Add(validationAttribute.ErrorMessage);
}
}
return errors;
}

ASP.NET RC2 - ModelState doesn't validate elements of collection

Let's say that I have a simple model with a Required attribute on a property.
public class User
{
[Required]
public string Name { get; set; }
public string Surname { get; set; }
}
When I POST/PUT only one instance of User and Name is empty, it works pretty well: ModelState is not valid and contains the error.
When I POST/PUT a collection of User objects and Name is empty in some of them, ModelState is valid and does not contain any validation errors.
Could you tell me what is wrong with it and why it affects only collections? I noticed the same behaviour when I have one object with a one-to-many relation: the collection within this object is also not validated by ModelState.
I don't want to validate required fields manually; it should work automatically.
You need to create an ActionFilter:
public class ModelStateValidActionFilter : IAsyncActionFilter
{
public Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
{
// Validate ICollection
if (context.ActionArguments.Count == 1 && context.ActionArguments.First().Value.GetType().IsListType())
{
foreach (var arg in (IList)context.ActionArguments.First().Value )
{
var parameters = arg.GetType().GetProperties();
foreach (var parameter in parameters)
{
var argument = context.ActionArguments.GetOrDefault(parameter.Name);
EvaluateValidationAttributes(parameter, argument, context.ModelState);
}
}
}
if (context.ModelState.IsValid)
{
return next();
}
context.Result = new BadRequestObjectResult(context.ModelState);
return Task.CompletedTask;
}
private void EvaluateValidationAttributes(PropertyInfo parameter, object argument, ModelStateDictionary modelState)
{
var validationAttributes = parameter.CustomAttributes;
foreach (var attributeData in validationAttributes)
{
var attributeInstance = parameter.GetCustomAttribute(attributeData.AttributeType);
var validationAttribute = attributeInstance as ValidationAttribute;
if (validationAttribute != null)
{
var isValid = validationAttribute.IsValid(argument);
if (!isValid)
{
modelState.AddModelError(parameter.Name, validationAttribute.FormatErrorMessage(parameter.Name));
}
}
}
}
}
and add it to your MVC options:
services.AddMvc()
.AddMvcOptions(opts =>
{
opts.Filters.Add(new ModelStateValidActionFilter());
});
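Note that IsListType() and GetOrDefault() used in the filter are not framework methods; the answer assumes helper extensions along these lines (a sketch, not the author's exact code):
public static class FilterHelperExtensions
{
    // True for arrays, List<T> and anything else implementing the non-generic IList.
    public static bool IsListType(this Type type)
    {
        return typeof(IList).IsAssignableFrom(type);
    }

    // Returns the action argument for the given key, or null when it is missing.
    public static object GetOrDefault(this IDictionary<string, object> arguments, string key)
    {
        object value;
        return arguments.TryGetValue(key, out value) ? value : null;
    }
}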

Event Up-Conversion With Keeping Event-Class Name

NEventStore 3.2.0.0
As far as I found out, NEventStore requires that old event types be kept around for event up-conversion.
To keep them deserializing correctly in the future, they must have a unique name. It is suggested to name them like EventEVENT_VERSION.
Is there any way to avoid EventV1, EventV2,..., EventVN cluttering up your domain model and simply keep using Event?
What are your strategies?
In a question from long, long ago, an answer was missing...
In the discussion referred to in the comments, I came up with an - I would say - elegant solution:
Don't save the type name but a (versioned) identifier
The identifier is set by an attribute at class level, e.g.
namespace CurrentEvents
{
[Versioned("EventSomethingHappened", 0)] // still version 0
public class EventSomethingHappened
{
...
}
}
This identifier should get serialized in/beside the payload. In serialized form
"Some.Name.Space.EventSomethingHappened" -> "EventSomethingHappened|0"
When another version of this event is required, the current version is copied into a "legacy" assembly (or just into another namespace) and its type name is changed to "EventSomethingHappenedV0" - but the Versioned attribute remains untouched (in this copy):
namespace LegacyEvents
{
[Versioned("EventSomethingHappened", 0)] // still version 0
public class EventSomethingHappenedV0
{
...
}
}
In the new version (at the same place, under the same name) just the version-part of the attribute gets incremented. And that's it!
namespace CurrentEvents
{
[Versioned("EventSomethingHappened", 1)] // new version 1
public class EventSomethingHappened
{
...
}
}
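With the legacy copy and the current type in place, an up-converter can translate stored old events into the new shape on load. A rough sketch, assuming NEventStore 3.x's IUpconvertEvents<TSource, TTarget> conversion interface (check the interface name against your NEventStore version) and hypothetical event members:
public class EventSomethingHappenedUpConverter
    : IUpconvertEvents<LegacyEvents.EventSomethingHappenedV0, CurrentEvents.EventSomethingHappened>
{
    public CurrentEvents.EventSomethingHappened Convert(LegacyEvents.EventSomethingHappenedV0 source)
    {
        // copy/transform the old payload into the new event shape here
        return new CurrentEvents.EventSomethingHappened();
    }
}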
Json.NET supports binders which map type identifiers to types and back. Here is a production-ready binder:
public class VersionedSerializationBinder : DefaultSerializationBinder
{
private Dictionary<string, Type> _getImplementationLookup = new Dictionary<string, Type>();
private static Type[] _versionedEvents = null;
protected static Type[] VersionedEvents
{
get
{
if (_versionedEvents == null)
_versionedEvents = AppDomain.CurrentDomain.GetAssemblies()
.Where(x => x.IsDynamic == false)
.SelectMany(x => x.GetExportedTypes()
.Where(y => y.IsAbstract == false &&
y.IsInterface == false))
.Where(x => x.GetCustomAttributes(typeof(VersionedAttribute), false).Any())
.ToArray();
return _versionedEvents;
}
}
public VersionedSerializationBinder()
{
}
private VersionedAttribute GetVersionInformation(Type type)
{
var attr = type.GetCustomAttributes(typeof(VersionedAttribute), false).Cast<VersionedAttribute>().FirstOrDefault();
return attr;
}
public override void BindToName(Type serializedType, out string assemblyName, out string typeName)
{
var versionInfo = GetVersionInformation(serializedType);
if (versionInfo != null)
{
// the lookup also validates that exactly one type carries this identifier/revision
var impl = GetImplementation(versionInfo);
typeName = versionInfo.Identifier + "|" + versionInfo.Revision;
}
else
{
base.BindToName(serializedType, out assemblyName, out typeName);
}
assemblyName = null;
}
private VersionedAttribute GetVersionInformation(string serializedInfo)
{
var strs = serializedInfo.Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries);
if (strs.Length != 2)
return null;
return new VersionedAttribute(strs[0], strs[1]);
}
public override Type BindToType(string assemblyName, string typeName)
{
if (typeName.Contains('|'))
{
var type = GetImplementation(GetVersionInformation(typeName));
if (type == null)
throw new InvalidOperationException(string.Format("VersionedEventSerializationBinder: No implementation found for type identifier '{0}'", typeName));
return type;
}
else
{
var versionInfo = GetVersionInformation(typeName + "|0");
if (versionInfo != null)
{
var type = GetImplementation(versionInfo);
if (type != null)
return type;
// else: continue as it is a normal serialized object...
}
}
// resolve assembly name if not in serialized info
if (string.IsNullOrEmpty(assemblyName))
{
Type type;
if (typeName.TryFindType(out type))
{
assemblyName = type.Assembly.GetName().Name;
}
}
return base.BindToType(assemblyName, typeName);
}
private Type GetImplementation(VersionedAttribute attribute)
{
Type eventType = null;
if (_getImplementationLookup.TryGetValue(attribute.Identifier + "|" + attribute.Revision, out eventType) == false)
{
var events = VersionedEvents
.Where(x =>
{
return x.GetCustomAttributes(typeof(VersionedAttribute), false)
.Cast<VersionedAttribute>()
.Where(y =>
y.Revision == attribute.Revision &&
y.Identifier == attribute.Identifier)
.Any();
})
.ToArray();
if (events.Length == 0)
{
eventType = null;
}
else if (events.Length == 1)
{
eventType = events[0];
}
else
{
throw new InvalidOperationException(
string.Format("VersionedEventSerializationBinder: Multiple types have the same VersionedEvent attribute '{0}|{1}':\n{2}",
attribute.Identifier,
attribute.Revision,
string.Join(", ", events.Select(x => x.FullName))));
}
_getImplementationLookup[attribute.Identifier + "|" + attribute.Revision] = eventType;
}
return eventType;
}
}
...and the Versioned-attribute
[AttributeUsage(AttributeTargets.Class)]
public class VersionedAttribute : Attribute
{
public string Revision { get; set; }
public string Identifier { get; set; }
public VersionedAttribute(string identifier, string revision = "0")
{
this.Identifier = identifier;
this.Revision = revision;
}
public VersionedAttribute(string identifier, long revision)
{
this.Identifier = identifier;
this.Revision = revision.ToString();
}
}
Finally, use the versioned binder like this:
JsonSerializer.Create(new JsonSerializerSettings
{
TypeNameHandling = TypeNameHandling.All,
TypeNameAssemblyFormat = FormatterAssemblyStyle.Simple,
Binder = new VersionedSerializationBinder()
});
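A quick round-trip sketch of what the binder produces (the serializer setup mirrors the snippet above; the event instance is hypothetical):
var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.All,
    TypeNameAssemblyFormat = FormatterAssemblyStyle.Simple,
    Binder = new VersionedSerializationBinder()
};
var json = JsonConvert.SerializeObject(new CurrentEvents.EventSomethingHappened(), settings);
// the embedded "$type" now reads "EventSomethingHappened|1" instead of the CLR type name,
// so renaming or moving the class later does not break deserialization
var evt = JsonConvert.DeserializeObject<object>(json, settings);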
For a full Json.NET ISerialize implementation, see this (a little outdated) gist:
https://gist.github.com/warappa/6388270

How to map lists with ValueInjector

I am using ASP.NET MVC 3.
Can someone please help me clarify what's happening here:
var person = new PersonRepository().Get();
var personViewModel = new PersonViewModel();
personViewModel.InjectFrom<LoopValueInjection>(person)
.InjectFrom<CountryToLookup>(person);
I have a grid on my Index view. Each row is an instance of a CategoryViewModel. So what I do is get a list of all the categories, map each Category to a CategoryViewModel, and then pass this list of CategoryViewModels to the view. How would I do a mapping like that?
IEnumerable<Category> categoryList = categoryService.GetAll();
I thought the following would work but it doesn't:
// Mapping
IList<CategoryViewModel> viewModelList = new List<CategoryViewModel>();
viewModelList.InjectFrom(categoryList);
AFAIK ValueInjecter doesn't support automatic collection mapping like AutoMapper does, but you can use a simple LINQ expression and operate on each element:
IEnumerable<Category> categoryList = categoryService.GetAll();
IList<CategoryViewModel> viewModelList = categoryList
.Select(x => new CategoryViewModel().InjectFrom(x)).Cast<CategoryViewModel>()
.ToList();
You can also inject one collection into another with a small extension method. For example:
// source list
IEnumerable<string> items = new string[] { "1", "2" };
// target list
List<int> converted = new List<int>();
// inject all
converted.InjectFrom(items);
And the extension method:
public static ICollection<TTo> InjectFrom<TFrom, TTo>(this ICollection<TTo> to, IEnumerable<TFrom> from) where TTo : new()
{
foreach (var source in from)
{
var target = new TTo();
target.InjectFrom(source);
to.Add(target);
}
return to;
}
ICollection<T> is the least-featured interface that still has an Add method.
Update
An example using more proper models:
var persons = new PersonRepository().GetAll();
var personViewModels = new List<PersonViewModel>();
personViewModels.InjectFrom(persons);
Update - Inject from different sources
public static ICollection<TTo> InjectFrom<TFrom, TTo>(this ICollection<TTo> to, params IEnumerable<TFrom>[] sources) where TTo : new()
{
foreach (var from in sources)
{
foreach (var source in from)
{
var target = new TTo();
target.InjectFrom(source);
to.Add(target);
}
}
return to;
}
Usage:
var activeUsers = new PersonRepository().GetActive();
var lockedUsers = new PersonRepository().GetLocked();
var personViewModels = new List<PersonViewModel>();
personViewModels.InjectFrom(activeUsers, lockedUsers);
Use this function definition
public static object InjectCompleteFrom(this object target, object source)
{
if (target.GetType().IsGenericType &&
target.GetType().GetGenericTypeDefinition() != null &&
target.GetType().GetGenericTypeDefinition().GetInterfaces() != null &&
target.GetType().GetGenericTypeDefinition().GetInterfaces()
.Contains(typeof(IEnumerable)) &&
source.GetType().IsGenericType &&
source.GetType().GetGenericTypeDefinition() != null &&
source.GetType().GetGenericTypeDefinition().GetInterfaces() != null &&
source.GetType().GetGenericTypeDefinition().GetInterfaces()
.Contains(typeof(IEnumerable)))
{
var t = target.GetType().GetGenericArguments()[0];
var tlist = typeof(List<>).MakeGenericType(t);
var addMethod = tlist.GetMethod("Add");
foreach (var sourceItem in source as IEnumerable)
{
var e = Activator.CreateInstance(t).InjectFrom<CloneInjection>(sourceItem);
addMethod.Invoke(target, new[] { e });
}
return target;
}
else
{
return target.InjectFrom(source);
}
}
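Usage with the category example from the question might look like this (note that CloneInjection is a custom injection from the ValueInjecter samples and must exist in your project):
IEnumerable<Category> categoryList = categoryService.GetAll();
var viewModelList = new List<CategoryViewModel>();
viewModelList.InjectCompleteFrom(categoryList.ToList());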
For those like me who prefer the shortest notation possible:
public static ICollection<TTarget> InjectFromList<TTarget, TOrig>(this ICollection<TTarget> target, ICollection<TOrig> source) where TTarget : new()
{
source.Select(r => new TTarget().InjectFrom(r))
.Cast<TTarget>().ToList().ForEach(e => target.Add(e));
return target;
}
public static ICollection<TTarget> InjectFromList<TTarget, TOrig>(this ICollection<TTarget> target, params ICollection<TOrig>[] sources) where TTarget : new()
{
sources.ToList().ForEach(s => s.ToList().Select(r => new TTarget().InjectFrom(r))
.Cast<TTarget>().ToList().ForEach(e => target.Add(e)));
return target;
}
Create a generic list mapper:
public class ValueMapper
{
public static TResult Map<TResult>(object item) where TResult : class
{
return item == null ? null : Mapper.Map<TResult>(item);
}
public static IEnumerable<TResult> MapList<TResult>(IEnumerable<object> items) where TResult : class
{
return items?.Select(i => Mapper.Map<TResult>(i));
}
}
Now you can reference the ValueMapper class wherever you want, and call both Map and MapList
var mydtos = ValueMapper.MapList<MyDto>(dtos);
var mydto = ValueMapper.Map<MyDto>(dto);

How to Optimize this method

private static void ConvertToUpper(object entity, Hashtable visited)
{
if (entity != null && !visited.ContainsKey(entity))
{
visited.Add(entity, entity);
foreach (PropertyInfo propertyInfo in entity.GetType().GetProperties())
{
if (!propertyInfo.CanRead || !propertyInfo.CanWrite)
continue;
object propertyValue = propertyInfo.GetValue(entity, null);
Type propertyType;
if ((propertyType = propertyInfo.PropertyType) == typeof(string))
{
if (propertyValue != null && !propertyInfo.Name.Contains("password"))
{
propertyInfo.SetValue(entity, ((string)propertyValue).ToUpper(), null);
}
continue;
}
if (!propertyType.IsValueType)
{
IEnumerable enumerable;
if ((enumerable = propertyValue as IEnumerable) != null)
{
foreach (object value in enumerable)
{
ConvertToUpper(value, visited);
}
}
else
{
ConvertToUpper(propertyValue, visited);
}
}
}
}
}
Right now it works fine for objects with lists that are relatively small, but once the list of objects gets larger it takes forever. How would I optimize this and also set a limit for a max depth?
Thanks for any help
I didn't profile the following code, but it should be very fast on complex structures.
1) It uses dynamic code generation.
2) It uses a type-based cache for the generated dynamic delegates.
public class VisitorManager : HashSet<object>
{
delegate void Visitor(VisitorManager manager, object entity);
Dictionary<Type, Visitor> _visitors = new Dictionary<Type, Visitor>();
void ConvertToUpperEnum(IEnumerable entity)
{
// TODO: this can be parallelized, but then we should thread-safe lock the cache
foreach (var obj in entity)
ConvertToUpper(obj);
}
public void ConvertToUpper(object entity)
{
if (entity != null && !Contains(entity))
{
Add(entity);
var visitor = GetCachedVisitor(entity.GetType());
if (visitor != null)
visitor(this, entity);
}
}
Type _lastType;
Visitor _lastVisitor;
Visitor GetCachedVisitor(Type type)
{
if (type == _lastType)
return _lastVisitor;
_lastType = type;
return _lastVisitor = GetVisitor(type);
}
Visitor GetVisitor(Type type)
{
Visitor result;
if (!_visitors.TryGetValue(type, out result))
_visitors[type] = result = BuildVisitor(type);
return result;
}
static MethodInfo _toUpper = typeof(string).GetMethod("ToUpper", new Type[0]);
static MethodInfo _convertToUpper = typeof(VisitorManager).GetMethod("ConvertToUpper", BindingFlags.Instance | BindingFlags.Public);
static MethodInfo _convertToUpperEnum = typeof(VisitorManager).GetMethod("ConvertToUpperEnum", BindingFlags.Instance | BindingFlags.NonPublic);
Visitor BuildVisitor(Type type)
{
var visitorManager = Expression.Parameter(typeof(VisitorManager), "manager");
var entityParam = Expression.Parameter(typeof(object), "entity");
var entityVar = Expression.Variable(type, "e");
var cast = Expression.Assign(entityVar, Expression.Convert(entityParam, type)); // T e = (T)entity;
var statements = new List<Expression>() { cast };
foreach (var prop in type.GetProperties())
{
// if cannot read or cannot write - ignore property
if (!prop.CanRead || !prop.CanWrite) continue;
var propType = prop.PropertyType;
// if property is value type - ignore property
if (propType.IsValueType) continue;
var isString = propType == typeof(string);
// if it is a string property whose name contains "password" - ignore it
if (isString && prop.Name.Contains("password"))
continue;
#region e.Prop
var propAccess = Expression.Property(entityVar, prop); // e.Prop
#endregion
#region T value = e.Prop
var value = Expression.Variable(propType, "value");
var assignValue = Expression.Assign(value, propAccess);
#endregion
if (isString)
{
#region if (value != null) e.Prop = value.ToUpper();
var ifThen = Expression.IfThen(Expression.NotEqual(value, Expression.Constant(null, typeof(string))),
Expression.Assign(propAccess, Expression.Call(value, _toUpper)));
#endregion
statements.Add(Expression.Block(new[] { value }, assignValue, ifThen));
}
else
{
#region var i = value as IEnumerable;
var enumerable = Expression.Variable(typeof(IEnumerable), "i");
var assignEnum = Expression.Assign(enumerable, Expression.TypeAs(value, enumerable.Type));
#endregion
#region if (i != null) manager.ConvertToUpperEnum(i); else manager.ConvertToUpper(value);
var ifThenElse = Expression.IfThenElse(Expression.NotEqual(enumerable, Expression.Constant(null)),
Expression.Call(visitorManager, _convertToUpperEnum, enumerable),
Expression.Call(visitorManager, _convertToUpper, value));
#endregion
statements.Add(Expression.Block(new[] { value, enumerable }, assignValue, assignEnum, ifThenElse));
}
}
// no blocks
if (statements.Count <= 1)
return null;
return Expression.Lambda<Visitor>(Expression.Block(new[] { entityVar }, statements), visitorManager, entityParam).Compile();
}
}
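Usage is a single visitor instance per conversion run; the instance itself doubles as the 'visited' set:
var manager = new VisitorManager();
manager.ConvertToUpper(entity); // walks the object graph once, reusing the compiled visitor per type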
It looks pretty lean to me. The only thing I can think of would be to parallelize this. If I get a chance I will try to work something out and edit my answer.
Here is how to limit the depth.
private static void ConvertToUpper(object entity, Hashtable visited, int depth)
{
if (depth > MAX_DEPTH) return;
// Omitted code for brevity.
// Example usage here.
ConvertToUpper(..., ..., depth + 1);
}
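Threaded through the method from the question, the depth check and the incremented recursive calls would look roughly like this (MAX_DEPTH is a constant you pick to fit your object graphs):
private const int MAX_DEPTH = 10;

private static void ConvertToUpper(object entity, Hashtable visited, int depth)
{
    if (depth > MAX_DEPTH || entity == null || visited.ContainsKey(entity))
        return;
    visited.Add(entity, entity);
    foreach (PropertyInfo propertyInfo in entity.GetType().GetProperties())
    {
        if (!propertyInfo.CanRead || !propertyInfo.CanWrite)
            continue;
        object propertyValue = propertyInfo.GetValue(entity, null);
        if (propertyInfo.PropertyType == typeof(string))
        {
            if (propertyValue != null && !propertyInfo.Name.Contains("password"))
                propertyInfo.SetValue(entity, ((string)propertyValue).ToUpper(), null);
        }
        else if (!propertyInfo.PropertyType.IsValueType)
        {
            var enumerable = propertyValue as IEnumerable;
            if (enumerable != null)
            {
                foreach (object value in enumerable)
                    ConvertToUpper(value, visited, depth + 1); // one level deeper per element
            }
            else
            {
                ConvertToUpper(propertyValue, visited, depth + 1); // one level deeper per nested object
            }
        }
    }
}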
What you could do is have a Dictionary with a type as the key and relevant properties as the values. You would then only need to search through the properties once for the ones you are interested in (by the looks of things IEnumerable and string) - after all, the properties the types have aren't going to change (unless you're doing some funky Emit stuff but I'm not too familiar with that)
Once you have this you could simply iterate all the properties in the Dictionary using the objects type as the key.
Something like this (I haven't actually tested it, but it does compile :) )
private static Dictionary<Type, List<PropertyInfo>> _properties = new Dictionary<Type, List<PropertyInfo>>();
private static void ExtractProperties(List<PropertyInfo> list, Type type)
{
if (type == null || type == typeof(object))
{
return; // We've reached the top
}
// Modify which properties you want here
// This is for Public, Protected, Private
const BindingFlags PropertyFlags = BindingFlags.DeclaredOnly |
BindingFlags.Instance |
BindingFlags.NonPublic |
BindingFlags.Public;
foreach (var property in type.GetProperties(PropertyFlags))
{
if (!property.CanRead || !property.CanWrite)
continue;
if ((property.PropertyType == typeof(string)) ||
(property.PropertyType.GetInterface("IEnumerable") != null))
{
if (!property.Name.Contains("password"))
{
list.Add(property);
}
}
}
// OPTIONAL: Navigate the base type
ExtractProperties(list, type.BaseType);
}
private static void ConvertToUpper(object entity, Hashtable visited)
{
if (entity != null && !visited.ContainsKey(entity))
{
visited.Add(entity, entity);
List<PropertyInfo> properties;
if (!_properties.TryGetValue(entity.GetType(), out properties))
{
properties = new List<PropertyInfo>();
ExtractProperties(properties, entity.GetType());
_properties.Add(entity.GetType(), properties);
}
foreach (PropertyInfo propertyInfo in properties)
{
object propertyValue = propertyInfo.GetValue(entity, null);
Type propertyType = propertyInfo.PropertyType;
if (propertyType == typeof(string))
{
if (propertyValue != null)
{
propertyInfo.SetValue(entity, ((string)propertyValue).ToUpper(), null);
}
}
else if (propertyValue != null) // It's IEnumerable
{
foreach (object value in (IEnumerable)propertyValue)
{
ConvertToUpper(value, visited);
}
}
}
}
}
Here is a block of code that should work to apply the max-depth limit that Brian Gideon mentioned, as well as parallelize things a bit. It's not perfect and could be refined, since I broke the value-type and non-value-type properties into 2 LINQ queries.
private static void ConvertToUpper(object entity, Hashtable visited, int depth)
{
if (entity == null || visited.ContainsKey(entity) || depth > MAX_DEPTH)
{
return;
}
visited.Add(entity, entity);
var properties = from p in entity.GetType().GetProperties()
where p.CanRead &&
p.CanWrite &&
p.PropertyType == typeof(string) &&
!p.Name.Contains("password") &&
p.GetValue(entity, null) != null
select p;
Parallel.ForEach(properties, (p) =>
{
p.SetValue(entity, ((string)p.GetValue(entity, null)).ToUpper(), null);
});
var valProperties = from p in entity.GetType().GetProperties()
where p.CanRead &&
p.CanWrite &&
!p.PropertyType.IsValueType &&
!p.Name.Contains("password") &&
p.GetValue(entity, null) != null
select p;
Parallel.ForEach(valProperties, (p) =>
{
// note: Hashtable is not thread-safe - synchronize access to visited if you keep the Parallel.ForEach
var propertyValue = p.GetValue(entity, null);
var enumerable = propertyValue as IEnumerable;
if (enumerable != null)
{
foreach (var value in enumerable)
ConvertToUpper(value, visited, depth + 1);
}
else
{
ConvertToUpper(propertyValue, visited, depth + 1);
}
});
}
There are a couple of immediate issues:
There is repeated evaluation of property information for what I am assuming are the same types.
Reflection is comparatively slow.
Issue 1. can be solved by memoizing property information about types and caching it so it does not have to be re-calculated for each recurring type we see.
The performance cost of issue 2 can be reduced by using IL code generation and dynamic methods. I grabbed code from here to implement dynamically generated (and, per point 1, memoized) highly efficient calls for getting and setting property values. Basically, IL code is generated at runtime to call the setter and getter of a property and is encapsulated in a method wrapper - this bypasses all the reflection steps (and some security checks...). Where the following code refers to "DynamicProperty" I have used the code from the previous link.
This method can also be parallelized as suggested by others, just ensure the "visited" cache and calculated properties cache are synchronized.
private static readonly Dictionary<Type, List<ProperyInfoWrapper>> _typePropertyCache = new Dictionary<Type, List<ProperyInfoWrapper>>();
private class ProperyInfoWrapper
{
public GenericSetter PropertySetter { get; set; }
public GenericGetter PropertyGetter { get; set; }
public bool IsString { get; set; }
public bool IsEnumerable { get; set; }
}
private static void ConvertToUpper(object entity, Hashtable visited)
{
if (entity != null && !visited.Contains(entity))
{
visited.Add(entity, entity);
foreach (ProperyInfoWrapper wrapper in GetMatchingProperties(entity))
{
object propertyValue = wrapper.PropertyGetter(entity);
if(propertyValue == null) continue;
if (wrapper.IsString)
{
wrapper.PropertySetter(entity, (((string)propertyValue).ToUpper()));
continue;
}
if (wrapper.IsEnumerable)
{
IEnumerable enumerable = (IEnumerable)propertyValue;
foreach (object value in enumerable)
{
ConvertToUpper(value, visited);
}
}
else
{
ConvertToUpper(propertyValue, visited);
}
}
}
}
private static IEnumerable<ProperyInfoWrapper> GetMatchingProperties(object entity)
{
List<ProperyInfoWrapper> matchingProperties;
if (!_typePropertyCache.TryGetValue(entity.GetType(), out matchingProperties))
{
matchingProperties = new List<ProperyInfoWrapper>();
foreach (PropertyInfo propertyInfo in entity.GetType().GetProperties())
{
if (!propertyInfo.CanRead || !propertyInfo.CanWrite)
continue;
if (propertyInfo.PropertyType == typeof(string))
{
if (!propertyInfo.Name.Contains("password"))
{
ProperyInfoWrapper wrapper = new ProperyInfoWrapper
{
PropertySetter = DynamicProperty.CreateSetMethod(propertyInfo),
PropertyGetter = DynamicProperty.CreateGetMethod(propertyInfo),
IsString = true,
IsEnumerable = false
};
matchingProperties.Add(wrapper);
continue;
}
}
if (!propertyInfo.PropertyType.IsValueType)
{
object propertyValue = propertyInfo.GetValue(entity, null);
bool isEnumerable = (propertyValue as IEnumerable) != null;
ProperyInfoWrapper wrapper = new ProperyInfoWrapper
{
PropertySetter = DynamicProperty.CreateSetMethod(propertyInfo),
PropertyGetter = DynamicProperty.CreateGetMethod(propertyInfo),
IsString = false,
IsEnumerable = isEnumerable
};
matchingProperties.Add(wrapper);
}
}
_typePropertyCache.Add(entity.GetType(), matchingProperties);
}
return matchingProperties;
}
While your question is about the performance of the code, there is another problem that others seem to miss: Maintainability.
While you might think this is not as important as the performance problem you are having, having code that is more readable and maintainable will make it easier to solve problems with it.
Here is an example of how your code might look after a few refactorings:
class HierarchyUpperCaseConverter
{
private HashSet<object> visited = new HashSet<object>();
public static void ConvertToUpper(object entity)
{
new HierarchyUpperCaseConverter().AlterHierarchyToUpper(entity);
}
private void AlterHierarchyToUpper(object entity)
{
// Don't process null references.
if (entity == null)
{
return;
}
// Prevent processing types that already have been processed.
if (this.visited.Contains(entity))
{
return;
}
this.visited.Add(entity);
this.ProcessEntity(entity);
}
private void ProcessEntity(object entity)
{
var properties =
this.GetProcessableProperties(entity.GetType());
foreach (var property in properties)
{
this.ProcessEntityProperty(entity, property);
}
}
private IEnumerable<PropertyInfo> GetProcessableProperties(Type type)
{
var properties =
from property in type.GetProperties()
where property.CanRead && property.CanWrite
where !property.PropertyType.IsValueType
where !(property.Name.Contains("password") &&
property.PropertyType == typeof(string))
select property;
return properties;
}
private void ProcessEntityProperty(object entity, PropertyInfo property)
{
object value = property.GetValue(entity, null);
if (value != null)
{
// check for string first: System.String also implements IEnumerable
if (value is string)
{
this.ProcessStringProperty(entity, property, (string)value);
}
else if (value is IEnumerable)
{
this.ProcessCollectionProperty(value as IEnumerable);
}
else
{
this.AlterHierarchyToUpper(value);
}
}
}
private void ProcessCollectionProperty(IEnumerable value)
{
foreach (object item in (IEnumerable)value)
{
// Make a recursive call.
this.AlterHierarchyToUpper(item);
}
}
private void ProcessStringProperty(object entity, PropertyInfo property, string value)
{
string upperCaseValue = ConvertToUpperCase(value);
property.SetValue(entity, upperCaseValue, null);
}
private string ConvertToUpperCase(string value)
{
// TODO: ToUpper is culture sensitive.
// Shouldn't we use ToUpperInvariant?
return value.ToUpper();
}
}
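Usage stays a one-liner, mirroring the original static call:
HierarchyUpperCaseConverter.ConvertToUpper(entity);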
While this code is more than twice as long as your snippet, it is more maintainable. In the process of refactoring it I even found a possible bug, one that is a lot harder to spot in the original: you try to convert all string values to upper case, but string values stored in object-typed properties are not converted. Look for instance at the following code.
class A
{
public object Value { get; set; }
}
var a = new A() { Value = "Hello" };
Perhaps this is exactly what you wanted, but the string "Hello" is not converted to "HELLO" in your code.
Another thing I'd like to note is that, while the only thing I tried to do was make your code more readable, my refactoring seems to be about 20% faster.
After refactoring the code I tried to improve its performance, but I found that it is particularly hard to improve. While others try to parallelize the code, I have to warn you about this: parallelizing isn't as easy as they might let you think. There is some synchronization going on between threads (in the form of the 'visited' collection). Don't forget that writing to a collection is not thread-safe. Using a thread-safe version or locking on it might degrade performance again; you will have to test this.
I also found out that the real performance bottleneck is all the reflection (especially the reading of all the property values). The only way to really speed this up is by hand-coding the operations for each and every type or, as others suggested, using lightweight code generation. However, this is pretty hard and it is questionable whether it is worth the trouble.
I hope you find my refactorings useful and wish you good luck with improving performance.
