ASP.NET SessionState mode SQLServer serialization with protobuf-net

Problem Background
I have been thinking of ways to optimize the out-of-process storage of session state in SQL Server, and a few I ran across are:
Disable session state on pages that do not require the session. Also, use read-only session state on pages that do not write to the session.
In ASP.NET 4.0, use the gzip compression option (compressionEnabled="true" on the sessionState element).
Try to keep the amount of data stored in the session to a minimum.
etc.
Right now, I have a single object (a class called SessionObject) stored in the session. The good news is that it is completely serializable.
Optimizing using protobuf-net
An additional way I thought might be good to optimize the storage of sessions would be to use protocol buffers (protobuf-net) serialization/deserialization instead of the standard BinaryFormatter. I understand I could have all of my objects implement ISerializable, but I'd like to not create DTOs or clutter up my Domain layer with serialize/deserialize logic.
Any suggestions using protobuf-net with session state SQL server mode would be great!

If the existing session-state code uses BinaryFormatter, then you can cheat by getting protobuf-net to act as an internal proxy for BinaryFormatter, by implementing ISerializable on your root object only:
[ProtoContract]
class SessionObject : ISerializable {
    public SessionObject() { }
    protected SessionObject(SerializationInfo info, StreamingContext context) {
        // deserialization: let protobuf-net populate this instance from the payload
        Serializer.Merge(info, this);
    }
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) {
        // serialization: let protobuf-net write the payload
        Serializer.Serialize(info, this);
    }
    [ProtoMember(1)]
    public string Foo { get; set; }
    ...
}
Notes:
only the root object needs to do this; any encapsulated objects will be handled automatically by protobuf-net
it will still add a little type metadata for the outermost object, but not much
you will need to decorate the members (and encapsulated types) accordingly (this is best done explicitly per member; there is an implicit "figure it out yourself" mode, but it is brittle if you add new members)
this will break existing state; changing the serialization mechanism is fundamentally a breaking change
If you want to ditch the type metadata from the root object, you would have to implement your own state provider (I think there is an example on MSDN);
advantage: smaller output
advantage: no need to implement ISerializable on the root object
disadvantage: you need to maintain your own state provider ;p
(all the other points raised above still apply)
Note also that the effectiveness of protobuf-net here will depend a bit on what the data is that you are storing. It should be smaller, but if you have a lot of overwhelmingly large strings it won't be much smaller, as protobuf still uses UTF-8 for strings.
If you do have lots of strings, you might consider additionally using gzip - I wrote a state provider for my last employer that tried gzip, and stored whichever (original or gzip) was smaller - obviously with a few checks, for example (a rough sketch follows this list):
don't gzip if it is smaller than [some value]
short-circuit the gzip compression early if the gzip output ever exceeds the original
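As a rough sketch of that store-the-smaller-one idea (the SessionPacker name, threshold, and one-byte format marker are purely illustrative, not a protobuf-net or ASP.NET API, and the early short-circuit is omitted for brevity):

using System.IO;
using System.IO.Compression;
using ProtoBuf;

static class SessionPacker
{
    // marker: 0 = raw protobuf payload, 1 = gzipped protobuf payload
    public static byte[] Pack(SessionObject obj)
    {
        byte[] raw;
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, obj);
            raw = ms.ToArray();
        }
        if (raw.Length < 512) return Tag(0, raw); // too small to bother gzipping

        byte[] zipped;
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress, true))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            zipped = ms.ToArray();
        }
        // keep whichever representation is smaller
        return zipped.Length < raw.Length ? Tag(1, zipped) : Tag(0, raw);
    }

    private static byte[] Tag(byte marker, byte[] payload)
    {
        var result = new byte[payload.Length + 1];
        result[0] = marker;
        payload.CopyTo(result, 1);
        return result;
    }
}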
The above can be used in combination with protobuf-net quite happily - and if you are writing a state-provider anyway you can drop the ISerializable etc for maximum performance.
A final option, if you really want, would be for me to add a "compression mode" property to [ProtoContract(..., CompressionMode = ...)], which:
would only apply for the ISerializable usage (for technical reasons, it doesn't make sense to change the primary layout, but this scenario would be fine)
automatically applies gzip during serialization/deserialization of the above [perhaps with the same checks I mention above]
would mean you don't need to add your own state provider
However, this is something I'd only really want to apply for "v2" (I'm being pretty brutal about bugfix only in v1, so that I can keep things sane).
Let me know if that would be of interest.

Related

Getting IMetadataDetailsProviders to Run More than Once in ASP.NET Core

This is a tricky question which will require some deep knowledge of the ASP.NET Core framework. I'll first explain what is happening in our application in the MVC 3 implementation.
There was a complex requirement which needed to be solved involving the ModelMetaData for our ViewModels on a particular view. This is a highly configurable application. So, for one "Journal Type", a property may be mandatory, whereas for another, the exact same property may be non-mandatory. Moreover, it may be a radio-button for one "Journal Type" and a select list for another. As there was a huge number of combinations, mixing and matching for all these configuration options, it was not practical to create a separate ViewModel type for each and every possible permutation. So, there was one ViewModel type and the ModelMetaData was set on the properties of that type dynamically.
This was done by creating a custom ModelMetadataProvider (by inheriting DataAnnotationsModelMetadataProvider).
Smash-cut to now, where we are upgrading the application and writing the server stuff in ASP.NET Core. I have identified that implementing IDisplayMetadataProvider is the equivalent way of modifying Model Metadata in ASP.NET Core.
The problem is, the framework has caching built into it and any class which implements IDisplayMetadataProvider only runs once. I discovered this while debugging the ASP.NET Core framework and this comment confirms my finding. Our requirement will no longer be met with such caching, as the first time the ViewModel type is accessed, the MetadataDetailsProvider will run and the result will be cached. But, as mentioned above, owing to the highly dynamic configuration, I need it to run prior to every ModelBinding. Otherwise, we will not be able to take advantage of ModelState. The first time that endpoint is hit, the meta-data is set in stone for all future requests.
And we kinda need to leverage that recursive process of going through all the properties using reflection to set the meta-data, as we don't want to have to do that ourselves (a massive endeavour beyond my pay-scale).
So, if anyone thinks there's something in the new Core framework which I have missed, by all means let me know. Even if it is as simple as removing that caching feature of ModelBinders and IDisplayMetadataProviders (that is what I'll be looking into over the next couple of days by going through the ASP.NET source).
Model metadata is cached for performance reasons. The DefaultModelMetadataProvider class, which is the default implementation of the IModelMetadataProvider interface, is responsible for this caching. If your application logic requires that metadata be rebuilt on every request, you should substitute this implementation with your own.
You will make your life easier if you inherit your implementation from DefaultModelMetadataProvider and override the bare minimum needed to achieve your goal. It seems like GetMetadataForType(Type modelType) should be enough:
public class CustomModelMetadataProvider : DefaultModelMetadataProvider
{
    public CustomModelMetadataProvider(ICompositeMetadataDetailsProvider detailsProvider)
        : base(detailsProvider)
    {
    }

    public CustomModelMetadataProvider(ICompositeMetadataDetailsProvider detailsProvider, IOptions<MvcOptions> optionsAccessor)
        : base(detailsProvider, optionsAccessor)
    {
    }

    public override ModelMetadata GetMetadataForType(Type modelType)
    {
        // Optimization for the intensively used System.Object
        if (modelType == typeof(object))
        {
            return base.GetMetadataForType(modelType);
        }

        var identity = ModelMetadataIdentity.ForType(modelType);
        DefaultMetadataDetails details = CreateTypeDetails(identity);

        // This part contains the same logic as the DefaultModelMetadata.DisplayMetadata property
        // See https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/ModelBinding/Metadata/DefaultModelMetadata.cs
        var context = new DisplayMetadataProviderContext(identity, details.ModelAttributes);

        // Here your implementation of IDisplayMetadataProvider will be called
        DetailsProvider.CreateDisplayMetadata(context);
        details.DisplayMetadata = context.DisplayMetadata;

        return CreateModelMetadata(details);
    }
}
To replace DefaultModelMetadataProvider with your CustomModelMetadataProvider, add the following in ConfigureServices():
services.AddSingleton<IModelMetadataProvider, CustomModelMetadataProvider>();
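For illustration, a minimal IDisplayMetadataProvider that the provider above would now invoke on every GetMetadataForType call might look like this (JournalViewModel and LookupDisplayName are hypothetical stand-ins for the configuration lookup described in the question):

public class JournalTypeDisplayMetadataProvider : IDisplayMetadataProvider
{
    public void CreateDisplayMetadata(DisplayMetadataProviderContext context)
    {
        // with CustomModelMetadataProvider above, this now runs per call
        // rather than once per cached type
        if (context.Key.ContainerType == typeof(JournalViewModel))
        {
            var propertyName = context.Key.Name;
            context.DisplayMetadata.DisplayName = () => LookupDisplayName(propertyName);
        }
    }

    private static string LookupDisplayName(string propertyName)
    {
        // hypothetical: consult the current "Journal Type" configuration here
        return propertyName;
    }
}

The provider itself is still registered the usual way, through MvcOptions.ModelMetadataDetailsProviders when configuring MVC.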

EF Caching: How to detach objects *completely* before inserting them into HttpRuntime cache?

Some background:
Working with:
.NET 4.5 (thinking of migrating to 4.5.1 if it's painless)
Web Forms
Entity Framework 5, Lazy Loading enabled
Context Per Request
IIS 8
Windows 2012 Datacenter
Point of concern: Memory Usage
In the project we are currently on - probably our first bigger project - we often read bigger chunks of data, coming from CSV imports, that are likely to stay the same for very long periods of time.
Unless someone explicitly re-imports the CSV data, it is guaranteed to be the same. This happens in more than one place in our project, and a similar approach is used for some regular documents that are often read by the users. We've decided to cache this data in the HttpRuntime cache.
It goes like this (we pull about 15,000 records, consisting mostly of strings):
//myObject and related methods are placeholders
public static List<myObject> GetMyCachedObjects()
{
    if (CacheManager.Exists(KeyConstants.keyConstantForMyObject))
    {
        return CacheManager.Get(KeyConstants.keyConstantForMyObject) as List<myObject>;
    }
    else
    {
        List<myObject> myObjectList = framework.objectProvider.GetMyObjects();
        CacheManager.Add(KeyConstants.keyConstantForMyObject, myObjectList, true, 5000);
        return myObjectList;
    }
}
The data retrieving for the above method is very simple and looks like this:
public List<myObject> GetMyObjects()
{
    return context.myObjectsTable.AsNoTracking().ToList();
}
There are probably things to be said about the code structure, but that's not my concern at the moment.
I began profiling our project as soon as I saw high memory usage and found many parts where our code could be optimized. I had never faced 300 simultaneous users before, and our internal tests, done by ourselves, were not enough to reveal the memory issues. I've highlighted and fixed numerous memory leaks, but I'd like to understand some Entity Framework related unknowns.
Given the above example, and using ANTS Profiler, I've noticed that 'myObject' and other similar objects are referencing many System.Data.Entity.DynamicProxies.myObject instances; additionally, there are lots of EntityKeys which hold on to integers. They aren't taking up much, but their count is relatively high.
For instance, 124 instances of 'myObject' are referencing nearly 300 System.Data.Entity.DynamicProxies.
Usually the reference chain looks like this, whatever the object is: some cache entry, then the object I've cached (and I now noticed many of them had been detached from the dbContext prior to caching), then the dynamic proxies, then the ObjectContext. I've no idea how to untie them.
My progress:
I did some research and found out that I might be caching something Entity Framework related together with those objects. I pulled them with AsNoTracking, but those DynamicProxies are still in memory and probably hold on to other things as well.
Important: I've observed some live instances of ObjectContext (74), slowly growing, but no instances of my unitOfWork, which holds the dbContext. Those seem to be disposed properly on a per-request basis.
I know how to detach, attach or modify state of an entry from my dbContext, which is wrapped in a unitOfWork, and I often do it. However that doesn't seem to be enough or I am asking for the impossible.
Questions:
Basically, what am I doing wrong with my caching approach when it comes to Entity Framework?
Is the growing number of ObjectContexts in memory a concern? I know the cache will eventually expire, but I'm worried about open connections or anything else this context might be holding.
Should I be detaching everything from the context before inserting it into the cache?
If yes, what is the best approach? Especially with a List, I cannot think of anything else but iterating over the collection and calling Detach one by one.
Bonus question: About 40% of the consumed memory is free (unallocated), I've no idea why .NET is reserving so much free memory in advance.
You can try projecting into a non-entity class containing only the properties you need, using the Select method.
public class MyObject2 {
    public int ID { get; set; }
    public string Name { get; set; }
}

public List<MyObject2> GetObjects() {
    return framework.provider.GetObjects().Select(
        x => new MyObject2 {
            ID = x.ID,
            Name = x.Name
        }).ToList();
}
Since you will be storing plain C# objects, you will not have to worry about dynamic proxies, and you will not have to call Detach on anything at all. You can also store only the few properties you actually need.
Even if you disable tracking, you will still see dynamic proxies, because EF uses a dynamic class derived from your class which stores extra metadata for the entity (e.g. relations - the names of foreign keys to other entities).
Steps to reduce memory here:
Re-new the context often.
Don't try to delete content from the context, or set it to Detached - it hangs around like a fart in a phone box.
e.g. context = new MyContext();
But if possible you should be using:
using (var context = new MyContext()) { ... } // short-lived contexts are best practice
Inside your context class you can set these configuration options:
this.Configuration.LazyLoadingEnabled = false;
this.Configuration.ProxyCreationEnabled = false; //<<<<<<<<<<< THIS one
this.Configuration.AutoDetectChangesEnabled = false;
You can disable proxies if you still feel they are hogging memory, but that may be unnecessary if you apply using to the context in the first place.
I would redesign the solution a bit:
You are storing all data as a single entry in the cache - I would move this and have an entry per cache item.
You are using the HttpRuntime cache - I would use AppFabric Caching (also MS, also free).
Not sure where you are calling that code from - I would call it on application start, so all the data is in memory when the user needs it (a rough sketch follows the links below).
You are using Entity SQL - for this I would use an EntityDataReader: http://msdn.microsoft.com/en-us/library/system.data.entityclient.entitydatareader(v=vs.110).aspx
See also:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
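A rough sketch of the entry-per-item and load-at-startup suggestions combined, reusing the question's own CacheManager wrapper (the key suffix and Id property are illustrative, not a real API):

protected void Application_Start(object sender, EventArgs e)
{
    // one cache entry per record instead of one big List<myObject>
    foreach (var item in framework.objectProvider.GetMyObjects())
    {
        CacheManager.Add(KeyConstants.keyConstantForMyObject + item.Id, item, true, 5000);
    }
}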

How could Reflection not lead to code smells?

I come from low level languages - C++ is the highest level I program in.
Recently I came across Reflection, and I just cannot fathom how it could be used without code smells.
The idea of inspecting a class/method/function at runtime, in my opinion, points to a flaw in design - I think most problems Reflection (tries to) solve could be solved with either polymorphism or proper use of inheritance.
Am I wrong? Do I misunderstand the concept and utility of Reflection?
I am looking for a good explanation of when to utilize Reflection where other solutions will fail or be too cumbersome to implement as well as when NOT to use it.
Please enlighten this low-level lubber.
Reflection is most commonly used to circumvent the static type system; however, it also has some interesting use cases:
Let's write an ORM!
If you're familiar with NHibernate or most other ORMs, you write classes which map to tables in your database, something like this:
// used to hook into the ORM's innards
public class ActiveRecordBase
{
    public void Save() { /* ... */ }
}

public class User : ActiveRecordBase
{
    public int ID { get; set; }
    public string UserName { get; set; }
    // ...
}
How do you think the Save() method is written? Well, in most ORMs, the Save method doesn't know what fields are in derived classes, but it can access them using reflection.
It's wholly possible to have the same functionality in a type-safe manner, simply by requiring a user to override a method to copy fields into a data-row object, but that would result in lots of boilerplate code and bloat.
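A minimal sketch of the reflective approach (the Database.Insert call and the name-based column mapping are invented for illustration; real ORMs are far more sophisticated):

public class ActiveRecordBase
{
    public void Save()
    {
        // the base class doesn't know the derived type's properties,
        // so it discovers them at runtime via reflection
        var columns = new Dictionary<string, object>();
        foreach (var prop in GetType().GetProperties())
        {
            columns[prop.Name] = prop.GetValue(this, null);
        }
        Database.Insert(GetType().Name, columns); // hypothetical data layer
    }
}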
Stubs!
Rhino Mocks is a mocking framework. You pass an interface type into a method, and behind the scenes the framework will dynamically construct and instantiate a mock object implementing the interface.
Sure, a programmer could write the boilerplate code for the mock object by hand, but why would she want to if the framework will do it for her?
Metadata!
We can decorate methods with attributes (metadata), which can serve a variety of purposes:
[FilePermission(Context.AllAccess)] // writes things to a file
[Logging(LogMethod.None)] // logger doesn't log this method
[MethodAccessSecurity(Role="Admin")] // user must be in "Admin" group to invoke method
[Validation(ValidationType.NotNull, "reportName")] // throws exception if reportName is null
public void RunDailyReports(string reportName) { ... }
You need to reflect over the method to inspect the attributes. Most AOP frameworks for .NET use attributes for policy injection.
Sure, you can write the same sort of code inline, but this style is more declarative.
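Reading those attributes back is plain reflection. A sketch, assuming the attributes above are ordinary custom attributes (the ReportService class, LogMethod property, and Logger call are invented):

var method = typeof(ReportService).GetMethod("RunDailyReports");
var logging = (LoggingAttribute)Attribute.GetCustomAttribute(method, typeof(LoggingAttribute));
if (logging == null || logging.LogMethod != LogMethod.None)
{
    Logger.Write("Invoking " + method.Name); // only log when the attribute allows it
}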
Let's make a dependency framework!
Many IoC containers require some degree of reflection to run properly. For example:
public class FileValidator
{
    public FileValidator(ILogger logger) { ... }
}

// client code
var validator = IoC.Resolve<FileValidator>();
Our IoC container will instantiate a FileValidator and pass an appropriate implementation of ILogger into the constructor. Which implementation? That depends on how it's implemented.
Let's say that I gave the name of the assembly and class in a configuration file. The container needs to read the name of the class as a string and use reflection to instantiate it.
Unless we know the implementation at compile time, there is no type-safe way to instantiate a class based on its name.
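The reflective core of that is small. A sketch, assuming the type name comes from appSettings:

// e.g. <add key="loggerType" value="MyApp.Logging.FileLogger, MyApp.Logging" />
string typeName = ConfigurationManager.AppSettings["loggerType"];
Type loggerType = Type.GetType(typeName, throwOnError: true);
var logger = (ILogger)Activator.CreateInstance(loggerType);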
Late Binding / Duck Typing
There are all kinds of reasons why you'd want to read the properties of an object at runtime. I'd pick logging as the simplest use case - let's say you were writing a logger which accepts any object and spits out all of its properties to a file.
public static void Log(string msg, object state) { ... }
You could override the Log method for all possible static types, or you could just use reflection to read the properties instead.
Some languages like OCaml and Scala support statically-checked duck typing (called structural typing), but sometimes you just don't have compile-time knowledge of an object's interface.
Or, as Java programmers know, sometimes the type system will get in your way and require you to write all kinds of boilerplate code. There's a well-known article which describes how many design patterns are simplified with dynamic typing.
Occasionally circumventing the type system allows you to refactor your code down much further than is possible with static types, resulting in a little bit cleaner code (preferably hidden behind a programmer friendly API :) ). Many modern static languages are adopting the golden rule "static typing where possible, dynamic typing where necessary", allowing users to switch between static and dynamic code.
Projects such as hibernate (O/R mapping) and StructureMap (dependency injection) would be impossible without Reflection. How would one solve these with polymorphism alone?
What makes these problems so difficult to solve any other way is that the libraries don't directly know anything about your class hierarchy - they can't. And yet they need to know the structure of your classes in order to - for example - map an arbitrary row of data from a database to a property in your class using only the name of the field and the name of your property.
Reflection is particularly useful for mapping problems. The idea of convention over code is becoming more and more popular and you need some type of Reflection to do it.
In .NET 3.5+ you have an alternative, which is to use expression trees. These are strongly-typed, and many problems that were classically solved using Reflection have been re-implemented using lambdas and expression trees (see Fluent NHibernate, Ninject). But keep in mind that not every language supports these kinds of constructs; when they're not available, you're basically stuck with Reflection.
In a way (and I hope I'm not ruffling too many feathers with this), Reflection is very often used as a workaround/hack in Object-Oriented languages for features that come for free in Functional languages. As functional languages become more popular, and/or more OO languages start implementing more functional features (like C#), we will most likely start to see Reflection used less and less. But I suspect it will always still be around, for more conventional applications like plugins (as one of the other responders helpfully pointed out).
Actually, you are already using a reflective system everyday: your computer.
Sure, instead of classes, methods and objects, it has programs and files. Programs create and modify files just like methods create and modify objects. But then programs are files themselves, and some programs even inspect or create other programs!
So, why is it so OK for a Linux install to be reflective that nobody even thinks about it, yet scary for OO programs?
I've seen good usages with custom attributes. Such as a database framework.
[DatabaseColumn("UserID")]
[PrimaryKey]
public Int32 UserID { get; set; }
Reflection can then be used to get further information about these fields. I'm pretty sure LINQ To SQL does something similar...
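As a sketch of what that might look like (assuming DatabaseColumnAttribute exposes the column name and the decorated property lives on a User class; both are illustrative):

var columns = new List<string>();
foreach (var prop in typeof(User).GetProperties())
{
    var attr = (DatabaseColumnAttribute)Attribute.GetCustomAttribute(prop, typeof(DatabaseColumnAttribute));
    if (attr != null)
    {
        columns.Add(attr.ColumnName); // e.g. "UserID"
    }
}
string sql = "SELECT " + string.Join(", ", columns) + " FROM Users";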
Other examples include test frameworks...
[Test]
public void TestSomething()
{
    Assert.AreEqual(5, 10);
}
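A toy version of how such a framework might discover and run those tests via reflection (assuming a custom TestAttribute; real frameworks add fixtures, setup/teardown, and much more):

foreach (var type in Assembly.GetExecutingAssembly().GetTypes())
{
    foreach (var method in type.GetMethods())
    {
        if (Attribute.IsDefined(method, typeof(TestAttribute)))
        {
            var suite = Activator.CreateInstance(type); // needs a parameterless constructor
            try
            {
                method.Invoke(suite, null);
                Console.WriteLine(method.Name + " passed");
            }
            catch (Exception ex)
            {
                Console.WriteLine(method.Name + " failed: " + ex.InnerException);
            }
        }
    }
}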
Without reflection you often have to repeat yourself a lot.
Consider these scenarios:
Run a set of methods e.g. the testXXX() methods in a test case
Generate a list of properties in a gui builder
Make your classes scriptable
Implement a serialization scheme
You can't typically do these things in C/C++ without repeating the whole list of affected methods and properties somewhere else in the code.
In fact C/C++ programmers often use an Interface description language to expose interfaces at runtime (providing a form of reflection).
Judicious use of reflection and annotations, combined with well-defined coding conventions, can avoid rampant code repetition and increase maintainability.
I think that reflection is one of these mechanisms that are powerful but can be easily abused. You're given the tools to become a "power user" for very specific purposes, but it is not meant to replace proper object oriented design (just as object oriented design is not a solution for everything) or to be used lightly.
Because of the way Java is structured, you are already paying the price of representing your class hierarchy in memory at runtime (compare to C++, where you don't pay any cost unless you use things like virtual methods). There is therefore no cost rationale for blocking reflection entirely.
Reflection is useful for things like serialization - tools like Hibernate or Digester can use it to determine how best to store objects automatically. Similarly, the JavaBeans model is based on the names of methods (a questionable decision, I admit), but you need to be able to inspect what properties are available to build things like visual editors. In more recent versions of Java, reflection is what makes annotations useful - you can write tools and do metaprogramming using these entities, which exist in the source code but are accessible at runtime.
It is possible to go through an entire career as a Java programmer and never have to use reflection because the problems that you deal with don't require it. On the other hand, for certain problems, it is quite necessary.
As mentioned above, reflection is mostly used to implement code that needs to deal with arbitrary objects. ORM mappers, for instance, need to instantiate objects from user-defined classes and fill them with values from database rows. The simplest way to achieve this is through reflection.
Actually, you are partially right: reflection is often a code smell. Most of the time you work with your own classes and do not need reflection; if you know your types, using it anyway needlessly sacrifices type safety, performance, readability and everything that's good in this world. However, if you are writing libraries, frameworks or generic utilities, you will probably run into situations best handled with reflection.
This is in Java, which is what I'm familiar with. Other languages offer stuff that can be used to achieve the same goals, but in Java, reflection has clear applications for which it's the best (and sometimes, only) solution.
Unit testing software and frameworks like NUnit use reflection to get a list of tests to execute and then execute them. They find all the test suites in a module/assembly/binary (in C# these are represented by classes) and all the tests in those suites (in C# these are methods in a class). NUnit also allows you to mark a test with an expected exception in case you're testing for exception contracts.
Without reflection, you'd need to specify somehow what test suites are available and what tests are available in each suite. Also, things like exceptions would need to be tested manually. C++ unit testing frameworks I've seen have used macros to do this, but some things are still manual and this design is restrictive.
Paul Graham has a great essay that may say it best:
Programs that write programs? When would you ever want to do that? Not very often, if you think in Cobol. All the time, if you think in Lisp. It would be convenient here if I could give an example of a powerful macro, and say there! how about that? But if I did, it would just look like gibberish to someone who didn't know Lisp; there isn't room here to explain everything you'd need to know to understand what it meant. In Ansi Common Lisp I tried to move things along as fast as I could, and even so I didn't get to macros until page 160.
concluding with . . .
During the years we worked on Viaweb I read a lot of job descriptions. A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.
It is all about rapid development.
var myObject = // Something with quite a few properties.
var props = new Dictionary<string, object>();
foreach (var prop in myObject.GetType().GetProperties())
{
    props.Add(prop.Name, prop.GetValue(myObject, null));
}
Plugins are a great example.
Tools are another example - inspector tools, build tools, etc.
I will give an example from a C# solution I was given when I started learning.
It contained classes marked with the [Exercise] attribute; each class contained methods which were not implemented (throwing NotImplementedException). The solution also had unit tests, which all failed.
The goal was to implement all the methods and pass all the unit tests.
The solution also had a user interface which would read all classes marked with [Exercise] and use reflection to generate a user interface for them.
We were later asked to implement our own methods, and later still to understand how the user interface 'magically' changed to include all the new methods we had implemented.
Extremely useful, but often not well understood.
The idea behind this was to be able to query any GUI object's properties, to provide them in a GUI to be customized and preconfigured. Now its uses have been extended and have proved to be feasible.
It's very useful for dependency injection. You can explore loaded assemblies' types implementing a given interface with a given attribute. Combined with proper configuration files, it proves to be a very powerful and clean way of adding new inherited classes without modifying the client code.
Also, it's useful if you are building an editor that doesn't really care about the underlying model but rather about how the objects are structured directly, à la System.Windows.Forms.PropertyGrid.
Without reflection no plugin architecture will work!
Very simple example in Python. Suppose you have a class that has three methods:
class SomeClass(object):
    def methodA(self):
        pass  # some code
    def methodB(self):
        pass  # some code
    def methodC(self):
        pass  # some code
Now, in some other class you want to decorate those methods with some additional behaviour (i.e. you want that class to mimic SomeClass, but with additional functionality).
This is as simple as:
class SomeOtherClass(object):
    def __init__(self, someclass_instance):
        self.someclass_instance = someclass_instance

    def __getattr__(self, attr_name):
        # do something nice, then hand back the method that the caller requested
        return getattr(self.someclass_instance, attr_name)
With reflection, you can write a small amount of domain independent code that doesn't need to change often versus writing a lot more domain dependent code that needs to change more frequently (such as when properties are added/removed). With established conventions in your project, you can perform common functions based on the presence of certain properties, attributes, etc. Data transformation of objects between different domains is one example where reflection really comes in handy.
Or a simpler example within a domain, where you want to transform data from the database into data objects without needing to modify the transformation code when properties change, so long as conventions are maintained (in this case matching property names and a specific attribute):
///--------------------------------------------------------------------------------
/// <summary>Transform data from the input data reader into the output object. Each
/// element to be transformed must have the DataElement attribute associated with
/// it.</summary>
///
/// <param name="inputReader">The database reader with the input data.</param>
/// <param name="outputObject">The output object to be populated with the input data.</param>
/// <param name="filterElements">Data elements to filter out of the transformation.</param>
///--------------------------------------------------------------------------------
public static void TransformDataFromDbReader(DbDataReader inputReader, IDataObject outputObject, NameObjectCollection filterElements)
{
    try
    {
        // add all public properties with the DataElement attribute to the output object
        foreach (PropertyInfo loopInfo in outputObject.GetType().GetProperties())
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of property to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.PropertyType.UnderlyingSystemType.FullName, false), null);
                            }
                        }
                    }
                }
            }
        }
        // add all fields with the DataElement attribute to the output object
        foreach (FieldInfo loopInfo in outputObject.GetType().GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance))
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of field to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.FieldType.UnderlyingSystemType.FullName, false));
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        bool reThrow = ExceptionHandler.HandleException(ex);
        if (reThrow) throw;
    }
}
One usage not yet mentioned: while reflection is generally thought of as "slow", it's possible to use reflection to improve the efficiency of code which uses interfaces like IEquatable<T> when they exist, and uses other means of checking equality when they do not.
In the absence of reflection, code that wanted to test whether two objects were equal would have to either use Object.Equals(Object) or else check at run-time whether an object implemented IEquatable<T> and, if so, cast the object to that interface. In either case, if the type of thing being compared was a value type, at least one boxing operation would be required.
Using reflection makes it possible to have a class EqualityComparer<T> automatically construct a type-specific implementation of IEqualityComparer<T> for any particular type T, with that implementation using IEquatable<T> if it is defined, or Object.Equals(Object) if it is not. The first time one uses EqualityComparer<T>.Default for any particular type T, the system will have to go through more work than would be required to test, once, whether a particular type implements IEquatable<T>. On the other hand, once that work is done, no more run-time type checking will be required, since the system will have produced a custom-built implementation of EqualityComparer<T> for the type in question.
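A small illustration of the resulting behavior:

// EqualityComparer<int>.Default is constructed once for int;
// because int implements IEquatable<int>, later calls avoid boxing
var comparer = EqualityComparer<int>.Default;
Console.WriteLine(comparer.Equals(42, 42)); // True
Console.WriteLine(comparer.Equals(42, 43)); // False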

NHibernate compromising domain objects

I'm writing an ASP.NET MVC application using NHibernate as my ORM. I'm struggling a bit with the design though, and would like some input.
So, my question is: where do I put my business/validation logic (e.g., an email address requires an @, a password must be >= 8 characters, etc.)?
So, which makes the most sense:
Put it on the domain objects themselves, probably in the property setters?
Introduce a service layer above my domain layer and have validators for each domain object in there?
Maintain two sets of domain objects: one dumb set for NHibernate, and another smart set for the business logic (with some sort of adapting layer in between them).
My main concern is with putting all the validation on the domain objects used by NHibernate: it seems inefficient to have unnecessary validation checks every time I pull objects out of the database. To be clear, I think this is a real concern, since this application will be very demanding (think millions of rows in some tables).
Update:
I removed a line with incorrect information regarding NHibernate.
To clear up a couple of misconceptions:
a) NHib does not require you to map onto properties. Using access strategies you can easily map onto fields. You can also define your own custom strategy if you prefer to use something other than properties or fields.
b) If you do map onto properties, getters and setters do not need to be public. They can be protected or even private.
Having said that, I totally agree that domain object validation makes no sense when you are retrieving an entity from the database. As a result of this, I would go with services that validate data when the user attempts to update an entity.
My current project is exactly the same as yours: MVC for the front end and NHibernate for persistence. Currently, my validation is at a service layer (your option 2). But while I am doing the coding, I have a feeling that my code is not as clean as I would wish. For example:
public class EntityService
{
    public void SaveEntity(Entity entity)
    {
        if (entity.Property1 == something)
        {
            throw new InvalidDataException();
        }
        if (entity.Property2 == somethingElse)
        {
            throw new InvalidDataException();
        }
        ...
    }
}
This makes me feel that EntityService is a "God Class": it knows way too much about the Entity class, and I don't like it. To me, it feels much better to let the entity classes worry about themselves. But I also understand your concern about the NHibernate performance issue. So, my suggestion is to implement the validation logic in the setters and use field access for the NHibernate mapping.
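A sketch of that combination (the @-check is the question's own example rule; field.camelcase-underscore is a standard NHibernate access strategy, though the class and mapping here are illustrative):

public class User
{
    // NHibernate maps this field directly, e.g.
    //   <property name="Email" access="field.camelcase-underscore" />
    // so no validation runs when entities are loaded from the database
    private string _email;

    public virtual string Email
    {
        get { return _email; }
        set
        {
            if (value == null || !value.Contains("@"))
                throw new ArgumentException("Invalid email address.");
            _email = value;
        }
    }
}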

Singleton vs Cache ASP.NET

I have created a Registry class in .NET which is a singleton. Apparently this singleton behaves as if it were kept in the Cache (the singleton object is available to each session). Is this good practice, or should I add this singleton to the Cache?
Also: do I need to watch out for concurrency problems with the GetInstance() function?
namespace Edu3.Business.Registry
{
    public class ExamDTORegistry
    {
        private static ExamDTORegistry instance;
        private Dictionary<int, ExamDTO> examDTODictionary;

        private ExamDTORegistry()
        {
            examDTODictionary = new Dictionary<int, ExamDTO>();
        }

        public static ExamDTORegistry GetInstance()
        {
            if (instance == null)
            {
                instance = new ExamDTORegistry();
            }
            return instance;
        }
    }
}
Well, your GetInstance method certainly isn't thread-safe - if two threads call it at the same time, they may well end up with two different instances. I have a page on implementing the singleton pattern, if that helps.
Does your code rely on it being a singleton? Bear in mind that if the AppDomain is reloaded, you'll get a new instance anyway.
I don't really see there being much benefit in putting the object in the cache though. Is there anything you're thinking of in particular?
Despite their presence in GoF singletons are generally considered bad practice. Is there any reason why you wish to have only one instance?
HttpContext.Cache is available to all sessions, but items in the cache can be removed from memory when they expire or if there is memory pressure.
HttpContext.Application is also available to all sessions and is a nice place to store persistent, application-wide objects.
Since you've already created a singleton and it works, I don't see why you should use one of the built-in collections instead, unless you need the extra functionality that Cache gives you.
Not sure what you mean by cache... if you want this cached (as in: kept in memory so that you don't have to fetch it again from some data store), then yes, you can put it in the Cache and it will be global for all users. Session means per user, so I don't think that is what you want.
I think the original question spoke to which was preferred. If you have data that remains static or is essentially immutable, then HTTP caching or the singleton pattern makes a lot of sense. If the singleton is loaded on application start-up, then there is no threading issue at all; once the singleton is in place you will receive the same instance you requested. The problem with a lot of the implementations I see is that people use both without fully thinking it out: why should you expire immutable configuration data? One client cached their data but still created ADO DB objects etc. when they last checked whether it was in the cache. Effectively both of these solutions will work for you, but to gain any positive effect, make sure you actually use the cache/singleton. In either case, if your data is not available, both should be refreshed at that moment.
I would make it like:
private static readonly ExamDTORegistry instance = new ExamDTORegistry();
Then you don't need to check for null, and it's thread-safe.
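Spelled out, that variant looks like this (note the Dictionary itself still needs its own locking if multiple requests write to it concurrently):

public class ExamDTORegistry
{
    // the CLR runs the static initializer once, before first use,
    // in a thread-safe way - no null check required
    private static readonly ExamDTORegistry instance = new ExamDTORegistry();

    private Dictionary<int, ExamDTO> examDTODictionary;

    private ExamDTORegistry()
    {
        examDTODictionary = new Dictionary<int, ExamDTO>();
    }

    public static ExamDTORegistry GetInstance()
    {
        return instance;
    }
}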
