I'm trying to understand which concrete types provide the implementations of interfaces in an IoC (dependency injection) container. My implementation works fine when no delegates are involved. However, I'm having trouble when a delegate method is passed as the type factory, as I can't get Mono.Cecil to give me back the concrete type or a method reference to the factory. Specifically, I'm trying to build a component that can work with the IServiceCollection container used by ASP.NET REST APIs. I've created a 'minimised' set of code below to make the problem easy to explain.
Consider the following C# code:
interface IServiceProvider {}
interface IServiceCollection {}

class ServicesCollection : IServiceCollection {}

interface IMongoDBContext {}

class MongoDBContext : IMongoDBContext
{
    public MongoDBContext(string configName) {}
}

static class Extensions
{
    public static IServiceCollection AddSingleton<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class
    {
        return null;
    }
}

class Foo
{
    void Bar()
    {
        IServiceCollection services = new ServicesCollection();
        services.AddSingleton<IMongoDBContext>(s => new MongoDBContext("mongodbConfig"));
    }
}
Having successfully located the 'services.AddSingleton' call as a MethodReference, I'm unable to see any reference to the MongoDBContext class or its constructor. When printing all the instructions with .ToString() I also can't see anything relevant in the IL - I do see the numbered generic parameter as !!0, but that doesn't help if I can't resolve it to a type or to the factory method.
Does anyone have any ideas on how to solve this?
Most likely your code is looking in the wrong place.
The C# compiler caches the conversion of a lambda expression to a delegate.
If you look at sharplab.io you'll see that the compiler emits an inner class '<>c' inside your Foo class, and in that class it emits the method '<Bar>b__0_0' that is passed as the delegate (see the ldftn opcode).
I don't think there's an easy, non-fragile way to find that method.
That said, one option would be to:
1. Find the AddSingleton() method call.
2. From there, walk back through the previous instructions to identify which one pushes the value consumed by the call found in step 1 (the safest way to do that is to consider how each instruction you visit changes the stack). In the code I've linked, it would be IL_0021 (a dup) of the Bar() method.
3. Do something similar to step 2, but now looking for the instruction that pushes the method reference (a ldftn) used by the ctor of Func<T, R>; in the code linked, it would be IL_0016.
4. Now you can inspect the body (in the code linked, Foo/'<>c'::'<Bar>b__0_0').
Note that this approach has some holes, though; for instance, if you call AddSingleton() with a variable/parameter/field, as I've done (services.AddSingleton(_func);), you'll need to chase the initialization of that member to find the referenced method.
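As a rough illustration of steps 1-4, here is a minimal Mono.Cecil sketch. It deliberately skips the stack simulation described in step 2 and simply walks back to the nearest ldftn, so it only handles the common inline-lambda case:

using Mono.Cecil;
using Mono.Cecil.Cil;

static MethodDefinition FindFactoryMethod(MethodDefinition method)
{
    var il = method.Body.Instructions;
    for (int i = 0; i < il.Count; i++)
    {
        // Step 1: find the call to AddSingleton().
        if (il[i].OpCode == OpCodes.Call &&
            il[i].Operand is MethodReference target &&
            target.Name == "AddSingleton")
        {
            // Steps 2-3 (simplified): walk back to the ldftn that loads the
            // compiler-generated lambda method into the Func<,> constructor.
            for (int j = i - 1; j >= 0; j--)
            {
                if (il[j].OpCode == OpCodes.Ldftn)
                    return ((MethodReference)il[j].Operand).Resolve();
            }
        }
    }
    return null;
}

Once you have the resolved lambda body (step 4), scanning its instructions for a newobj operand gives you the concrete type's constructor - MongoDBContext in the example above.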
Interestingly, at some point Cecil project did support flow analysis (https://github.com/mono/cecil-old/tree/master/flowanalysis).
If you have access to the source code, I think it would be easier to use Roslyn to analyze it (instead of analyzing the assembly).
I use SpringBoot for REST web services development and SonarQube for static analysis.
I have a few endpoints in my application that look like this:
@PostMapping
ResponseEntity<?> addSomething(@RequestBody Some object) {
    // some code there
    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
}
SonarQube complains about using ResponseEntity with a wildcard, reporting a Critical issue: "Generic wildcard types should not be used in return parameters".
I wonder if I should disable this verification in SonarQube or come up with something different for return type for these cases.
What do you think about it?
@PostMapping
ResponseEntity<Object> addSomething(@RequestBody Some object) {
    // some code there
    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
}
This will also remove the error. It is still very generic, but it is one of the solutions if you want to return different types based on the outcome. For instance:
@PostMapping
ResponseEntity<Object> addSomething(@RequestBody Some object, BindingResult bindingResult) {
    // Will return ResponseEntity<> with errors
    ResponseEntity<Object> errors = mapValidationService(bindingResult);
    if (!ObjectUtils.isEmpty(errors)) return errors;

    // some code there
    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
}
So actually I find the rule pretty self-describing:
Using a wildcard as a return type implicitly means that the return value should be considered read-only, but without any way to enforce this contract.
Let's take the example of a method returning a List<? extends Animal>. Is it possible to add a Dog or a Cat to this list? We simply don't know. The consumer of a method should not have to deal with such disruptive questions.
https://sonarcloud.io/organizations/default/rules#rule_key=squid%3AS1452
So actually, in your case you do not want just any kind of class in there; you specifically want a Serializable object, for an obvious reason: it is going to be serialized later on.
So instead of using ? it would be more suitable in your case to use Serializable. This is always case-dependent, but normally you definitely expect some kind of common interface or base class as a return value. That way, the next developer knows exactly what to expect and what kind of functionality they can rely on.
Finally I've removed <?> from return value, so the code looks like the following now:
@PostMapping
ResponseEntity addSomething(@RequestBody Some object) {
    // some code there
    return new ResponseEntity<>(HttpStatus.NO_CONTENT);
}
SonarQube doesn't complain anymore and code seems a little bit simpler now.
For ASP.NET Web API, I've been working on my own implementation of IHttpControllerActivator and am left wondering when (or why?) to use the HttpRequestMessage extension method "RegisterForDispose".
I see examples like this, and I can see the relevance in it, since IHttpController doesn't inherit IDisposable, and an implementation of IHttpController doesn't guarantee its own dispose logic.
public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
    var controller = (IHttpController)_kernel.Get(controllerType);
    request.RegisterForDispose(new Release(() => _kernel.Release(controller)));
    return controller;
}
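For context, Release in the snippet above isn't a framework type; a minimal sketch of such a helper is just an IDisposable wrapper around a callback:

public class Release : IDisposable
{
    private readonly Action _release;

    public Release(Action release)
    {
        _release = release;
    }

    // Invoked when the request is disposed, which triggers the container release.
    public void Dispose()
    {
        _release();
    }
}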
But then I see something like this and begin to wonder:
public IHttpController Create(
    HttpRequestMessage request,
    HttpControllerDescriptor controllerDescriptor,
    Type controllerType)
{
    if (controllerType == typeof(RootController))
    {
        var disposableQuery = new DisposableStatusQuery();
        request.RegisterForDispose(disposableQuery);
        return new RootController(disposableQuery);
    }
    return null;
}
In this instance RootController isn't registered for disposal here, presumably because it's an ApiController or MVC controller - and will thus dispose of itself.
The instance of DisposableStatusQuery is registered for disposal since it's a disposable object, but why couldn't the controller dispose of the instance itself? RootController has knowledge of disposableQuery (or rather, its interface or abstract base), so it would know it's disposable.
When would I actually need to use HttpRequestMessage.RegisterForDispose?
One scenario I've found it useful for: a custom ActionFilter.
Because the attribute is cached/re-used, items within the attribute shouldn't rely on the controller being disposed (to my understanding - and probably with caveats)... so in order to create a custom attribute which isn't tied to a particular controller type/implementation, you can use this technique to clean up your stuff. In my case, it's for an ambient DbContextScope attribute.
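A rough sketch of what such a filter could look like (AmbientScope here is a made-up stand-in for the real per-request resource, e.g. a DbContextScope):

using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Stand-in for the real disposable resource; the name is illustrative.
public sealed class AmbientScope : IDisposable
{
    public void Dispose() { /* tear down the ambient context here */ }
}

public class AmbientScopeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // The attribute instance is cached across requests, so the per-request
        // resource is handed to the request for disposal rather than being
        // stored on the attribute or tied to a controller.
        var scope = new AmbientScope();
        actionContext.Request.RegisterForDispose(scope);
    }
}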
RegisterForDispose is a hook that will be called when the request is disposed. It is often used along with some of the dependency injection containers.
For instance, some containers (like Castle.Windsor) will by default track all dependencies that they resolve. This is due to Windsor's ReleasePolicy, LifecycledComponentsReleasePolicy, which states that it will keep track of all components that were created. In other words, the garbage collector cannot clean up your component while the container still tracks it, which results in memory leaks.
So, for example, when you define your own IHttpControllerActivator to use with a dependency injection container, you do so in order to resolve the concrete controller and all its dependencies. At the end of the request you need to release all the dependencies the container created, otherwise you will end up with a big memory leak. RegisterForDispose gives you the opportunity to do exactly that.
I use RegisterForDispose with DI containers. Based on a blog post, I dispose the (nested) container after each request so that it releases all the objects it created.
One may want to hook code around the life cycle of a request that (1) has little to do with controllers and (2) does not subclass the request type.
I would imagine the idiomatic form of such code takes the shape of extension methods on HttpRequestMessage, for example. If the code allocates disposable resources, it would need to hook the disposal code to something. I'm not too familiar with the various extension points of the ASP.NET pipeline, but I suppose hooking code just to dispose of resources at the end of the request processing stage was common enough to justify a dedicated registration mechanism for disposable resources (as opposed to more generally subscribing code to be executed).
Since you're asking, I found a nice example scenario in this sample. Here, an Entity Framework context is set as a property of the request, and must be disposed of properly. While this property is intended to be used by controllers, they're not specific to any controller or controller super-class, so in my opinion this is a very sensible design choice. If you're curious why, this is because these requests are "OData batch requests" and controller actions will be invoked multiple times over the lifetime of each request (once per "operation"). Certain operations are grouped into atomic "changesets" that must be wrapped in transactions at a higher-level than controllers (a dedicated mechanism is used: an ODataBatchHandler, so that the controllers themselves are oblivious to this). Hence, controllers alone are not enough, as one cannot have them dispose of the context themselves in this scenario.
Hope this helps.
I've seen a lot about UnitOfWork and the Repository Pattern on the web but still don't have a clear understanding of why and when to use them - it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing a context in as a dependency on my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable {}

public interface IRepository<T> : IDisposable {} // Not sure if I need to reference IDisposable here

public interface IMyRepository : IRepository<MyEntity> {} // MyEntity being the entity type

public class MyRepository : IMyRepository
{
    private readonly ICustomObjectContext _customObjectContext;

    public MyRepository(ICustomObjectContext customObjectContext)
    {
        _customObjectContext = customObjectContext;
    }

    public void Dispose()
    {
        if (_customObjectContext != null)
        {
            _customObjectContext.Dispose();
        }
    }

    // ...
}
My current understanding of using UnitOfWork with the Repository Pattern is that it exists to perform an operation across multiple repositories - this behavior seems to contradict what @Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In this case it would be appropriate for the repo to perform operations against the context, since its scope is limited to the component using it - right?
My repos are registered in the IoC container as transient, so I should get a new one with each request. If that's correct, then I should also be getting a new context (with the code above) with each request and then disposing of it - that said... why would I use the UnitOfWork pattern with the Repository pattern if I'm following the convention above?
As far as I understand the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
I would use a simplified UoW if I wanted to move the context's SaveChanges away from the repositories when they share the same instance of the context across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic and using repos to persist changes in the DB. For the sake of simplicity we'll call them MethodA and MethodB, both containing a fair amount of logic for performing some activities. MethodA is used on its own in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, the changes MethodB made before it called MethodA will also be saved, whether we want that or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand.
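A small sketch of the trap being described, assuming the repositories share one request-scoped context and expose the Save() method mentioned above:

public class OrderService
{
    private readonly IMyRepository _repo; // backed by one context per request

    public OrderService(IMyRepository repo) { _repo = repo; }

    public void MethodA()
    {
        // ... business logic that stages changes ...
        _repo.Save(); // commits everything pending on the shared context
    }

    public void MethodB()
    {
        // ... stages changes that are not yet meant to be committed ...
        MethodA(); // oops: its Save() also commits MethodB's pending changes
        // ... logic that might still have decided to abandon those changes ...
    }
}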
I hope I described this clearly enough; it wasn't easy. Anyway, other than that I cannot see why UoW would be helpful in your scenario. As Dennis Traub quite correctly pointed out, ObjectContext and DbContext are in fact implementations of a UoW, so you'd probably be reinventing the wheel by implementing it on your own.
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
    PersonRepository personRepository = new PersonRepository(unitOfWork);
    var person = personRepository.FindById(personId);

    ProductRepository productRepository = new ProductRepository(unitOfWork);
    var product = productRepository.FindById(productId);

    person.CreateOrder(orderId, product);

    personRepository.Save();
}
I come from low level languages - C++ is the highest level I program in.
Recently I came across Reflection, and I just cannot fathom how it could be used without code smells.
The idea of inspecting a class/method/function at runtime, in my opinion, points to a flaw in design - I think most problems Reflection (tries to) solve could be solved with either polymorphism or proper use of inheritance.
Am I wrong? Do I misunderstand the concept and utility of Reflection?
I am looking for a good explanation of when to utilize Reflection where other solutions will fail or be too cumbersome to implement as well as when NOT to use it.
Please enlighten this low-level lubber.
Reflection is most commonly used to circumvent the static type system, however it also has some interesting use cases:
Let's write an ORM!
If you're familiar with NHibernate or most other ORMs, you write classes which map to tables in your database, something like this:
// used to hook into the ORM's innards
public class ActiveRecordBase
{
    public void Save() { /* ... */ }
}

public class User : ActiveRecordBase
{
    public int ID { get; set; }
    public string UserName { get; set; }
    // ...
}
How do you think the Save() method is written? Well, in most ORMs, the Save method doesn't know what fields are in derived classes, but it can access them using reflection.
It's wholly possible to have the same functionality in a type-safe manner, simply by requiring the user to override a method that copies fields into a datarow object, but that would result in lots of boilerplate code and bloat.
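For illustration, a stripped-down sketch of how such a Save() might discover the derived class's data via reflection (the column-naming convention is invented here):

public class ActiveRecordBase
{
    public void Save()
    {
        // Enumerate the *derived* type's properties at runtime; the base
        // class has no compile-time knowledge of User.ID or User.UserName.
        foreach (var prop in GetType().GetProperties())
        {
            string column = prop.Name;          // assume column name == property name
            object value = prop.GetValue(this); // value to persist
            // ... hand the (column, value) pair to the SQL generator here ...
        }
    }
}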
Stubs!
Rhino Mocks is a mocking framework. You pass an interface type into a method, and behind the scenes the framework will dynamically construct and instantiate a mock object implementing the interface.
Sure, a programmer could write the boilerplate code for the mock object by hand, but why would she want to if the framework will do it for her?
Metadata!
We can decorate methods with attributes (metadata), which can serve a variety of purposes:
[FilePermission(Context.AllAccess)] // writes things to a file
[Logging(LogMethod.None)] // logger doesn't log this method
[MethodAccessSecurity(Role="Admin")] // user must be in "Admin" group to invoke method
[Validation(ValidationType.NotNull, "reportName")] // throws exception if reportName is null
public void RunDailyReports(string reportName) { ... }
You need to reflect over the method to inspect the attributes. Most AOP frameworks for .NET use attributes for policy injection.
Sure, you can write the same sort of code inline, but this style is more declarative.
Let's make a dependency framework!
Many IoC containers require some degree of reflection to run properly. For example:
public class FileValidator
{
    public FileValidator(ILogger logger) { ... }
}

// client code
var validator = IoC.Resolve<FileValidator>();
Our IoC container will instantiate a file validator and pass an appropriate implementation of ILogger into the constructor. Which implementation? That depends on how it's implemented.
Let's say I gave the name of the assembly and class in a configuration file. The framework needs to read the class name as a string and use reflection to instantiate it.
Unless we know the implementation at compile time, there is no type-safe way to instantiate a class based on its name.
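A minimal sketch of that configuration-driven instantiation (the config key and type name here are invented for illustration):

using System;
using System.Configuration;

// e.g. <add key="loggerType" value="MyApp.Logging.FileLogger, MyApp" />
string typeName = ConfigurationManager.AppSettings["loggerType"];

Type loggerType = Type.GetType(typeName, throwOnError: true);
ILogger logger = (ILogger)Activator.CreateInstance(loggerType);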
Late Binding / Duck Typing
There are all kinds of reasons why you'd want to read the properties of an object at runtime. I'd pick logging as the simplest use case -- let's say you were writing a logger which accepts any object and spits out all of its properties to a file.
public static void Log(string msg, object state) { ... }
You could overload the Log method for all possible static types, or you could just use reflection to read the properties instead.
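A sketch of the reflection-based variant:

public static void Log(string msg, object state)
{
    Console.WriteLine(msg);

    // No compile-time knowledge of state's type is needed; just walk
    // whatever public properties it happens to expose.
    foreach (var prop in state.GetType().GetProperties())
        Console.WriteLine("  {0} = {1}", prop.Name, prop.GetValue(state));
}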
Some languages like OCaml and Scala support statically-checked duck typing (called structural typing), but sometimes you just don't have compile-time knowledge of an object's interface.
Or, as Java programmers know, sometimes the type system will get in your way and require you to write all kinds of boilerplate code. There's a well-known article which describes how many design patterns are simplified with dynamic typing.
Occasionally, circumventing the type system allows you to refactor your code down much further than is possible with static types, resulting in slightly cleaner code (preferably hidden behind a programmer-friendly API :) ). Many modern static languages are adopting the golden rule "static typing where possible, dynamic typing where necessary", allowing users to switch between static and dynamic code.
Projects such as hibernate (O/R mapping) and StructureMap (dependency injection) would be impossible without Reflection. How would one solve these with polymorphism alone?
What makes these problems so difficult to solve any other way is that the libraries don't directly know anything about your class hierarchy - they can't. And yet they need to know the structure of your classes in order to - for example - map an arbitrary row of data from a database to a property in your class using only the name of the field and the name of your property.
Reflection is particularly useful for mapping problems. The idea of convention over code is becoming more and more popular and you need some type of Reflection to do it.
In .NET 3.5+ you have an alternative, which is to use expression trees. These are strongly-typed, and many problems that were classically solved using Reflection have been re-implemented using lambdas and expression trees (see Fluent NHibernate, Ninject). But keep in mind that not every language supports these kinds of constructs; when they're not available, you're basically stuck with Reflection.
In a way (and I hope I'm not ruffling too many feathers with this), Reflection is very often used as a workaround/hack in Object-Oriented languages for features that come for free in Functional languages. As functional languages become more popular, and/or more OO languages start implementing more functional features (like C#), we will most likely start to see Reflection used less and less. But I suspect it will always still be around, for more conventional applications like plugins (as one of the other responders helpfully pointed out).
Actually, you are already using a reflective system everyday: your computer.
Sure, instead of classes, methods and objects, it has programs and files. Programs create and modify files just like methods create and modify objects. But then programs are files themselves, and some programs even inspect or create other programs!
So why is it so unremarkable for a Linux install to be reflective that nobody even thinks about it, yet scary for OO programs?
I've seen good usages with custom attributes, such as in a database framework.
[DatabaseColumn("UserID")]
[PrimaryKey]
public Int32 UserID { get; set; }
Reflection can then be used to get further information about these fields. I'm pretty sure LINQ To SQL does something similar...
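A sketch of how a framework might consume those attributes at runtime (SomeEntity stands in for the class declaring UserID above, and it's assumed that DatabaseColumnAttribute exposes its constructor argument as a ColumnName property):

foreach (var prop in typeof(SomeEntity).GetProperties())
{
    // Attribute lookup is pure reflection: no compile-time coupling to the entity.
    var attr = (DatabaseColumnAttribute)Attribute.GetCustomAttribute(
        prop, typeof(DatabaseColumnAttribute));

    if (attr != null)
        Console.WriteLine("{0} -> column '{1}'", prop.Name, attr.ColumnName);
}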
Other examples include test frameworks...
[Test]
public void TestSomething()
{
    Assert.AreEqual(5, 10);
}
Without reflection you often have to repeat yourself a lot.
Consider these scenarios:
Run a set of methods e.g. the testXXX() methods in a test case
Generate a list of properties in a gui builder
Make your classes scriptable
Implement a serialization scheme
You can't typically do these things in C/C++ without repeating the whole list of affected methods and properties somewhere else in the code.
In fact C/C++ programmers often use an Interface description language to expose interfaces at runtime (providing a form of reflection).
Judicious use of reflection and annotations, combined with well-defined coding conventions, avoids rampant code repetition and increases maintainability.
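As a small sketch of the first scenario above, discovering and running tests by naming convention might look like this (MyTestCase and the testXXX convention are assumptions):

var suite = new MyTestCase();
foreach (var method in suite.GetType().GetMethods())
{
    // Convention: any public, parameterless method named testXXX is a test.
    if (method.Name.StartsWith("test") && method.GetParameters().Length == 0)
        method.Invoke(suite, null);
}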
I think that reflection is one of these mechanisms that are powerful but can be easily abused. You're given the tools to become a "power user" for very specific purposes, but it is not meant to replace proper object oriented design (just as object oriented design is not a solution for everything) or to be used lightly.
Because of the way Java is structured, you are already paying the price of representing your class hierarchy in memory at runtime (compare to C++ where you don't pay any costs unless you use things like virtual methods). There is therefore no cost rationale for blocking it fully.
Reflection is useful for things like serialization - tools like Hibernate or Digester can use it to determine how best to store objects automatically. Similarly, the JavaBeans model is based on the names of methods (a questionable decision, I admit), but you need to be able to inspect what properties are available to build things like visual editors. In more recent versions of Java, reflection is what makes annotations useful - you can write tools and do metaprogramming using these entities that exist in the source code and are accessible at runtime.
It is possible to go through an entire career as a Java programmer and never have to use reflection because the problems that you deal with don't require it. On the other hand, for certain problems, it is quite necessary.
As mentioned above, reflection is mostly used to implement code that needs to deal with arbitrary objects. ORM mappers, for instance, need to instantiate objects from user-defined classes and fill them with values from database rows. The simplest way to achieve this is through reflection.
Actually, you are partially right: reflection is often a code smell. Most of the time you work with your own classes and do not need reflection; if you know your types and use reflection anyway, you are probably sacrificing type safety, performance, readability and everything that's good in this world, needlessly. However, if you are writing libraries, frameworks or generic utilities, you will probably run into situations best handled with reflection.
This is in Java, which is what I'm familiar with. Other languages offer stuff that can be used to achieve the same goals, but in Java, reflection has clear applications for which it's the best (and sometimes, only) solution.
Unit testing frameworks like NUnit use reflection to get the list of tests to execute and execute them. They find all the test suites in a module/assembly/binary (in C# these are represented by classes) and all the tests in those suites (in C# these are methods in a class). NUnit also allows you to mark a test with an expected exception in case you're testing for exception contracts.
Without reflection, you'd need to specify somehow which test suites are available and which tests are in each suite. Also, things like exceptions would need to be tested manually. The C++ unit testing frameworks I've seen use macros to do this, but some things remain manual and the design is restrictive.
Paul Graham has a great essay that may say it best:
Programs that write programs? When would you ever want to do that? Not very often, if you think in Cobol. All the time, if you think in Lisp. It would be convenient here if I could give an example of a powerful macro, and say there! how about that? But if I did, it would just look like gibberish to someone who didn't know Lisp; there isn't room here to explain everything you'd need to know to understand what it meant. In Ansi Common Lisp I tried to move things along as fast as I could, and even so I didn't get to macros until page 160.
concluding with . . .
During the years we worked on Viaweb I read a lot of job descriptions. A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening -- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.
It is all about rapid development.
var myObject = GetMyObject(); // something with quite a few properties
var props = new Dictionary<string, object>();
foreach (var prop in myObject.GetType().GetProperties())
{
    props.Add(prop.Name, prop.GetValue(myObject, null));
}
Plugins are a great example.
Tools are another example - inspector tools, build tools, etc.
I will give an example of a C# solution I was given when I started learning.
It contained classes marked with the [Exercise] attribute; each class contained methods which were not implemented (throwing NotImplementedException). The solution also had unit tests, which all failed.
The goal was to implement all the methods and pass all the unit tests.
The solution also had a user interface which would read all classes marked with [Exercise] and use reflection to generate a user interface from them.
We were later asked to implement our own methods, and later still to understand how the user interface 'magically' changed to include all the new methods we had implemented.
Extremely useful, but often not well understood.
The idea behind this was to be able to query any GUI object's properties, to present them in a GUI for customization and preconfiguration. Since then its uses have been extended and proven feasible.
It's very useful for dependency injection. You can explore the types in loaded assemblies that implement a given interface and carry a given attribute. Combined with proper configuration files, this proves to be a very powerful and clean way of adding new inherited classes without modifying the client code.
Also, it helps if you are building an editor that doesn't really care about the underlying model but rather works directly on how the objects are structured, a la System.Forms.PropertyGrid.
Without reflection no plugin architecture will work!
A very simple example in Python. Suppose you have a class that has 3 methods:
class SomeClass(object):
    def methodA(self):
        pass  # some code

    def methodB(self):
        pass  # some code

    def methodC(self):
        pass  # some code
Now, in some other class, you want to decorate those methods with some additional behaviour (i.e. you want that class to mimic SomeClass, but with additional functionality).
This is as simple as:
class SomeOtherClass(object):
    def __getattr__(self, attr_name):
        def wrapper(*args, **kwargs):
            # do something nice, then call the method the caller requested
            return getattr(self.someclass_instance, attr_name)(*args, **kwargs)
        return wrapper
With reflection, you can write a small amount of domain-independent code that rarely needs to change, instead of a lot more domain-dependent code that changes frequently (such as when properties are added or removed). With established conventions in your project, you can perform common functions based on the presence of certain properties, attributes, etc. Data transformation of objects between different domains is one example where reflection really comes in handy.
Or, for a simpler example within a domain: transforming data from the database into data objects without needing to modify the transformation code when properties change, so long as conventions are maintained (in this case, matching property names and a specific attribute):
///--------------------------------------------------------------------------------
/// <summary>Transform data from the input data reader into the output object. Each
/// element to be transformed must have the DataElement attribute associated with
/// it.</summary>
///
/// <param name="inputReader">The database reader with the input data.</param>
/// <param name="outputObject">The output object to be populated with the input data.</param>
/// <param name="filterElements">Data elements to filter out of the transformation.</param>
///--------------------------------------------------------------------------------
public static void TransformDataFromDbReader(DbDataReader inputReader, IDataObject outputObject, NameObjectCollection filterElements)
{
    try
    {
        // add all public properties with the DataElement attribute to the output object
        foreach (PropertyInfo loopInfo in outputObject.GetType().GetProperties())
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of property to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.PropertyType.UnderlyingSystemType.FullName, false), null);
                            }
                        }
                    }
                }
            }
        }

        // add all fields with the DataElement attribute to the output object
        foreach (FieldInfo loopInfo in outputObject.GetType().GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance))
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of field to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.FieldType.UnderlyingSystemType.FullName, false));
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        bool reThrow = ExceptionHandler.HandleException(ex);
        if (reThrow) throw;
    }
}
One usage not yet mentioned: while reflection is generally thought of as "slow", it's possible to use it to improve the efficiency of code which uses interfaces like IEquatable<T> when they exist, and falls back to other means of checking equality when they do not. In the absence of reflection, code that wanted to test whether two objects were equal would have to either use Object.Equals(Object) or else check at run-time whether an object implemented IEquatable<T> and, if so, cast the object to that interface. In either case, if the type being compared was a value type, at least one boxing operation would be required.
Using reflection makes it possible for EqualityComparer<T> to automatically construct a type-specific implementation of IEqualityComparer<T> for any particular type T, with that implementation using IEquatable<T> if it is defined, or Object.Equals(Object) if it is not. The first time one uses EqualityComparer<T>.Default for a particular type T, the system has to do more work than would be required to test, once, whether that type implements IEquatable<T>. On the other hand, once that work is done, no further run-time type checking is required, since the system will have produced a custom-built implementation of EqualityComparer<T> for the type in question.
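A quick usage sketch of the mechanism described above:

using System.Collections.Generic;

// For a value type like int, the comparer built by EqualityComparer<T>
// uses int's IEquatable<int> implementation, so no boxing occurs here.
var intComparer = EqualityComparer<int>.Default;
bool same = intComparer.Equals(5, 5); // true, and no boxing

// For a type that doesn't implement IEquatable<T>, the generated comparer
// falls back to Object.Equals(Object) instead.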