Groovy mixin on Spring-MVC controller - spring-mvc

I'm trying to use Groovy's mixin transformation on a Spring MVC controller class, but Spring does not pick up the request mapping from the mixed-in class.
class Reporter {
    @RequestMapping("report")
    public String doReport() {
        "report"
    }
}

@Mixin(Reporter)
@Controller
@RequestMapping("/a")
class AController {
    @RequestMapping("b")
    public String doB() {
        "b"
    }
}
When this code is run, the .../a/b URL is mapped and works, but .../a/report is not mapped and returns HTTP 404. In debug mode, I can access the doReport method on AController by duck typing.
This type of request-mapping inheritance actually works with Java classes when extends is used, so why does it not work with Groovy's mixin? I'm guessing either the mixin transformation does not carry the annotations over to the method, or Spring's component scanner runs before the mixin is processed. Either way, is there a Groovier way to achieve this (I don't want AController to extend Reporter for other reasons, so that's not an option)?

You can find below the responses I got from Guillaume Laforge (Groovy project manager) on the Groovy users mailing list.
Hi,
I haven't looked at Spring MVC's implementation, but I suspect that
it's using reflection to find the available methods. And since "mixin"
adds methods dynamically, those methods are not visible through
reflection.
We've had problems with @Mixin over the years, and its implementation
is far from ideal and bug-ridden despite our efforts to fix it. It's
likely we're going to deprecate it soon, and introduce something like
static mixins or traits, which would then add methods "for real" in
the class, which means such methods like doReport() would be seen by a
framework like Spring MVC.
There are a couple initiatives in that area already, like a prototype
branch from Cédric and also something in Grails which does essentially
that (ie. adding "real" methods through an AST transformation).
Although no firm decision has been made there, it's something we'd
like to investigate and provide soon.
Now back to your question, perhaps you could investigate using
@Delegate? You'd add an @Delegate Reporter reporter property in your
controller class. I don't remember if @Delegate carries the
annotation, I haven't double checked, but if it does, that might be a
good solution for you in the short term.
Guillaume
Using the @Delegate transformation did not work on its own, so I needed another suggestion.
One more try... I recalled us speaking about carrying annotations for
delegated methods... and we actually did implement that already. It's
not on by default, so you have to activate it with a parameter for the
@Delegate annotation:
http://groovy.codehaus.org/gapi/groovy/lang/Delegate.html#methodAnnotations
Could you please try with @Delegate(methodAnnotations = true)?
And the actual solution is:
class Reporter {
    @RequestMapping("report")
    public String doReport() {
        "report"
    }
}
@Controller
@RequestMapping("/a")
class AController {
    @Delegate(methodAnnotations = true) private Reporter reporter = new Reporter()

    @RequestMapping("b")
    public String doB() {
        "b"
    }
}

When you map requests with annotations, what happens is that once the container is started, it scans the classpath, looks for annotated classes and methods, and builds the map internally, instead of you manually writing the deployment descriptor.
The scanner reads methods and annotations from the compiled .class files. Maybe Groovy mixins are implemented in such a way that they are resolved at runtime, so the scanner software can't find them in the compiled bytecode.
To solve this problem, you have to find a way to statically mix the code in at compile time, so that the annotated method is actually written to the class file.

Related

Mono.Cecil: Getting Method Reference from delegate passed as Generic Parameter

I'm trying to get an understanding of which concrete types are providing the implementations of interfaces in an IOC (dependency injection) container. My implementation works fine when there are no delegates involved. However, I'm having trouble when a delegate method is passed as the type factory, as I can't get Mono.Cecil to give me the concrete type or a method reference to the factory back. I'm specifically in this case trying to build a component that can work with the IServiceCollection container for .Net ASP.Net REST APIs. I've created a 'minimised' set of code below to make it easy to explain the problem.
Consider the following C# code:
interface IServiceProvider {}
interface IServiceCollection {}
class ServicesCollection : IServiceCollection {}
interface IMongoDBContext {}

class MongoDBContext : IMongoDBContext
{
    public MongoDBContext(string configName) {}
}

static class Extensions
{
    public static IServiceCollection AddSingleton<TService>(this IServiceCollection services, Func<IServiceProvider, TService> implementationFactory) where TService : class
    {
        return null;
    }
}

class Foo
{
    void Bar()
    {
        IServiceCollection services = new ServicesCollection();
        services.AddSingleton<IMongoDBContext>(s => new MongoDBContext("mongodbConfig"));
    }
}
After successfully locating the services.AddSingleton call as a MethodReference, I'm unable to see any reference to the MongoDBContext class or its constructor. When printing all the instructions with .ToString() I also cannot see anything useful in the IL; I do see the generic parameter as !!0, but that doesn't help if I can't resolve it to a type or to the factory method.
Does anyone have any ideas on how to solve this?
Most likely your code is looking in the wrong place.
The C# compiler will try to cache the conversion of the lambda expression to a delegate.
If you look at it in sharplab.io you'll see that the compiler emits an inner class '<>c' inside your Foo class, and in that class it emits the method '<Bar>b__0_0' that will be passed as the delegate (see the ldftn opcode).
I don't think there's an easy, non-fragile way to find that method.
That said, one option would be to:
1. Find the AddSingleton() method call.
2. From there, walk back through the previous instructions and identify which one pushes the value consumed by the call from step 1 (the safest way to do that is to consider how each instruction you visit changes the stack). In the code I've linked, it would be IL_0021 (a dup) in the Bar() method.
3. Do something similar to step 2, but now look for the instruction that pushes the method reference (a ldftn) used by the ctor of Func<T, R>; in the code linked, it would be IL_0016.
4. Now you can inspect the body (in the code linked, Foo/'<>c'::'<Bar>b__0_0').
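A rough, hedged sketch of that backward walk with Mono.Cecil (simplified: it just scans backwards from the call for the nearest ldftn instead of tracking the stack precisely; the assembly path is hypothetical):

using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

class FindFactoryMethod
{
    static void Main()
    {
        // Hypothetical path to the assembly containing Foo.Bar().
        ModuleDefinition module = ModuleDefinition.ReadModule("MyAssembly.dll");
        MethodDefinition bar = module.Types.Single(t => t.Name == "Foo")
                                     .Methods.Single(m => m.Name == "Bar");

        foreach (Instruction instruction in bar.Body.Instructions)
        {
            // Step 1: find the call to AddSingleton.
            if (instruction.OpCode != OpCodes.Call && instruction.OpCode != OpCodes.Callvirt)
                continue;
            var callee = (MethodReference)instruction.Operand;
            if (callee.Name != "AddSingleton")
                continue;

            // Steps 2-3 (simplified): walk backwards to the nearest ldftn,
            // which pushes the method reference passed to the Func<,> ctor.
            for (Instruction current = instruction.Previous; current != null; current = current.Previous)
            {
                if (current.OpCode == OpCodes.Ldftn)
                {
                    var factory = (MethodReference)current.Operand;
                    Console.WriteLine("Factory body: " + factory.FullName); // e.g. Foo/'<>c'::'<Bar>b__0_0'
                    break;
                }
            }
        }
    }
}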
Note that this implementation has some holes though; for instance, if you call AddSingleton() with a variable/parameter/field as I've done (services.AddSingleton(_func);) you'll need to chase the initialization of that to find the referenced method.
Interestingly, at some point Cecil project did support flow analysis (https://github.com/mono/cecil-old/tree/master/flowanalysis).
If you have access to the source code, I think it would be easier to use Roslyn to analyze it (instead of analyzing the assembly).

Differences between different methods of Symfony service collection

For those of you who are familiar with building the Symfony container, do you know what the differences (if any) are between
Tagged service Collector using a Compiler pass
Tagged service Collector using the supported shortcut
Service Locator especially, one that collects services by tags
Specifically, I am wondering whether these methods differ in how soon the collected services become available during the container build process. I am also wondering about the 'laziness' of any of them.
It can certainly be confusing when trying to understand the differences. Keep in mind that the latter two approaches are fairly new. The documentation has not quite caught up. You might actually consider making a new project and doing some experimenting.
Approach 1 is basically an "old school" style. You have:
class MyCollector {
    private $handlers = [];
    public function addHandler(MyHandler $handler) {
        $this->handlers[] = $handler;
    }
}

# compiler pass
$myCollectorDefinition->addMethodCall('addHandler', [new Reference($handlerServiceId)]);
So basically the container will instantiate MyCollector then explicitly call addHandler for each handler service. In doing so, the handler services will be instantiated unless you do some proxy stuff. So no lazy creation.
The second approach provides a somewhat similar capability but uses an iterable object instead of a plain php array:
class MyCollection {
    public function __construct(iterable $handlers) { ... }
}

# services.yaml
App\MyCollection:
    arguments:
        - !tagged_iterator my.handler
One nice thing about this approach is that the iterable actually ends up connecting to the container via closures and will only instantiate individual handlers when they are actually accessed. So lazy handler creation. Also, there are some variations on how you can specify the key.
I might point out that typically you auto-tag your individual handlers with:
# services.yaml
services:
    _instanceof:
        App\MyHandlerInterface:
            tags: ['my.handler']
So no compiler pass needed.
The third approach is basically the same as the second except that handler services can be accessed individually by an index. This is useful when you need one out of all the possible services. And of course the service selected is only created when you ask for it.
class MyCollection {
    public function __construct(ServiceLocator $locator) {
        $this->locator = $locator;
    }
    public function doSomething($handlerKey) {
        /** @var MyHandlerInterface $handler */
        $handler = $this->locator->get($handlerKey);
    }
}

# services.yaml
App\MyCollection:
    arguments: [!tagged_locator { tag: 'app.handler', index_by: 'key' }]
I should point out that in all these cases, the code does not actually know the class of your handler service. Hence the @var comment to keep the IDE happy.
There is another approach which I like in which you make your own ServiceLocator and then specify the type of object being located. No need for a var comment. Something like:
class MyHandlerLocator extends ServiceLocator
{
    public function get($id): MyHandlerInterface
    {
        return parent::get($id);
    }
}
The only way I have been able to get this approach to work is a compiler pass. I won't post the code here as it is somewhat outside the scope of the question. But in exchange for a few lines of pass code you get a nice clean custom locator which can also pick up handlers from other bundles.

ASP.Net MVC 6: Recursive Dependency Injection

Still exploring the new ASP.NET MVC 6, now with built-in DI!
No problem so far, I can just inject my handlers (I don't like the term "service", since to me that implies a platform-neutral interface):
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry(Configuration);
services.Configure<Model.Meta.AppSettings>(Configuration.GetSection("AppSettings"));
services.AddSingleton(typeof(Logic.UserEndPointConfigurationHandler));
services.AddSingleton(typeof(Logic.NetworkHandler));
services.AddMvc();
}
Works fine; the strongly typed configuration object "AppSettings" also works perfectly.
Injection into the controllers works as well.
But now the collapse: I separated my data access from the handlers, and obviously I'd like to inject those as well:
public class UserEndPointConfigurationHandler
{
    private readonly DataAccess.UserEndPointAccess _access;

    public UserEndPointConfigurationHandler(DataAccess.UserEndPointAccess access)
    {
        _access = access;
    }
}
But bam, UserEndPointAccess can't be resolved. So it seems that even if I directly ask DI for a class with a parameterless constructor, I need to register it. For this case, sure, I could introduce an interface and register it, but what does that mean for the internal helper classes I also inject?
According to the docs (http://docs.asp.net/en/latest/fundamentals/dependency-injection.html#recommendations) and the examples I found, everyone only seems to communicate between controllers and some repositories; no business layer and no classes at different abstraction levels across assemblies.
Is the Microsoft DI approach something totally different from the good ol' Unity one, where I can really decouple as finely as I'd like?
Thanks in advance.
Matthias
Edit @Nightowl: I'm adding my answer here, since it's a bit longer.
First of all, Unity automatically creates instances if I request a concrete type. This lets me inject both types I register and types I don't need to register, like helper classes, etc. This combination allows me to use DI everywhere.
Also, in your example I'd need to know about the data access layer in the web GUI, which is quite tightly coupled. I know there are solutions for this via reflection, but I hoped Microsoft had done something on this topic; that would probably have been too big a change, though.
Unity also allows storing instances, or instructions on how to create them, another big feature which is missing at the moment.
Probably I'm just too spoiled by what refined DI libraries do (perhaps they also do too much), but at the moment the Microsoft implementation looks like a huge downgrade as far as I can tell.
MVC Core follows the composition root pattern, where object graphs are created based on a set of instructions for instantiating them. I think you are misinterpreting what the IServiceCollection is for. It does not store instances, it stores instructions on how to create instances. The instances aren't actually created until a constructor somewhere in the object graph requests one as a constructor parameter.
So, in short, the reason why your service (which you call UserEndPointAccess) is not being instantiated when you request it is that you have not configured the IServiceCollection with instructions on how to create it.
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry(Configuration);
services.Configure<Model.Meta.AppSettings>(Configuration.GetSection("AppSettings"));
services.AddSingleton(typeof(Logic.UserEndPointConfigurationHandler));
services.AddSingleton(typeof(Logic.NetworkHandler));
// Need a way to instantiate UserEndPointAccess via DI.
services.AddSingleton(typeof(DataAccess.UserEndPointAccess));
services.AddMvc();
}
So it seems that even if I directly ask DI for a class with a parameterless constructor, I need to register it.
If you are doing DI correctly, each service class will only have a single constructor. If you have more than one it is known as the bastard injection anti-pattern, which essentially means you are tightly coupling your class definition to other classes by adding references to them as foreign defaults.
And yes, you need to register every type you require (that is not part of MVC's default registration). It is like that in Unity as well.
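For completeness, a hedged sketch of the usual registration forms with Microsoft.Extensions.DependencyInjection; it mirrors the ConfigureServices method from the question, and the IUserEndPointAccess interface is hypothetical:

public void ConfigureServices(IServiceCollection services)
{
    // Concrete type, resolved as itself:
    services.AddSingleton<DataAccess.UserEndPointAccess>();

    // Interface mapped to an implementation, should you introduce one later (hypothetical interface):
    // services.AddSingleton<DataAccess.IUserEndPointAccess, DataAccess.UserEndPointAccess>();

    // Other lifetimes follow the same pattern:
    // services.AddScoped<Logic.NetworkHandler>();
    // services.AddTransient<Logic.UserEndPointConfigurationHandler>();

    services.AddMvc();
}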

How can I make Spring-MVC @PropertySources load first, before any other configuration

I have a spring-mvc application that is using Java configuration, not XML. There are @Configuration annotations sprinkled through several files. In particular, there is a @PropertySources annotation in one file, in a class that implements WebMvcConfigurerAdapter. There are two classes which contain an @Autowired Environment variable. One of these classes is itself a @Configuration class, and I would like it to have access to the fully-loaded Environment at the time it runs.
This isn't happening. When this code executes, the Environment is still null. I've tried reordering the @ComponentScan packages, tried moving the @PropertySources annotation, and nothing has helped me load the property sources in time.
I want this to happen first, before any other configuration.
What must I do to make it so?
UPDATE: After trying many things, including the Order annotation, I find the problem seems to be not so much that the @PropertySources are being loaded too late as that a class I have that is derived from org.springframework.security.web.context.AbstractSecurityWebApplicationInitializer is being loaded too soon. Nothing in my code even references this class, but Spring is somehow deciding that this, above all other classes, must be initialized first. No amount of messing around with @Order seems to change this. This is in spite of the javadocs, which indicate that the behavior I want is the default:
Caveats
Subclasses of AbstractDispatcherServletInitializer will register their
filters before any other Filter. This means that you will typically
want to ensure subclasses of AbstractDispatcherServletInitializer are
invoked first. This can be done by ensuring the Order or Ordered of
AbstractDispatcherServletInitializer are sooner than subclasses of
AbstractSecurityWebApplicationInitializer.
You can use an ApplicationContextInitializer, catch the moment when Spring Boot has prepared its environment, and "inject" your property source programmatically as you want.
If you have a ConfigurableApplicationContext then it provides a ConfigurableEnvironment, which contains the property sources as a list. If you want your PropertySource to rule above all the others, add it first. If you want to place it at a specific position, you can add it before another one, identified by its "name".
public class ConfigInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
    @Override
    public void initialize(ConfigurableApplicationContext ctx) {
        try {
            // rule over all the others:
            ctx.getEnvironment().getPropertySources()
                .addFirst(new ResourcePropertySource("file:/etc/conf/your_custom.properties"));
            // rule over application.properties but not over arguments or system properties etc.
            ctx.getEnvironment().getPropertySources()
                .addBefore("applicationConfig: [classpath:/application.properties]",
                    new ResourcePropertySource("classpath:your_custom.properties"));
            // names can be discovered by placing a breakpoint somewhere here and watching the
            // ctx.getEnvironment().getPropertySources().propertySourceList members;
            // each has a name ...
        } catch (IOException e) {
            // ResourcePropertySource declares a checked IOException
            throw new IllegalStateException(e);
        }
    }
}
You can bring this class into play by:
registering it in application.properties:
context.initializer.classes=a.b.c.ConfigInitializer
or from the application start:
new SpringApplicationBuilder(YourApplication.class)
    .initializers(new ConfigInitializer())
    .run(args);
By the way this is a proper way to inject more of your custom properties, like properties coming from a database or a different host etc...
Not that I am an expert in Java config but maybe:
Spring 4 utilizes this feature in its @PropertySource annotation. To remind you, @PropertySource annotation provides a mechanism for adding a source of name/value property pairs to Spring's Environment and it is used in conjunction with @Configuration classes.
Taken from: http://blog.codeleak.pl/2013/11/how-to-propertysource-annotations-in.html
Are you using @PropertySource or @PropertySources?
Have you tried the 'order' property for Spring bean initialisation?

How could Reflection not lead to code smells?

I come from low level languages - C++ is the highest level I program in.
Recently I came across Reflection, and I just cannot fathom how it could be used without code smells.
The idea of inspecting a class/method/function at runtime, in my opinion, points to a flaw in design; I think most problems Reflection (tries to) solve could be solved with either polymorphism or proper use of inheritance.
Am I wrong? Do I misunderstand the concept and utility of Reflection?
I am looking for a good explanation of when to utilize Reflection where other solutions will fail or be too cumbersome to implement as well as when NOT to use it.
Please enlighten this low-level lubber.
Reflection is most commonly used to circumvent the static type system; however, it also has some interesting use cases:
Let's write an ORM!
If you're familiar with NHibernate or most other ORMs, you write classes which map to tables in your database, something like this:
// used to hook into the ORM's innards
public class ActiveRecordBase
{
    public void Save() { /* ... */ }
}

public class User : ActiveRecordBase
{
    public int ID { get; set; }
    public string UserName { get; set; }
    // ...
}
How do you think the Save() method is written? Well, in most ORMs, the Save method doesn't know what fields are in derived classes, but it can access them using reflection.
It's wholly possible to have the same functionality in a type-safe manner, simply by requiring a user to override a method to copy fields into a datarow object, but that would result in lots of boilerplate code and bloat.
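As an illustration only (not how NHibernate or any real ORM actually does it), a reflection-based Save() might enumerate the public properties of whatever concrete type it's called on and map them to column values, assuming a made-up name-based convention:

using System;
using System.Collections.Generic;
using System.Reflection;

public class ActiveRecordBase
{
    public void Save()
    {
        // Enumerate the public properties of whatever concrete type this really is.
        var columns = new Dictionary<string, object>();
        foreach (PropertyInfo prop in GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            // Hypothetical convention: property name == column name.
            columns[prop.Name] = prop.GetValue(this, null);
        }
        // A real ORM would now build an INSERT/UPDATE from 'columns';
        // here we just print them to show what reflection collected.
        foreach (var kv in columns)
            Console.WriteLine($"{kv.Key} = {kv.Value}");
    }
}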
Stubs!
Rhino Mocks is a mocking framework. You pass an interface type into a method, and behind the scenes the framework will dynamically construct and instantiate a mock object implementing the interface.
Sure, a programmer could write the boilerplate code for the mock object by hand, but why would she want to if the framework will do it for her?
Metadata!
We can decorate methods with attributes (metadata), which can serve a variety of purposes:
[FilePermission(Context.AllAccess)] // writes things to a file
[Logging(LogMethod.None)] // logger doesn't log this method
[MethodAccessSecurity(Role="Admin")] // user must be in "Admin" group to invoke method
[Validation(ValidationType.NotNull, "reportName")] // throws exception if reportName is null
public void RunDailyReports(string reportName) { ... }
You need to reflect over the method to inspect the attributes. Most AOP frameworks for .NET use attributes for policy injection.
Sure, you can write the same sort of code inline, but this style is more declarative.
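As a hedged sketch of the reflective half of this, using a made-up LoggingAttribute rather than any real AOP framework's attribute:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class LoggingAttribute : Attribute
{
    public bool Enabled { get; set; }
}

public class Reports
{
    [Logging(Enabled = false)]
    public void RunDailyReports(string reportName) { /* ... */ }
}

class AttributeDemo
{
    static void Main()
    {
        MethodInfo method = typeof(Reports).GetMethod("RunDailyReports");
        // Reflect over the method to find the metadata attached to it.
        foreach (LoggingAttribute attr in method.GetCustomAttributes(typeof(LoggingAttribute), false))
        {
            Console.WriteLine("Logging enabled: " + attr.Enabled);
        }
    }
}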
Let's make a dependency framework!
Many IoC containers require some degree of reflection to run properly. For example:
public class FileValidator
{
    public FileValidator(ILogger logger) { ... }
}

// client code
var validator = IoC.Resolve<FileValidator>();
Our IoC container will instantiate a file validator and pass an appropriate implementation of ILogger into the constructor. Which implementation? That depends on how it's implemented.
Let's say that I gave the name of the assembly and class in a configuration file. The language needs to read the name of the class as a string and use reflection to instantiate it.
Unless we know the implementation at compile time, there is no type-safe way to instantiate a class based on its name.
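For example, here's a bare-bones sketch of instantiating a type whose name only arrives at runtime; the configuration-file scenario is simulated with a hard-coded BCL type name so the sample actually runs:

using System;

class IoCDemo
{
    static void Main()
    {
        // Imagine this string came from a configuration file; a real BCL type is used
        // here so the sketch runs as-is.
        string typeName = "System.Text.StringBuilder";
        Type configuredType = Type.GetType(typeName, throwOnError: true);

        // No compile-time knowledge of the type: reflection creates the instance.
        object instance = Activator.CreateInstance(configuredType);
        Console.WriteLine("Created: " + instance.GetType().FullName);
    }
}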
Late Binding / Duck Typing
There are all kinds of reasons why you'd want to read the properties of an object at runtime. I'd pick logging as the simplest use case: let's say you were writing a logger which accepts any object and spits out all of its properties to a file.
public static void Log(string msg, object state) { ... }
You could override the Log method for all possible static types, or you could just use reflection to read the properties instead.
Some languages like OCaml and Scala support statically-checked duck-typing (called structural typing), but sometimes you just don't have compile-time knowledge of an objects interface.
Or, as Java programmers know, sometimes the type system will get in your way and require you to write all kinds of boilerplate code. There's a well-known article which describes how many design patterns are simplified with dynamic typing.
Occasionally circumventing the type system allows you to refactor your code down much further than is possible with static types, resulting in a little bit cleaner code (preferably hidden behind a programmer friendly API :) ). Many modern static languages are adopting the golden rule "static typing where possible, dynamic typing where necessary", allowing users to switch between static and dynamic code.
Projects such as hibernate (O/R mapping) and StructureMap (dependency injection) would be impossible without Reflection. How would one solve these with polymorphism alone?
What makes these problems so difficult to solve any other way is that the libraries don't directly know anything about your class hierarchy - they can't. And yet they need to know the structure of your classes in order to - for example - map an arbitrary row of data from a database to a property in your class using only the name of the field and the name of your property.
Reflection is particularly useful for mapping problems. The idea of convention over code is becoming more and more popular and you need some type of Reflection to do it.
In .NET 3.5+ you have an alternative, which is to use expression trees. These are strongly-typed, and many problems that were classically solved using Reflection have been re-implemented using lambdas and expression trees (see Fluent NHibernate, Ninject). But keep in mind that not every language supports these kinds of constructs; when they're not available, you're basically stuck with Reflection.
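As a small, hedged illustration of the expression-tree style (the User class is invented for the example): the caller writes a strongly typed lambda, and the library pulls the property name out of the expression instead of taking a magic string.

using System;
using System.Linq.Expressions;

public class User
{
    public string UserName { get; set; }
}

static class Mapping
{
    // The caller writes Map<User>(u => u.UserName); no magic strings involved.
    public static string Map<T>(Expression<Func<T, object>> property)
    {
        Expression body = property.Body;
        // Value-type properties get wrapped in a Convert node; unwrap it.
        if (body is UnaryExpression unary)
            body = unary.Operand;
        var member = (MemberExpression)body;
        return member.Member.Name;
    }
}

class ExpressionDemo
{
    static void Main()
    {
        Console.WriteLine(Mapping.Map<User>(u => u.UserName)); // prints "UserName"
    }
}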
In a way (and I hope I'm not ruffling too many feathers with this), Reflection is very often used as a workaround/hack in Object-Oriented languages for features that come for free in Functional languages. As functional languages become more popular, and/or more OO languages start implementing more functional features (like C#), we will most likely start to see Reflection used less and less. But I suspect it will always still be around, for more conventional applications like plugins (as one of the other responders helpfully pointed out).
Actually, you are already using a reflective system every day: your computer.
Sure, instead of classes, methods and objects, it has programs and files. Programs create and modify files just like methods create and modify objects. But then programs are files themselves, and some programs even inspect or create other programs!
So, why is it so OK for a Linux install to be reflective that nobody even thinks about it, yet scary for OO programs?
I've seen good usages with custom attributes. Such as a database framework.
[DatabaseColumn("UserID")]
[PrimaryKey]
public Int32 UserID { get; set; }
Reflection can then be used to get further information about these fields. I'm pretty sure LINQ To SQL does something similar...
Other examples include test frameworks...
[Test]
public void TestSomething()
{
Assert.AreEqual(5, 10);
}
Without reflection you often have to repeat yourself a lot.
Consider these scenarios:
Run a set of methods e.g. the testXXX() methods in a test case
Generate a list of properties in a gui builder
Make your classes scriptable
Implement a serialization scheme
You can't typically do these things in C/C++ without repeating the whole list of affected methods and properties somewhere else in the code.
In fact C/C++ programmers often use an Interface description language to expose interfaces at runtime (providing a form of reflection).
Judicious use of reflection and annotations, combined with well-defined coding conventions, can avoid rampant code repetition and increase maintainability.
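For instance, the "make your classes scriptable" scenario boils down to invoking a method whose name only arrives at runtime; a minimal sketch (the Robot class is invented for illustration):

using System;
using System.Reflection;

public class Robot
{
    public void Wave() => Console.WriteLine("Robot waves");
    public void Spin() => Console.WriteLine("Robot spins");
}

class ScriptDemo
{
    static void Main()
    {
        var robot = new Robot();
        // Imagine these strings came from a user-written script file.
        foreach (string command in new[] { "Wave", "Spin", "Wave" })
        {
            MethodInfo method = robot.GetType().GetMethod(command);
            method.Invoke(robot, null); // dispatch by name, no switch statement needed
        }
    }
}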
I think that reflection is one of these mechanisms that are powerful but can be easily abused. You're given the tools to become a "power user" for very specific purposes, but it is not meant to replace proper object oriented design (just as object oriented design is not a solution for everything) or to be used lightly.
Because of the way Java is structured, you are already paying the price of representing your class hierarchy in memory at runtime (compare to C++ where you don't pay any costs unless you use things like virtual methods). There is therefore no cost rationale for blocking it fully.
Reflection is useful for things like serialization; tools like Hibernate or Digester can use it to determine how best to store objects automatically. Similarly, the JavaBeans model is based on names of methods (a questionable decision, I admit), but you need to be able to inspect what properties are available to build things like visual editors. In more recent versions of Java, reflection is what makes annotations useful: you can write tools and do metaprogramming using these entities that exist in the source code but remain accessible at runtime.
It is possible to go through an entire career as a Java programmer and never have to use reflection because the problems that you deal with don't require it. On the other hand, for certain problems, it is quite necessary.
As mentioned above, reflection is mostly used to implement code that needs to deal with arbitrary objects. ORM mappers, for instance, need to instantiate objects from user-defined classes and fill them with values from database rows. The simplest way to achieve this is through reflection.
Actually, you are partially right: reflection is often a code smell. Most of the time you work with your own classes and do not need reflection; if you know your types and use reflection anyway, you are probably sacrificing type safety, performance, readability and everything that's good in this world, needlessly. However, if you are writing libraries, frameworks or generic utilities, you will probably run into situations best handled with reflection.
This is in Java, which is what I'm familiar with. Other languages offer stuff that can be used to achieve the same goals, but in Java, reflection has clear applications for which it's the best (and sometimes, only) solution.
Unit testing software and frameworks like NUnit use reflection to get a list of tests to execute and then execute them. They find all the test suites in a module/assembly/binary (in C# these are represented by classes) and all the tests in those suites (in C# these are methods in a class). NUnit also allows you to mark a test with an expected exception in case you're testing for exception contracts.
Without reflection, you'd need to specify somehow what test suites are available and what tests are available in each suite. Also, things like exceptions would need to be tested manually. C++ unit testing frameworks I've seen have used macros to do this, but some things are still manual and this design is restrictive.
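A stripped-down sketch of that discovery step, assuming a hand-rolled TestAttribute rather than NUnit's real one:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class TestAttribute : Attribute { }

public class MathTests
{
    [Test]
    public void AdditionWorks()
    {
        if (2 + 2 != 4) throw new Exception("broken arithmetic");
    }
}

class Runner
{
    static void Main()
    {
        // Scan every type in this assembly for parameterless [Test] methods and run them.
        foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods())
            {
                if (method.GetCustomAttributes(typeof(TestAttribute), false).Length == 0)
                    continue;
                object suite = Activator.CreateInstance(type);
                try
                {
                    method.Invoke(suite, null);
                    Console.WriteLine($"PASS {type.Name}.{method.Name}");
                }
                catch (Exception ex)
                {
                    Console.WriteLine($"FAIL {type.Name}.{method.Name}: {ex.InnerException?.Message}");
                }
            }
        }
    }
}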
Paul Graham has a great essay that may say it best:
Programs that write programs? When would you ever want to do that? Not very often, if you think in Cobol. All the time, if you think in Lisp. It would be convenient here if I could give an example of a powerful macro, and say there! how about that? But if I did, it would just look like gibberish to someone who didn't know Lisp; there isn't room here to explain everything you'd need to know to understand what it meant. In Ansi Common Lisp I tried to move things along as fast as I could, and even so I didn't get to macros until page 160.
concluding with . . .
During the years we worked on Viaweb I read a lot of job descriptions. A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.
It is all about rapid development.
var myObject = ...; // something with quite a few properties
var props = new Dictionary<string, object>();
foreach (var prop in myObject.GetType().GetProperties())
{
    props.Add(prop.Name, prop.GetValue(myObject, null));
}
Plugins are a great example.
Tools are another example - inspector tools, build tools, etc.
I will give an example of a C# solution I was given when I started learning.
It contained classes marked with the [Exercise] attribute; each class contained methods which were not implemented (they threw NotImplementedException). The solution also had unit tests, which all failed.
The goal was to implement all the methods and make all the unit tests pass.
The solution also had a user interface which would read all classes marked with [Exercise] and use reflection to generate a user interface from them.
We were later asked to implement our own methods, and later still to understand how the user interface 'magically' changed to include all the new methods we implemented.
Extremely useful, but often not well understood.
The idea behind this was to be able to query any GUI object's properties and present them in a GUI so they could be customized and preconfigured. Its uses have since been extended and proven feasible.
It's very useful for dependency injection. You can explore loaded assemblies for types implementing a given interface and carrying a given attribute. Combined with proper configuration files, it proves to be a very powerful and clean way of adding new inherited classes without modifying the client code.
It's also useful if you are writing an editor that doesn't really care about the underlying model but rather about how the objects are structured directly, à la System.Windows.Forms.PropertyGrid.
Without reflection no plugin architecture will work!
A very simple example in Python. Suppose you have a class that has 3 methods:
class SomeClass(object):
    def methodA(self):
        pass  # some code
    def methodB(self):
        pass  # some code
    def methodC(self):
        pass  # some code
Now, in some other class you want to decorate those methods with some additional behaviour (i.e. you want that class to mimic SomeClass, but with an additional functionality).
This is as simple as:
class SomeOtherClass(object):
    def __init__(self, someclass_instance):
        self.someclass_instance = someclass_instance
    def __getattr__(self, attr_name):
        # do something nice, then hand back the method the caller requested
        return getattr(self.someclass_instance, attr_name)
With reflection, you can write a small amount of domain independent code that doesn't need to change often versus writing a lot more domain dependent code that needs to change more frequently (such as when properties are added/removed). With established conventions in your project, you can perform common functions based on the presence of certain properties, attributes, etc. Data transformation of objects between different domains is one example where reflection really comes in handy.
Or, as a simpler example within a domain: you want to transform data from the database into data objects without needing to modify the transformation code when properties change, as long as conventions are maintained (in this case matching property names and a specific attribute):
///--------------------------------------------------------------------------------
/// <summary>Transform data from the input data reader into the output object. Each
/// element to be transformed must have the DataElement attribute associated with
/// it.</summary>
///
/// <param name="inputReader">The database reader with the input data.</param>
/// <param name="outputObject">The output object to be populated with the input data.</param>
/// <param name="filterElements">Data elements to filter out of the transformation.</param>
///--------------------------------------------------------------------------------
public static void TransformDataFromDbReader(DbDataReader inputReader, IDataObject outputObject, NameObjectCollection filterElements)
{
    try
    {
        // add all public properties with the DataElement attribute to the output object
        foreach (PropertyInfo loopInfo in outputObject.GetType().GetProperties())
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of property to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.PropertyType.UnderlyingSystemType.FullName, false), null);
                            }
                        }
                    }
                }
            }
        }
        // add all fields with the DataElement attribute to the output object
        foreach (FieldInfo loopInfo in outputObject.GetType().GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance))
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of field to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.FieldType.UnderlyingSystemType.FullName, false));
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        bool reThrow = ExceptionHandler.HandleException(ex);
        if (reThrow) throw;
    }
}
One usage not yet mentioned: while reflection is generally thought of as "slow", it's possible to use Reflection to improve the efficiency of code which uses interfaces like IEquatable<T> when they exist, and uses other means of checking equality when they do not. In the absence of reflection, code that wanted to test whether two objects were equal would have to either use Object.Equals(Object) or else check at run-time whether an object implemented IEquatable<T> and, if so, cast the object to that interface. In either case, if the type of thing being compared was a value type, at least one boxing operation would be required. Using Reflection makes it possible to have a class EqualityComparer<T> automatically construct a type-specific implementation of IEqualityComparer<T> for any particular type T, with that implementation using IEquatable<T> if it is defined, or using Object.Equals(Object) if it is not. The first time one uses EqualityComparer<T>.Default for any particular type T, the system will have to go through more work than would be required to test, once, whether a particular type implements IEquatable<T>. On the other hand, once that work is done, no more run-time type checking will be required since the system will have produced a custom-built implementation of EqualityComparer<T> for the type in question.
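As a rough illustration of the payoff (not the exact BCL internals), the cached comparer avoids boxing for value types:

using System;
using System.Collections.Generic;

class ComparerDemo
{
    static void Main()
    {
        // EqualityComparer<int>.Default resolves, once, to an implementation that
        // calls IEquatable<int>.Equals directly, so no boxing happens per comparison.
        EqualityComparer<int> comparer = EqualityComparer<int>.Default;
        Console.WriteLine(comparer.Equals(3, 3));   // True
        Console.WriteLine(comparer.Equals(3, 4));   // False

        // Object.Equals(object) on the same values would box both ints first.
        object a = 3, b = 3;
        Console.WriteLine(a.Equals(b));             // True, but with boxing
    }
}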
