One file per function...really? - apache-flex

I am relatively new to Flex/ActionScript, but I have been using a pattern of creating one file per function in my util package - with the name of the file being the same as the name of the function. Like if the file was convertTime.as:
package util {
    public function convertTime(s:String):Date {
        ...
    }
}
This way I can import the function readily by doing:
import util.convertTime;
...
convertTime(...);
I like this way better than importing a class object and then calling the static methods hanging off of it, like this:
import util.Util;
...
Util.convertTime(...);
But, the more I do this, the more files I'll end up with, and it also seems a bit wasteful/silly to put only one function into a file, especially when the function is small. Is there another alternative to this? Or are these two options the only ones I have?
Update: after some research, I've also posted my own answer below.

Yes, these are your two main options for utility libraries. We actually use both of these approaches for our generic utility functions. For a very small set of functions that we feel should actually be builtins (such as map()), we put one function per file, so that we can use the function directly.
For more obscure/specialized utility functions, we don't want to pollute our global namespace, so we make them static functions on a utility class. This way, we're sure that when someone references ArrayUtils.intersect(), we know what library intersect() came from and roughly what it's for (it intersects two arrays).
I would recommend going with the latter route as much as possible, unless you have a function that a) you use very frequently and b) is really obvious what it does at a glance.

I came across some other alternatives after all and thought I'd share them here.
Alternative 1 - use inheritance
This is probably an obvious answer, but it's limited. You would put your static methods into a parent class and inherit from it to get them in the subclasses. This only works with classes. Also, because ActionScript has single inheritance, you can only inherit from one class.
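A minimal sketch of this approach, with hypothetical class names; it relies on the fact that a superclass's static members are on the subclass's scope chain, so they can be referenced unqualified:
// util/TimeUtilBase.as - hypothetical base class holding the statics
package util {
    public class TimeUtilBase {
        protected static function convertTime(s:String):Date {
            return new Date(Date.parse(s));
        }
    }
}

// a subclass may call convertTime() without a class-name prefix
package model {
    import util.TimeUtilBase;
    public class TimeSheet extends TimeUtilBase {
        public function parse(s:String):Date {
            return convertTime(s);
        }
    }
}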
Alternative 2 - Alias the methods
You still write utility functions as static methods hanging off util classes, but you alias them so you can access them with a shorter name, ex:
import mx.binding.utils.BindingUtils;
var bind:Function = BindingUtils.bindProperty;
Now you can just call
bind(...);
rather than the lengthy
BindingUtils.bindProperty(...);
You can do this within class scope and function scope, but not package scope - because apparently you can only have one externally visible definition per file in a package. If you do this in class scope, you will want to make sure the alias doesn't conflict with your other class attribute names.
Alternative 3 - use include
As described in this flexonrails blog post, you can use include to simulate a mixin in ActionScript. An include differs from an import in that it simply copies the entire contents of the included file and pastes them at the include site. So it does no namespace handling at all: you cannot reference an included name by its full path afterwards the way you can with imports, and if you have conflicting names, you are on your own. Also, unlike import, it creates a separate copy of the same code at every include site. But what you can do with this is put any number of functions in a file and include them into class or function scope in another file. Ex:
// util/time_utils.as
function convertTime(..){
    ...
}
function convertDate(..){
    ...
}
To include:
include 'util/time_utils.as'; // this is always a relative path
...
convertTime(...);

# an0nym0usc0ward
OOP is simply the method of consolidating like functions or properties into an object that can be imported and used. It is nothing more than a form of organization for your code. ALL code executes procedurally in the processor in the end; OOP is just organization of sources. What he is doing here may not be OOP as you learn it from a book, but it does the exact same thing in the end, and should be treated with the same respect.
Anyone that truly understands OOP wouldn't be naive enough to think that the approved and documented form of OOP is the only possible way to object orient your code.
EDIT: This was supposed to be a comment response to an0nym0usc0ward's rude comment telling him to learn OOP. But I guess I typed it in the wrong box :)

Related

Defining and implementing interfaces in R

I'm interested in defining and inheriting from interfaces in R. By interface, I mean OOP interfaces. I know R supports class extension. This link http://adv-r.had.co.nz/OO-essentials.html gives an example of extending a reference class in R. It defines a NoOverdraftAccount reference class that extends an Account reference class.
Instead of extending the Account reference class, I'd like to be able to define an account interface, IAccount. I'd like NoOverDraftAccount to implement IAccount, such that:
NoOverDraftAccount must implement all methods in IAccount.
NoOverDraftAccount cannot declare any new public methods not already declared in IAccount.
NoOverDraftAccount can declare private methods and properties.
What's the best way to achieve this?
The closest I've come to an answer was from the question Multiple inheritance for R6 classes. But, the question wasn't focused on interfaces.
Thanks for your time.
I don't think "declarations" make much sense in an interpreted language like R. As there's no compile step, there's no way to test whether something actually conforms to a declared interface without running a function on the class, something like does_class_follow(class, interface), at some point.
So I think you have to start from scratch - you need to define an interface specification class and write the does_class_follow function.
My first thought was that a class would have to know what interface(s) it conformed to so that the test could introspect this, but perhaps that's wrong and you should have a file of interface definitions and pseudo-declarations that tested everything.
For example, have some file interfaces.R that looks like:
IAccount = Interface(
    public = list("deposit", "withdraw")
)
Implements(Account, IAccount)
Implements(NoOverDraftAccount, IAccount)
Then when the package is loaded those Implements calls would run and test the classes against that specification of what an Account interface is. Whether it's better to test at load time or to put these sorts of things in the ./test/ folder and check them at test time using testthat or another test system is an open question...
As you may be aware you'll have to implement this separately for all the OO systems in R that you want to use - S3, S4, R5, ReferenceClasses, R6, proto, R.oo and all the other ones I've forgotten...
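To make that concrete, here is a minimal sketch for R6 generators only; Interface(), Implements(), and does_class_follow() are hypothetical helpers written for this answer, not functions from any package:
# interfaces.R - hypothetical helpers, R6 generators only
library(R6)

Interface <- function(public) {
    structure(list(public = public), class = "Interface")
}

does_class_follow <- function(class, interface) {
    wanted <- unlist(interface$public)
    # R6 generators expose their public members in $public_methods
    declared <- names(class$public_methods)
    missing <- setdiff(wanted, declared)
    extra <- setdiff(setdiff(declared, wanted), c("initialize", "clone"))
    if (length(missing) > 0)
        stop("missing methods: ", paste(missing, collapse = ", "))
    if (length(extra) > 0)
        stop("undeclared public methods: ", paste(extra, collapse = ", "))
    invisible(TRUE)
}

Implements <- function(class, interface) does_class_follow(class, interface)

IAccount <- Interface(public = list("deposit", "withdraw"))
NoOverDraftAccount <- R6Class("NoOverDraftAccount",
    public = list(
        deposit = function(x) invisible(x),
        withdraw = function(x) invisible(x)
    )
)
Implements(NoOverDraftAccount, IAccount)  # silent when the class conforms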

How to adjust Capybara finders on a site using css-modules?

In our automated tests, a typical line in our code might look something like:
find('.edit-icon').click
We're on our way to using css-modules in our project and I've been warned that class names may change dramatically. A pretty zany example is this site that uses emojis in its class names (when you inspect the page):
css modules by Glenn Maddern
How might I best prepare for a change this drastic? I imagine many of our specs breaking, but I am a little worried about being unable to write tests at all with this new technology in our project.
Using custom Capybara selectors you can abstract away the actual CSS query being done and move it to one location. In your comments you mentioned needing to match a class attribute that begins with a passed-in value.
Capybara.add_selector(:class_starts_with) do
  css { |locator| "[class^=\"#{locator}\"]" }
end
would do that, and then it can be called as
find(:class_starts_with, 'something')
or if you set Capybara.default_selector = :class_starts_with it would just be
find('something')
Rather than changing default_selector another option would be to just define a helper method like find_by_class_start or something that just calls find with :class_starts_with passed (how Capybara's #find_field, etc work).
Also note the custom selector above would only really work if a single class name is set; if multiple are expected, you could change the selector to "[class^=\"#{locator}\"], [class*=\" #{locator}\"]"
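For the helper-method route mentioned above, a minimal sketch (find_by_class_start is a hypothetical name, mirroring how Capybara's #find_field and friends delegate to find):
def find_by_class_start(locator, **options)
  find(:class_starts_with, locator, **options)
end

find_by_class_start('edit-icon').click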

Defining Symfony Services to maximize maintainability

I'm working on a big domain, for which maintainability is very important.
There are these general workers called ExcelHandlers that implement ExcelHandlerInterface (more on the interface in the ideas section) and basically get an UploadedFile as their input, upload it wherever they want, and return the read data as an associative array. Now I have created this base class ExcelFileHandler which does all of these tasks for all Excel files given two arguments:
1. the directory to upload the file to
2. the mapping of the Excel columns to the keys of the associative array
Some ExcelHandlers might have to extend the ExcelFileHandler and do some more processing, in order to get the associative array of data.
The UploadedFile is always passed to the ExcelHandler from the controller.
Now here is the question. Given the generic structure of the ExcelFileHandler, how should I define services for specific ExcelHandlers, given that some differ from the original one only in the upload directory and the mapping array?
My Ideas:
1. The first approach involves giving the directory and the mapping as function arguments to ExcelHandlerInterface::handle. This makes the prototype something like handle(UploadedFile $file, array $mapping, $dir); $mapping and $dir are given to the function by the controller, which has the parameters as constructor injections.
2.1 Define the prototype of handle to be handle(UploadedFile $file). This requires the ExcelHandlers to have knowledge of $dir and $mapping; $dir will always be injected from the constructor.
2.1.1 For each individual ExcelHandler in the application, define a separate class, e.g. UserExcelHandler, ProductExcelHandler, ..., which extends the ExcelFileHandler. That again leaves us with two choices.
2.1.1.1 Inject $mapping from outside, e.g.:
// in the child class
public function __construct($dir, $mapping){
    parent::__construct($dir, $mapping);
}
2.1.1.2 Define $mapping in the constructor of the child class, e.g.:
// in the child class
public function __construct($dir){
    $mapping = array(0 => 'name', 1 => 'gender');
    parent::__construct($dir, $mapping);
}
2.1.2 Don't create a class for each separate ExcelHandler; instead, define the ExcelFileHandler as an abstract service and decorate it with the right parameters to get a concrete ExcelHandler service with the desired functionality. Obviously ExcelHandlers with custom logic must be defined separately, and to keep the code base uniform, $mapping will always be injected from the container in this case.
In your experience, what other paths can I take and which ones yield better results in the long term?
First of all, it seems as if you've already put two separate things into one.
Uploading a file and reading its contents are two separate concerns, which can change separately (like you said, $directory and $mapping can change case by case). Thus I would suggest using separate services for uploading and reading the file.
See Single responsibility principle for more information.
Furthermore, due to very similar reasons, I would not recommend base classes at all - I'd prefer composition over inheritance.
Imagine that you have 2 methods in your base class: upload, which stores file to a local directory, and parse, which reads excel file and maps columns to some data structure.
If you need to store file in a remote storage (for example FTP), you need to override upload method. Let's call this service1.
If you need to parse file differently, for example combining data from 2 sheets, you need to override parse method. Let's call this service2.
If you need to override both of these methods, while still being able to get service1 and service2, you're stuck, as you'll need to copy-and-paste the code. There's no easy way to use already written functionality from (1) and (2).
In comparison, if you have interface with upload method and interface with parse method, you can inject those 2 separate services where you need them as you need them. You can mix any implementations of those already available. All of them are already tested and you do not need to write any code - just to configure the services.
One more thing to mention - there is absolutely no need to create (and name) classes by their usage. Name them by their behaviour. For example, if you have an ExcelParser which takes $mapping as a constructor argument, there is no need to call it UserExcelParser if the code itself has nothing to do with users. If you need to parse data from several sheets, just create SheetAwareExcelParser etc., not ProductExcelParser. This way you can reuse the code, and correct naming makes the code easier to understand.
In practice, I've seen cases where a function or class was named by its usage and then simply reused somewhere else with no renaming or refactoring. Those cases are really not what you're looking for.
Service names (in other words, concrete objects, not classes) can of course be named by their purpose. You just configure them with whatever functionality is required (single vs. separate sheets etc.)
To summarize, I would use 2.1.2 above all other of your given options. I would inject $dir and $mapping via DI in any case, as these do not change in runtime - these are configuration.
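To make the composition approach concrete, a minimal sketch; the interface names and the wiring are illustrative, not from the question's code base:
use Symfony\Component\HttpFoundation\File\UploadedFile;

// two separate concerns behind two interfaces
interface FileUploaderInterface
{
    /** @return string path of the stored file */
    public function upload(UploadedFile $file);
}

interface ExcelParserInterface
{
    /** @return array associative rows read from the file */
    public function parse($path);
}

// the handler composes both services; $dir and $mapping live in the
// concrete uploader/parser implementations and are injected via DI
class ExcelHandler implements ExcelHandlerInterface
{
    private $uploader;
    private $parser;

    public function __construct(FileUploaderInterface $uploader, ExcelParserInterface $parser)
    {
        $this->uploader = $uploader;
        $this->parser = $parser;
    }

    public function handle(UploadedFile $file)
    {
        return $this->parser->parse($this->uploader->upload($file));
    }
}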

Utility class or Common class in asp.net

Does anyone know of a class that has the common functions we generally use while developing web applications? I have no idea what you may call it; maybe a utility class or common-function class. Just for reference, such a class could have some common functions like:
Generate a random number
Get the file path
Get a concatenated string
Check whether a string is null or empty
Find controls
The idea is to have a collection of the functions we generally use while developing ASP.NET applications.
No idea what you are really asking, but there are already ready-made methods for these tasks in various library classes:
Random.Next() or RNGCryptoServiceProvider.GetBytes()
Path.GetDirectoryName()
String.Concat() or simply x + y
String.IsNullOrEmpty()
Control.FindControl()
Gotta love the intarwebs - An endless stream of people eager to criticize your style while completely failing to address the obvious "toy" question. ;)
Chris, you want to inherit all your individual page classes from a common base class, which itself inherits from Page. That will let you put all your shared functionality in a single place, without needing to duplicate it in every page.
In your example it looks like a utility class - a set of static functions.
But I think you should group them into a few different classes rather than putting all methods in one class - you shouldn't mix UI functions (5) with string functions (3, 4), IO functions (2) and math (1).
As Mormegil said, those functions exist in the framework, but if you want to create your own implementations then I think that for some of your functions the best solution is to create extension methods.
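For example, a minimal sketch of the extension-method suggestion (StringExtensions is an illustrative name, not a framework class):
public static class StringExtensions
{
    // reads fluently at the call site: userName.IsNullOrEmpty()
    public static bool IsNullOrEmpty(this string value)
    {
        return string.IsNullOrEmpty(value);
    }
}
Since extension methods compile down to ordinary static calls, this one even works when the receiver is null.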

How could Reflection not lead to code smells?

I come from low level languages - C++ is the highest level I program in.
Recently I came across Reflection, and I just cannot fathom how it could be used without code smells.
The idea of inspecting a class/method/function at runtime, in my opinion, points to a flaw in design - I think most problems Reflection (tries to) solve could be solved with either polymorphism or proper use of inheritance.
Am I wrong? Do I misunderstand the concept and utility of Reflection?
I am looking for a good explanation of when to utilize Reflection where other solutions will fail or be too cumbersome to implement as well as when NOT to use it.
Please enlighten this low-level lubber.
Reflection is most commonly used to circumvent the static type system; however, it also has some interesting use cases:
Let's write an ORM!
If you're familiar with NHibernate or most other ORMs, you write classes which map to tables in your database, something like this:
// used to hook into the ORM's innards
public class ActiveRecordBase
{
    public void Save() { /* persists this object's fields; see below */ }
}
public class User : ActiveRecordBase
{
    public int ID { get; set; }
    public string UserName { get; set; }
    // ...
}
How do you think the Save() method is written? Well, in most ORMs, the Save method doesn't know what fields are in derived classes, but it can access them using reflection.
It's wholly possible to have the same functionality in a type-safe manner, simply by requiring the user to override a method that copies fields into a data row object, but that would result in lots of boilerplate code and bloat.
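For illustration, a hedged sketch of what the body of that Save() might do (not NHibernate's actual code):
public void Save()
{
    // enumerate the concrete class's public properties via reflection
    foreach (System.Reflection.PropertyInfo prop in GetType().GetProperties())
    {
        string column = prop.Name;
        object value = prop.GetValue(this, null);
        // ... build an INSERT/UPDATE statement from the (column, value) pairs
    }
}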
Stubs!
Rhino Mocks is a mocking framework. You pass an interface type into a method, and behind the scenes the framework will dynamically construct and instantiate a mock object implementing the interface.
Sure, a programmer could write the boilerplate code for the mock object by hand, but why would she want to if the framework will do it for her?
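A short usage sketch from memory, using Rhino Mocks' AAA syntax (ICalculator is a hypothetical interface, and the exact API may differ by version):
using Rhino.Mocks;

// the framework emits a dynamic type implementing ICalculator at runtime
ICalculator calc = MockRepository.GenerateStub<ICalculator>();
calc.Stub(c => c.Add(2, 3)).Return(5); // teach the stub one behaviour
int five = calc.Add(2, 3);             // served by the generated type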
Metadata!
We can decorate methods with attributes (metadata), which can serve a variety of purposes:
[FilePermission(Context.AllAccess)] // writes things to a file
[Logging(LogMethod.None)] // logger doesn't log this method
[MethodAccessSecurity(Role="Admin")] // user must be in "Admin" group to invoke method
[Validation(ValidationType.NotNull, "reportName")] // throws exception if reportName is null
public void RunDailyReports(string reportName) { ... }
You need to reflect over the method to inspect the attributes. Most AOP frameworks for .NET use attributes for policy injection.
Sure, you can write the same sort of code inline, but this style is more declarative.
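A hedged sketch of the inspection side (ReportRunner and LoggingAttribute are stand-ins for the classes behind the attributes above):
using System.Reflection;

// ReportRunner is a stand-in for whatever class declares RunDailyReports
MethodInfo method = typeof(ReportRunner).GetMethod("RunDailyReports");
object[] attrs = method.GetCustomAttributes(typeof(LoggingAttribute), true);
if (attrs.Length > 0)
{
    LoggingAttribute logging = (LoggingAttribute)attrs[0];
    // honour the declared policy, e.g. skip logging for LogMethod.None
}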
Let's make a dependency framework!
Many IoC containers require some degree of reflection to run properly. For example:
public class FileValidator
{
    public FileValidator(ILogger logger) { ... }
}
// client code
var validator = IoC.Resolve<FileValidator>();
Our IoC container will instantiate a file validator and pass an appropriate implementation of ILogger into the constructor. Which implementation? That depends on how it's implemented.
Let's say that I gave the name of the assembly and class in a configuration file. The container needs to read the name of the class as a string and use reflection to instantiate it.
Unless we know the implementation at compile time, there is no type-safe way to instantiate a class based on its name.
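A minimal sketch of that last step (the config key "loggerType" and type FileLogger are hypothetical):
using System;
using System.Configuration;

// app.config: <add key="loggerType" value="MyApp.FileLogger, MyApp" />
string typeName = ConfigurationManager.AppSettings["loggerType"];
Type loggerType = Type.GetType(typeName);
ILogger logger = (ILogger)Activator.CreateInstance(loggerType);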
Late Binding / Duck Typing
There are all kinds of reasons why you'd want to read the properties of an object at runtime. I'd pick logging as the simplest use case - let's say you were writing a logger which accepts any object and spits out all of its properties to a file.
public static void Log(string msg, object state) { ... }
You could override the Log method for all possible static types, or you could just use reflection to read the properties instead.
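A hedged sketch of the reflective version of that Log method:
public static void Log(string msg, object state)
{
    // dump every public property of whatever object we were handed
    foreach (var prop in state.GetType().GetProperties())
        Console.WriteLine("{0}: {1} = {2}", msg, prop.Name, prop.GetValue(state, null));
}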
Some languages like OCaml and Scala support statically checked duck typing (called structural typing), but sometimes you just don't have compile-time knowledge of an object's interface.
Or, as Java programmers know, sometimes the type system will get in your way and require you to write all kinds of boilerplate code. There's a well-known article which describes how many design patterns are simplified with dynamic typing.
Occasionally circumventing the type system allows you to refactor your code down much further than is possible with static types, resulting in slightly cleaner code (preferably hidden behind a programmer-friendly API :) ). Many modern static languages are adopting the golden rule "static typing where possible, dynamic typing where necessary", allowing users to switch between static and dynamic code.
Projects such as Hibernate (O/R mapping) and StructureMap (dependency injection) would be impossible without Reflection. How would one solve these problems with polymorphism alone?
What makes these problems so difficult to solve any other way is that the libraries don't directly know anything about your class hierarchy - they can't. And yet they need to know the structure of your classes in order to - for example - map an arbitrary row of data from a database to a property in your class using only the name of the field and the name of your property.
Reflection is particularly useful for mapping problems. The idea of convention over configuration is becoming more and more popular, and you need some type of Reflection to do it.
In .NET 3.5+ you have an alternative, which is to use expression trees. These are strongly-typed, and many problems that were classically solved using Reflection have been re-implemented using lambdas and expression trees (see Fluent NHibernate, Ninject). But keep in mind that not every language supports these kinds of constructs; when they're not available, you're basically stuck with Reflection.
In a way (and I hope I'm not ruffling too many feathers with this), Reflection is very often used as a workaround/hack in Object-Oriented languages for features that come for free in Functional languages. As functional languages become more popular, and/or more OO languages start implementing more functional features (like C#), we will most likely start to see Reflection used less and less. But I suspect it will always still be around, for more conventional applications like plugins (as one of the other responders helpfully pointed out).
Actually, you are already using a reflective system everyday: your computer.
Sure, instead of classes, methods and objects, it has programs and files. Programs create and modify files just like methods create and modify objects. But then programs are files themselves, and some programs even inspect or create other programs!
So why is it so acceptable for a Linux install to be reflective that nobody even thinks about it, yet so scary for OO programs?
I've seen good usages with custom attributes, such as in a database framework.
[DatabaseColumn("UserID")]
[PrimaryKey]
public Int32 UserID { get; set; }
Reflection can then be used to get further information about these fields. I'm pretty sure LINQ To SQL does something similar...
Other examples include test frameworks...
[Test]
public void TestSomething()
{
    Assert.AreEqual(5, 10);
}
Without reflection you often have to repeat yourself a lot.
Consider these scenarios:
Run a set of methods e.g. the testXXX() methods in a test case
Generate a list of properties in a gui builder
Make your classes scriptable
Implement a serialization scheme
You can't typically do these things in C/C++ without repeating the whole list of affected methods and properties somewhere else in the code.
In fact C/C++ programmers often use an Interface description language to expose interfaces at runtime (providing a form of reflection).
Judicious use of reflection and annotations, combined with well-defined coding conventions, can avoid rampant code repetition and increase maintainability.
I think that reflection is one of these mechanisms that are powerful but can be easily abused. You're given the tools to become a "power user" for very specific purposes, but it is not meant to replace proper object oriented design (just as object oriented design is not a solution for everything) or to be used lightly.
Because of the way Java is structured, you are already paying the price of representing your class hierarchy in memory at runtime (compared to C++, where you don't pay any costs unless you use things like virtual methods). There is therefore no cost rationale for blocking it fully.
Reflection is useful for things like serialization - tools like Hibernate or Digester can use it to determine how best to store objects automatically. Similarly, the JavaBeans model is based on the names of methods (a questionable decision, I admit), but you need to be able to inspect what properties are available to build things like visual editors. In more recent versions of Java, reflection is what makes annotations useful - you can write tools and do metaprogramming using these entities that exist in the source code but are accessible at runtime.
It is possible to go through an entire career as a Java programmer and never have to use reflection because the problems that you deal with don't require it. On the other hand, for certain problems, it is quite necessary.
As mentioned above, reflection is mostly used to implement code that needs to deal with arbitrary objects. ORM mappers, for instance, need to instantiate objects from user-defined classes and fill them with values from database rows. The simplest way to achieve this is through reflection.
Actually, you are partially right: reflection is often a code smell. Most of the time you work with your own classes and do not need reflection - if you know your types and reach for reflection anyway, you are probably sacrificing type safety, performance, readability, and everything that's good in this world, needlessly. However, if you are writing libraries, frameworks or generic utilities, you will probably run into situations best handled with reflection.
This is in Java, which is what I'm familiar with. Other languages offer stuff that can be used to achieve the same goals, but in Java, reflection has clear applications for which it's the best (and sometimes, only) solution.
Unit testing frameworks like NUnit use reflection to get a list of tests to execute and then execute them. They find all the test suites in a module/assembly/binary (in C# these are represented by classes) and all the tests in those suites (in C# these are methods in a class). NUnit also allows you to mark a test with an expected exception in case you're testing for exception contracts.
Without reflection, you'd need to specify somehow what test suites are available and what tests are available in each suite. Also, things like exceptions would need to be tested manually. C++ unit testing frameworks I've seen have used macros to do this, but some things are still manual and this design is restrictive.
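A hedged sketch of how such a runner might discover and invoke [Test] methods (simplified; a real runner also handles fixtures, setup, and expected exceptions):
using System;
using System.Linq;
using System.Reflection;

Assembly assembly = Assembly.LoadFrom("MyTests.dll"); // path is illustrative
var tests = assembly.GetTypes()
    .SelectMany(t => t.GetMethods())
    .Where(m => m.GetCustomAttributes(typeof(TestAttribute), false).Any());

foreach (MethodInfo test in tests)
{
    object fixture = Activator.CreateInstance(test.DeclaringType);
    test.Invoke(fixture, null); // an exception here marks the test as failed
}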
Paul Graham has a great essay that may say it best:
Programs that write programs? When would you ever want to do that? Not very often, if you think in Cobol. All the time, if you think in Lisp. It would be convenient here if I could give an example of a powerful macro, and say there! how about that? But if I did, it would just look like gibberish to someone who didn't know Lisp; there isn't room here to explain everything you'd need to know to understand what it meant. In Ansi Common Lisp I tried to move things along as fast as I could, and even so I didn't get to macros until page 160.
concluding with...
During the years we worked on Viaweb I read a lot of job descriptions. A new competitor seemed to emerge out of the woodwork every month or so. The first thing I would do, after checking to see if they had a live online demo, was look at their job listings. After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.
It is all about rapid development.
var myObject = ...; // something with quite a few properties
var props = new Dictionary<string, object>();
foreach (var prop in myObject.GetType().GetProperties())
{
    props.Add(prop.Name, prop.GetValue(myObject, null));
}
Plugins are a great example.
Tools are another example - inspector tools, build tools, etc.
I will give an example from a C# solution I was given when I started learning.
It contained classes marked with the [Exercise] attribute; each class contained methods which were not implemented (throwing NotImplementedException). The solution also had unit tests, which all failed.
The goal was to implement all the methods and make all the unit tests pass.
The solution also had a user interface which would read all classes marked with [Exercise] and use reflection to generate a user interface.
We were later asked to implement our own methods, and later still to understand how the user interface 'magically' changed to include all the new methods we implemented.
Extremely useful, but often not well understood.
The idea behind this was to be able to query any GUI object's properties and present them in a GUI to be customized and preconfigured. Since then its uses have been extended and have proved to be feasible.
EDIT: spelling
It's very useful for dependency injection. You can explore loaded assemblies' types implementing a given interface with a given attribute. Combined with proper configuration files, it proves to be a very powerful and clean way of adding new inherited classes without modifying the client code.
Also, it helps if you are writing an editor that doesn't really care about the underlying model but rather about how the objects are structured directly (à la System.Windows.Forms.PropertyGrid).
Without reflection no plugin architecture will work!
Very simple example in Python. Suppose you have a class that has 3 methods:
class SomeClass(object):
    def methodA(self):
        pass  # some code
    def methodB(self):
        pass  # some code
    def methodC(self):
        pass  # some code
Now, in some other class you want to decorate those methods with some additional behaviour (i.e. you want that class to mimic SomeClass, but with extra functionality).
This is as simple as:
class SomeOtherClass(object):
    def __init__(self, someclass_instance):
        self.someclass_instance = someclass_instance
    def __getattr__(self, attr_name):
        # do something nice, then delegate to the method the caller requested
        return lambda *args, **kwargs: getattr(self.someclass_instance, attr_name)(*args, **kwargs)
With reflection, you can write a small amount of domain-independent code that doesn't need to change often, versus writing a lot more domain-dependent code that needs to change more frequently (such as when properties are added/removed). With established conventions in your project, you can perform common functions based on the presence of certain properties, attributes, etc. Data transformation of objects between different domains is one example where reflection really comes in handy.
Or a simpler example within a domain, where you want to transform data from the database into data objects without needing to modify the transformation code when properties change, as long as conventions are maintained (in this case matching property names and a specific attribute):
///--------------------------------------------------------------------------------
/// <summary>Transform data from the input data reader into the output object. Each
/// element to be transformed must have the DataElement attribute associated with
/// it.</summary>
///
/// <param name="inputReader">The database reader with the input data.</param>
/// <param name="outputObject">The output object to be populated with the input data.</param>
/// <param name="filterElements">Data elements to filter out of the transformation.</param>
///--------------------------------------------------------------------------------
public static void TransformDataFromDbReader(DbDataReader inputReader, IDataObject outputObject, NameObjectCollection filterElements)
{
    try
    {
        // add all public properties with the DataElement attribute to the output object
        foreach (PropertyInfo loopInfo in outputObject.GetType().GetProperties())
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of property to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.PropertyType.UnderlyingSystemType.FullName, false), null);
                            }
                        }
                    }
                }
            }
        }
        // add all fields with the DataElement attribute to the output object
        foreach (FieldInfo loopInfo in outputObject.GetType().GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance))
        {
            foreach (object loopAttribute in loopInfo.GetCustomAttributes(true))
            {
                if (loopAttribute is DataElementAttribute)
                {
                    // get name of field to transform
                    string transformName = DataHelper.GetString(((DataElementAttribute)loopAttribute).ElementName).Trim().ToLower();
                    if (transformName == String.Empty)
                    {
                        transformName = loopInfo.Name.Trim().ToLower();
                    }
                    // do transform if not in filter field list
                    if (filterElements == null || DataHelper.GetString(filterElements[transformName]) == String.Empty)
                    {
                        for (int i = 0; i < inputReader.FieldCount; i++)
                        {
                            if (inputReader.GetName(i).Trim().ToLower() == transformName)
                            {
                                // set value, based on system type
                                loopInfo.SetValue(outputObject, DataHelper.GetValueFromSystemType(inputReader[i], loopInfo.FieldType.UnderlyingSystemType.FullName, false));
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        bool reThrow = ExceptionHandler.HandleException(ex);
        if (reThrow) throw;
    }
}
One usage not yet mentioned: while reflection is generally thought of as "slow", it's possible to use Reflection to improve the efficiency of code which uses interfaces like IEquatable<T> when they exist, and uses other means of checking equality when they do not.
In the absence of reflection, code that wanted to test whether two objects were equal would have to either use Object.Equals(Object) or else check at run-time whether an object implemented IEquatable<T> and, if so, cast the object to that interface. In either case, if the type of thing being compared was a value type, at least one boxing operation would be required.
Using Reflection makes it possible to have a class EqualityComparer<T> automatically construct a type-specific implementation of IEqualityComparer<T> for any particular type T, with that implementation using IEquatable<T> if it is defined, or using Object.Equals(Object) if it is not. The first time one uses EqualityComparer<T>.Default for any particular type T, the system will have to go through more work than would be required to test, once, whether a particular type implements IEquatable<T>. On the other hand, once that work is done, no more run-time type checking will be required since the system will have produced a custom-built implementation of EqualityComparer<T> for the type in question.
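A small usage sketch of the mechanism described above:
using System.Collections.Generic;

// first access builds a type-specific comparer once, via reflection;
// int implements IEquatable<int>, so Equals runs without boxing
EqualityComparer<int> comparer = EqualityComparer<int>.Default;
bool same = comparer.Equals(42, 42); // true

// a type that lacks IEquatable<T> would get a comparer that
// falls back to Object.Equals(Object) instead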
