The Kephas.Model package seems a bit heavyweight for my requirement of extensible metadata. Is there a lighter alternative?

My requirement is to use some kind of metadata system for the entities we use, but an extensible one, meaning that we need to support custom metadata in addition to querying for properties and methods. The standard Type/TypeInfo classes are useful to some extent, but they cannot be customized to add specific properties supporting the various patterns we have: tree nodes, master-detail, and others.
Kephas.Model provides a complex infrastructure for supporting such cases, including advanced features like mixins and dimensions, but this is a bit too much for our system. We need something more lightweight for the code-first entities we have.
Is there a suggestion about what we could use for this kind of requirement? I noticed the Kephas.Reflection namespace, and it looks like a proper candidate, but I am not sure how to use it properly.

That's right, the Kephas.Runtime namespace provides lightweight, extensible metadata through the base IRuntimeTypeInfo interface (in the Kephas.Core package). There are two main ways of accessing it, both using extension methods:
// get the type information from an object/instance.
var typeInfo = obj.GetRuntimeTypeInfo();
// convert a Type/TypeInfo to an IRuntimeTypeInfo.
typeInfo = type.AsRuntimeTypeInfo();
From here on you can manipulate properties, fields, methods, annotations (attributes), and so on, typically indexed by their names. A very nice feature is that IRuntimeTypeInfo is an expando, which allows dynamic values to be added at runtime.
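For example, attaching a custom metadata value could look roughly like this (a sketch only: Customer is a placeholder entity, and the indexer-style access assumes the usual expando behavior):
// convert the entity type and attach a custom metadata value.
var typeInfo = typeof(Customer).AsRuntimeTypeInfo();
typeInfo["IsTreeNode"] = true;
// code holding the same type info can later read the flag back.
var isTreeNode = (bool)typeInfo["IsTreeNode"];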
Please note that IRuntimeTypeInfo specializes ITypeInfo (in the Kephas.Reflection namespace), which is the base interface in Kephas.Model, too. You are right that Kephas.Model provides more complex functionality, which might make sense for a more elaborate metadata model including entities, services, activities, and whatever classifiers you can think of, as well as for loading the model from sources other than .NET reflection (JSON, XML, database, and so on).
Another aspect: up to version 5.2.0, IRuntimeTypeInfo is implemented by the sealed RuntimeTypeInfo class. Starting with version 5.3.0, it will be possible to provide your own implementations, and more than one of them.

Related

What is the difference between Mirror based reflection and traditional reflection?

Some languages like Dart use mirror-based reflection, so, in simple terms, what is the difference between such an implementation and traditional reflection as you see in C# or Java?
Update:
I found this excellent (and somewhat quirky) video by Gilad Bracha on mirror-based reflection in Newspeak.
http://www.hpi.uni-potsdam.de/hirschfeld/events/past/media/100105_Bracha_2010_LinguisticReflectionViaMirrors_HPI.mp4 (mirror stuff starts at 7:42)
For many uses, I don't think mirrors will be that different from Java reflection. The most important thing to understand about mirrors is that they decouple the reflection API from the standard object API, so instead of obj.getClass() you use reflect(obj). It's a seemingly small difference, but it gives you a few nice things:
The object API isn't polluted, and there's no danger of breaking reflection by overriding a reflective method.
You can potentially have different mirror systems. Say, one that doesn't allow access to private methods. This could end up being very useful for tools.
The mirror system doesn't have to be included. For compiling to JS this can be important. If mirrors aren't used, then there's no out-of-band way to access code, and pruning becomes viable.
Mirrors can be made to work on remote code, not just local code, since you don't need the reflected object to be in the same Isolate or VM as the mirror.
Here's how mirrors differ from reflection in Java and JavaScript when used to get an object's methods:
Java:
myObject.getClass().getMethods(); // returns an array
Dart:
reflect(myObject).type.methods; // returns a map
JavaScript:
var methods = [];
for (var m in myObject) {
    if (typeof myObject[m] === 'function') {
        methods.push(m);
    }
}
Your best bet is this article by Gilad Bracha, Dart's co-designer and specification author. To get a glimpse, it will probably be enough to read the first chapter.
The abstract claims that mirrors adhere to three necessary principles that aren't followed by traditional reflection:
We identify three design principles for reflection and metaprogramming facilities in object oriented programming languages. Encapsulation: meta-level facilities must encapsulate their implementation. Stratification: meta-level facilities must be separated from base-level functionality. Ontological correspondence: the ontology of meta-level facilities should correspond to the ontology of the language they manipulate. Traditional/mainstream reflective architectures do not follow these precepts. In contrast, reflective APIs built around the concept of mirrors are characterized by adherence to these three principles.
I would also point you to this other recent answer by Gilad, where he lists some other great reference material: How to get concrete object of a static method via mirror API?

Is reflection actually useful apart from reverse engineering?

Languages such as Java and PHP support reflection, which allows objects to provide metadata about themselves. Are there any legitimate use cases where you would need to be able to do something like ask an object what methods it has outside of the realm of reverse engineering? Are any of those use cases actually implemented today?
Reflection is used extensively in Java by frameworks, which leverage it at runtime to operate on other code dynamically. Without reflection, all links between pieces of code must be made at compile time (statically).
So, for example, any useful plug-in framework (OSGi, JSPF, JPF) leverages Reflection. Any injection framework (Spring, Guice, etc.) leverages Reflection.
Any time you want to write a piece of code that will interact with another piece of code without having that piece of code available when compiling, Reflection is the way forward in Java.
However, this is best left to frameworks and should be encapsulated.
There certainly are good use cases. For example, obtaining developer-provided metadata: Java APIs increasingly use annotations to provide info about methods/fields/classes and their use, like input validation, binding to data representations, and so on. You could use these at compile time to generate metadata descriptors and work from those, but doing it at runtime requires reflection. Even if you used the metadata descriptors, they'd end up containing things like class, method and field names that would need to be accessed via reflection.
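The same idea carries over to C# attributes; as a rough sketch (MaxLengthAttribute and User are made up for illustration), a validator can discover the rule at runtime purely via reflection:
using System;
using System.Reflection;

// a made-up validation attribute; real frameworks ship their own.
[AttributeUsage(AttributeTargets.Property)]
class MaxLengthAttribute : Attribute
{
    public int Length { get; set; }
}

class User
{
    [MaxLength(Length = 50)]
    public string Name { get; set; }
}

class AttributeDemo
{
    static void Main()
    {
        // no compile-time knowledge of the rule is needed here.
        var rule = typeof(User).GetProperty("Name")
                               .GetCustomAttribute<MaxLengthAttribute>();
        Console.WriteLine(rule.Length);   // prints 50
    }
}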
Another use case: dynamic languages. Take Ruby... It allows you to check up-front whether an object would respond to a method name before trying to call that method. Something like that requires reflection.
Or how about when a class or method name must be provided from outside compiled code, like when selecting an implementation of some API? That's just going to be a bit of text; looking up what it resolves to comes down to reflection.
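In C#, for instance, the moral equivalent is resolving a type name read from configuration and instantiating it reflectively (a sketch: the type name and the IPaymentGateway interface are made up):
// the assembly-qualified type name would normally come from a config file.
var typeName = "MyCompany.Payments.StripeGateway, MyCompany.Payments";
var type = Type.GetType(typeName, throwOnError: true);
var gateway = (IPaymentGateway)Activator.CreateInstance(type);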
Frameworks like Spring or Hibernate make extensive use of reflection to inspect a class and see the annotations.
Frameworks for debugging, serialization, logging, testing...

The standard map/associative-array structure to use in flash actionscript 3?

I'm relatively new to Flash, and am confused about what I should use to store and retrieve key-value pairs. After some googling I've found various map-like things to choose from:
1) Use an Object:
var map:Object = new Object();
map["key"] = "value";
The problem is that it seems to lack some very basic features. For example, to even get the size of the map I'd have to write a util method.
2) Use a Dictionary
What does this standard library class provide over the simple Object? It seems silly for it to exist if it's functionally identical to Object.
3) Go download some custom HashMap/HashTable implementation from the web.
I've used a lot of modern languages, and this is the first time I haven't been able to find a library implementation of an associative array within 5 minutes. So I'd like to get some best-practice advice from an experienced flash developer.
Thanks!
Maybe your google foo is a bit weak today?
But you're right, the built-in Object doesn't provide many extra features.
Dictionaries have at least two important differences from Objects:
They can use any object as a key. For Objects, the key has to be a string (if you pass any other object, the toString() method will be implicitly called).
You can optionally set the keys to be weakly referenced (this doesn't make much sense for Objects).
Anyway, there are a number of open-source libraries that implement various data structures and collection types.
Just off the top of my head:
http://lab.polygonal.de/ds/
http://sibirjak.com/blog/index.php/collections/as3commons-collections/
You should use whichever option is needed for a particular situation. There is no correct answer to say "always use 'x'".
I've found that the vast majority of the time, Object-based dictionaries are all that's needed. They're extremely fast and easy to use, and I almost never need the extra features. They convert any key to a string, which works well for most situations.
Dictionary provides some extra features but I've never needed object-based keys (and I've been programming in Flex since 1.0 alpha 1). The only time I've used a Dictionary is as a hack to get access to a weak reference since Flex doesn't provide a simple weak reference class.
More complex dictionaries are available which provide more functionality. If you really need that functionality, they will be useful, but I wouldn't recommend jumping in to use them when a plain old Object will work fine for your needs. That said, if you do turn out to need them in your application, it might be best to use the same 3rd-party dictionary everywhere in that app for consistency and ease of maintenance.

Single overloaded method at BLL layer vs BLL methods having one-to-one correspondence with DAL methods

Assume we create a 3-tier module which enables us to display/create/edit articles. Articles are:
• organized into categories
• before an article can be published, an admin has to approve it by setting the Approve field of the Articles DB table to true
• by setting the MembersOnly field (Articles table) the admin can also specify whether a particular article can be viewed by anybody or only by registered users
• the Articles table also has an ExpiredDate field, which tells when the article will expire and thus no longer be published
a) At the DAL layer we provide the methods GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory for retrieving articles from the DB.
At the BLL layer we use GetArticles() overloads to call all of the above DAL methods.
What are some of the benefits of using a single overloaded method at the BLL layer instead of BLL methods having a one-to-one correspondence with DAL methods? The only advantage I can think of is that this way the same ObjectDataSource control can call two or more of the GetArticles() overloads, depending on the values of its parameters, for example:
public static List<Article> GetArticles(bool publishedOnly)
{
    if (!publishedOnly)
        return GetArticles();
    ...
}
If you don't also design the UI layer and thus can't be sure which methods the UI programmer will prefer the most, would it be best practice for the BLL layer to provide the four overloads of GetArticles plus GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory?
2) When designing DAL methods to retrieve data from the DB, how can we know/predict in advance (without first designing the UI) exactly which data access methods we should create at the DAL layer?
Namely, in the previous example I had several methods for retrieving articles based on a number of parameters (the category they belong to, whether we only want published articles, etc.). Assuming I'm selling this module to third-party UI developers, there is no way to know which data access methods they would prefer the most:
a) so should I create as many data access methods as I can think of (one for getting all articles that are already expired, one for getting all articles that are already expired but were never published, one for getting all articles that are not published, one for getting all articles that can be viewed by registered users only...)?
b) Even if all three layers are written by myself, should I still create as many data access methods as I can think of?
thanx
EDIT:
A common way to achieve this is to use interfaces to define the behavior of the API.
a) I'm not sure I understand this. Which class should implement this interface? Perhaps the DLL class? In other words, if the name of my DLL class is Article, then a third party would derive a class named ChildArticle from Article, where ChildArticle would also implement this interface? Or did you mean something else?
b) Anyway, as far as I understand it, providing an interface (which defines additional methods to retrieve articles from the DB) would also require the DAL class to already have the appropriate methods defined, which would be called by the methods declared in the interface?
To your point, I believe it is a good idea to prefer fewer coarse-grained methods in the BLL to cover all the functionality required by an entire business operation.
I'm not familiar with this term, but you're probably suggesting that we should prefer overloaded GetArticles() over GetAllArticles, GetArticlesByCategory, GetPublishedArticles and GetPublishedArticlesByCategory?
A) The design of an API is strictly related to what it is meant to achieve and by whom it will be used.
In practice, this means that you should know the target audience of your API, and give them only what they need to get the job done.
Unless I personally interview the people that would buy my product, I can only generally guess which methods they would find useful, but within that space there are still any number of possible methods I could define. Thus, how should I know whether they would also find use for, say, a GetArticles() overload which retrieves articles that have already expired?!
On the other side it is perfectly fine to have many smaller data-centric methods in the DAL to work with specific pieces of data.
If not the BLL, should the DAL have as many data access methods as I can come up with (for the particular target audience, of course)?
SECOND EDIT:
A few extensibility points can be built into the API to obtain a certain degree of flexibility. A common way to achieve this is to use interfaces to define the behavior of the API. This will allow consumers to replace or extend the pieces of the built-in functionality by providing custom implementations.
Assume I create a BLL layer and then provide some additional interfaces, which consumers could implement to extend the BLL's built-in functionality. But for consumers to be able to implement these interfaces, they will need to have access to the BLL's source code, right? But what if I don't want consumers to view the BLL's source code?
Interfaces should exist between layers. More specifically, classes should interact with classes from other layers exclusively through interfaces
a) So even the DAL's built-in functionality should be exposed through interfaces? But why? Namely, if we used an abstract class instead of interfaces, then this class could already implement some utility functions common to all provider classes that inherit from it?! On the other hand, if the DAL uses interfaces instead, then utility functions common to all providers will have to be implemented once for each provider, which could mean a lot of redundant coding?!
b) Anyway, I don't quite see the benefit (except when we provide interfaces with which consumers could extend the basic functionality) in having classes from different layers interact through interfaces?
For added clarity, instead of overloading methods to work with different parameters, I believe it is better to have one method that accepts a single parameter. This parameter would be an object containing all the data for the method to work with. Some of that data could be required, some could be optional and would influence the effect of the operation.
If I know the UI will make extensive use of ObjectDataSource controls, should I still prefer the BLL to define a single method (taking as its parameter an object with all the data the method needs to work with) instead of method overloads?
cheers mate
I took the liberty to summarize your post in two main questions. I hope I managed to capture the essence of what you are asking.
Q) What is the relationship between the interfaces exposed by the DAL and the ones exposed by the BLL?
A) The BLL is an outward-facing API, and as such it should implement functionality that is useful to the external consumers of the application and expose it in a way that makes sense to them.
The DAL, on the contrary, is an inward-facing API that exposes functionality to retrieve and persist data in a way that hides the details of the storage mechanism being used.
In short, the DAL focuses on how data is represented and managed internally in the application, while the BLL focuses on exposing data in a way that is meaningful to consumers.
Q) How many methods should a public API have, and which ones?
A) The design of an API is strictly related to what it is meant to achieve and by whom it will be used. In practice, this means that you should know the target audience of your API, and give them only what they need to get the job done.
Since it is impossible to predict all the possible ways an API will be used, it is important to decide which main use cases to support, and work to make them really straightforward in the API. A good principle to keep in mind is what Alan Kay once said:
Simple things should be simple,
complex things should be possible.
A few extensibility points can be built into the API to obtain a certain degree of flexibility. A common way to achieve this is to use interfaces to define the behavior of the API. This will allow consumers to replace or extend the pieces of the built-in functionality by providing custom implementations.
To your point, I believe it is a good idea to prefer fewer coarse-grained methods in the BLL to cover all the functionality required by an entire business operation. On the other side, it is perfectly fine to have many smaller data-centric methods in the DAL to work with specific pieces of data.
UPDATE:
About interfaces
Interfaces should exist between layers. More specifically, classes should interact with classes from other layers exclusively through interfaces. For example, the DAL should expose interfaces for the classes used to access data, like IOrderHeaderTable or IOrderRepository depending on the design pattern being used.
The BLL should expose classes used to execute business operations, like IOrderManagementWorkflow, or ICustomerService.
Note: common functionality inside a layer can still be placed in base classes, since in modern Object-Oriented languages like C#, VB.NET and Java a class can both inherit from a base class and implement one or more interfaces.
Also, external parties who wish to customize the built-in functionality by implementing any of the provided public interfaces can do so without needing access to the source code. Interfaces should, however, be self-describing and well documented, in order to make it easy for extenders to understand their semantics.
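As a rough sketch of what this shape could look like in C# (the member signatures and the Order type are made up; only the interface names follow the examples above):
// DAL contract: data-centric, hides the storage mechanism.
public interface IOrderRepository
{
    Order GetById(int orderId);
    void Save(Order order);
}

// BLL contract: business-centric, consumed by the outside world.
public interface IOrderManagementWorkflow
{
    void CancelOrder(int orderId);
}

public class Order
{
    public int Id { get; set; }
    public bool Cancelled { get; set; }
}

// The BLL implementation depends only on the DAL interface, so the concrete
// repository (SQL, in-memory, a custom provider) can be swapped, and external
// parties can supply their own IOrderRepository without seeing this source code.
public class OrderManagementWorkflow : IOrderManagementWorkflow
{
    private readonly IOrderRepository orders;

    public OrderManagementWorkflow(IOrderRepository orders)
    {
        this.orders = orders;
    }

    public void CancelOrder(int orderId)
    {
        var order = orders.GetById(orderId);
        order.Cancelled = true;   // the business rule lives here, not in the DAL
        orders.Save(order);
    }
}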
About the BLL
The BLL should be explicit about the business logic it supports. Therefore it is generally a good idea to have methods that are directly related to business operations. For added clarity, instead of overloading methods to work with different parameters, I believe it is better to have one method that accepts a single parameter. This parameter would be an object containing all the data for the method to work with. Some of that data could be required, some could be optional and would influence the effect of the operation.
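For example, instead of the four GetArticles overloads discussed above, the BLL could expose a single method taking a request object (a sketch: GetArticlesRequest and its members are made up, and Article is the entity from the question):
// a single parameter object: some data required, some optional filters.
public class GetArticlesRequest
{
    public bool PublishedOnly { get; set; }
    public bool IncludeMembersOnly { get; set; }
    public int? CategoryId { get; set; }    // null means "all categories"
}

public interface IArticleService
{
    // one business-facing method; the implementation decides which
    // DAL call(s) to make based on the combination of options.
    List<Article> GetArticles(GetArticlesRequest request);
}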
Implementation detail: this kind of BLL API is fully supported by the ObjectDataSource control built into ASP.NET Web Forms.
About the API
An API should contain all methods the designer can come up with, within the scope defined by the use cases the API is intended to support.

What should I use: functors, interfaces or abstract methods when writing an abstraction (compatibility) layer? (D language)

For example: a compatibility layer between scripting objects (like strings, arrays) or scripting engines (eval(), readFile(), etc.).
Without more context, I'd have to say interfaces as well. Consider that you can represent a function or delegate as an interface with a single method, and that abstract classes are just interfaces with some methods potentially already implemented.
That said, it really depends on what you're trying to accomplish. Interfaces lend themselves to cases where you have lots of objects with a common interface but potentially varying implementations. If you are, for example, designing a very simple callback system for plugins (i.e.: let the plugin hook certain events in the host application) then delegates are probably simpler and sufficient for your needs.
Also keep in mind that if you do go with interfaces, you'll probably need some way for the host to instantiate instances. The easiest way to do this is by registering a delegate with the host under some unique name.
Abstract classes are only useful if you want to use interfaces and provide a default implementation of some things. A better solution in that case is to have an actual interface instead, and provide the default implementation as a mixin.
Interfaces have my vote. That way, as long as you define the interface any developer will be able to write something compatible fairly easily without you having to distribute too much code to them.
