What is the point of @WebInitParam? - servlets

@WebInitParam is an annotation that goes at class level.
It defines initialization parameters for the servlet.
I would like to know: what is the difference between doing this and using static variables, and why do it the @WebInitParam way rather than using statics?
What does defining k/v pairs as @WebInitParams allow you to do that you couldn't do if you declared static variables instead?
I have looked, and all I can find is a million people explaining how to define @WebInitParams. Well, yes, that's the easy bit. What I can actually do with them is what really interests me.
Thanks very much.

From a "raison d'etre" perspective, the annotation exists as a better design and architecture alternative to just peppering a class with static fields. #WebInitParam is a self-documenting approach to the initialization parameters a servlet or filter class needs, available via web.xml as well. It serves this purpose for the end developers as well as the JavaEE platform as a whole.
Think about it this way: in a vanilla project, you have the option of hardcoding a bunch of parameters in the class as static fields, or defining the same parameters in a property file. Which would you pick? Under what circumstances?
From a purely functional angle, apart from the function of using the annotation to override defaults set in web.xml, one other major reason is that you will sometimes need to get a hold of those initialization parameters from another component in your application. Using the annotation essentially increases the visibility of the parameter. In JSF for example, the API allows you to retrieve FacesServlet initialization parameters with:
//get all the parameters
FacesContext.getCurrentInstance().getExternalContext().getInitParameterMap()
//get a specific parameter
FacesContext.getCurrentInstance().getExternalContext().getInitParameter("aParameter");
In JSF 2.3, it gets even more convenient with the following CDI-enabled injection:
@Inject @InitParameterMap Map<String, String> servletParameterMap;
Bear in mind that because it's CDI, this facility is available throughout the Java EE platform, not just in web applications/JSF.
It's going to be a hassle to retrieve init parameters if the only mechanism available is a static field in the servlet class - you'd need a direct dependency on the servlet or filter class just to read the fields in it.
Separately, one could argue that you should favour context params over servlet init params, because then you get even more flexibility that isn't tied to any given servlet. That's a separate matter entirely :)

Related

Defining Symfony Services to maximize maintainability

I'm working on a big domain, for which maintainability is very important.
There are these general workers called ExcelHandlers that implement ExcelHandlerInterface (more on the interface in the ideas section). They basically get an UploadedFile as their input, upload it wherever they want, and return the read data as an associative array. Now I have created a base class, ExcelFileHandler, which does all of these tasks for any Excel file given two arguments:
1. the directory to upload the file to
2. the mapping of the Excel columns to the indexes of the associative array.
Some ExcelHandlers might have to extend ExcelFileHandler and do some more processing in order to produce the associative array of data.
The UploadedFile is always passed to the ExcelHandler from the controller.
Now here is the question: given the generic structure of ExcelFileHandler, how should I define services for specific ExcelHandlers, given that some differ from the original one only in the upload directory and the mapping array?
My Ideas:
1. The first approach involves giving the directory and the mapping as function arguments to ExcelHandlerInterface::handle. This would make the prototype something like handle(UploadedFile $file, array $mapping, $dir); $mapping and $dir would be passed to the handler by the controllers, which receive the parameters as constructor injections.
2.1 Defining the prototype of handle to be handle(UploadedFile $file). This would require the ExcelHandlers to have knowledge of $dir and $mapping. $dir will always be injected via the constructor.
2.1.1 For each individual ExcelHandler in the application, define a separate class, e.g. UserExcelHandler, ProductExcelHandler, ..., which extends ExcelFileHandler. That leaves us again with two choices.
2.1.1.1 Inject $mapping from outside, e.g.:
// in the child class
public function __construct($dir, $mapping) {
    parent::__construct($dir, $mapping);
}
2.1.1.2 Define $mapping in the constructor of the child class, e.g.:
// in the child class
public function __construct($dir) {
    $mapping = array(0 => 'name', 1 => 'gender');
    parent::__construct($dir, $mapping);
}
2.1.2 Do not create a class for each separate ExcelHandler; instead, define ExcelFileHandler as an abstract service and decorate it with the right parameters to get a concrete ExcelHandler service with the desired functionality. Obviously, ExcelHandlers with custom logic must be defined separately, and to keep the code base uniform, $mapping would always be injected from the container in this case.
In your experience, what other paths can I take and which ones yield better results in the long term?
First of all, it seems you've already put two separate things into one.
Uploading a file and reading its contents are two separate concerns, which can change separately (like you said, $directory and $mapping can change case by case). Thus I would suggest using separate services for uploading and reading the file.
See Single responsibility principle for more information.
Furthermore, for very similar reasons, I would not recommend base classes at all - I'd prefer composition over inheritance.
Imagine that you have 2 methods in your base class: upload, which stores the file to a local directory, and parse, which reads the Excel file and maps columns to some data structure.
If you need to store the file in remote storage (for example over FTP), you need to override the upload method. Let's call this service1.
If you need to parse the file differently, for example combining data from 2 sheets, you need to override the parse method. Let's call this service2.
If you need to override both of these methods while still being able to get service1 and service2, you're stuck, as you'll need to copy and paste code - there's no easy way to reuse the functionality already written for service1 and service2.
In comparison, if you have an interface with an upload method and an interface with a parse method, you can inject those 2 separate services wherever you need them, as you need them. You can mix any of the implementations already available. All of them are already tested and you do not need to write any code - you just configure the services.
One more thing to mention: there is absolutely no need to create (and name) classes by their usage. Name them by their behaviour. For example, if you have an ExcelParser which takes $mapping as a constructor argument, there is no need to call it UserExcelParser if the code itself has nothing to do with users. If you need to parse data from several sheets, just create SheetAwareExcelParser etc., not ProductExcelParser. This way you can reuse the code. Correct naming also makes the code easier to understand.
In practice, I've seen cases where a function or class is named by its usage and is then just used in another place with no renaming or refactoring. That is really not what you're looking for.
Service names (in other words, concrete objects, not classes) can of course be named by their purpose. You just configure them with whatever functionality is required (single vs. separate sheets etc.).
To summarize, I would use 2.1.2 over all the other options you've given. I would inject $dir and $mapping via DI in any case, as these do not change at runtime - they are configuration.

DefaultAnnotationHandlerMapping for portlets vs. servlets

I have been developing with Spring MVC in a Servlet environment for quite some time, mostly on Spring 3.1.x and 3.2.x, but recently moved jobs to a company that uses a mixture of Portlets and Servlets using Spring 3.0.6.
First, there seem to be two versions of DefaultAnnotationHandlerMapping, one for Portlets and one for Servlets. My limited knowledge of Portlets makes me believe they are just an extension of Servlets, so I can vaguely understand the need for two separate HandlerMappings, one to handle generic Servlets and the other to handle the specifics of Portlets. What is throwing me is how each of them seems to behave differently with regard to how it handles the @RequestMapping annotations, and how my new employer has chosen to set things up.
In the past, I have had one DispatcherServlet for the entire application. I know you can have multiple DispatcherServlets, but I never saw the benefit beyond segregation of sub-context XMLs. Anyhow, in the past I would have a DispatcherServlet mapped to "/", and then each Controller would set a mapping on top of that using @RequestMapping("/someUrl"), with another @RequestMapping on each handler method further extending the URL mapping. So, from a browser I would get something like:
http://somehost.com/myWar/myController/myMethod
where the first "/myWar" is mapped to the WAR deployment, the next "/" is mapped to the DispatchServlet, "/myController" is the Controller's mapping, and anything after is matched to some method using a combination of URL signature, request method, parameters, etc.
Now let's evaluate what I am seeing at my new firm. First, each Controller gets its own DispatcherServlet, so there are something like 20-30 DispatcherServlets defined, all with different base URIs mapped, some with the same base application context XML, some with different ones, and all using the same application context from the ContextLoaderListener. Each DispatcherServlet's path is the same as the @RequestMapping on its "mapped" Controller, so in the above example the DispatcherServlet would have the path "/myController" and the Controller would have @RequestMapping("/myController"). On top of that, at the Controller's method level, the value specified on @RequestMapping seems to totally ignore any additional pathing applied to it.
So, for instance, if I have a Controller with @RequestMapping("/myController"), followed by a method with @RequestMapping("/edit"), hitting the URL http://somehost.com/myWar/myController/edit gives me a 404. However, if I change the method-level @RequestMapping to @RequestMapping(params="actionMethod=edit"), and the URL to http://somehost.com/myWar/myController?actionMethod=edit, it maps just fine and I get my page. Further, if I change my method-level @RequestMapping to @RequestMapping(value="/edit", params="actionMethod=view"), neither http://somehost.com/myWar/myController?actionMethod=edit nor http://somehost.com/myWar/myController/edit?actionMethod=edit works at all. The problem is that now, to get to anything besides the default mapping at the method level, I have to tack on "?actionMethod=edit" instead of simply extending the URI with the added paths. It also prevents me from using a RESTful approach, since I can't define something like @RequestMapping("/somePath/{id}").
I have so much confusion in my brain: why is the DispatcherServlet mapping in web.xml overlapping the Controller's @RequestMapping, and why does the @RequestMapping at the Controller's method level not work when I set the value property to a valid URI mapping, yet work fine when I set some parameter mapping?
Oh, and sorry for the long posting. I just want to get a better understanding of what is happening here so I can either deal with it in an intelligent way, or go back to my new group and say "this should be changed, and this is why". I can extend this lengthy question with some simplified code examples if that helps.

How does Ninject work at a high level, and how does it intercept object instantiation?

At a high level, how do these dep. injection frameworks work?
I can understand if you always instantiate an object via a custom factory like:
IUser user = DepInjector.Get<User>();
I'm guessing what happens is, wherever you defined the mappings, it will look at the type you want and try to find a match; if one is found, it will instantiate the type via reflection.
Are there dep. inj. frameworks that would work like this:
IUser user = new User();
If so, how would it get the correct user, and where is it hooking into the CLR to do this? In the case of an ASP.NET website, is it any different?
If you want to know how Ninject works, then the obvious place to start is reading How Injection Works on the official wiki. It does use reflection, but it now also uses dynamic methods:
"By default, the StandardKernel will
create dynamic methods (via
System.Reflection.Emit.DynamicMethod)
that can be used to inject values into
the different injection targets. These
dynamic methods are then triggered via
delegate calls."
As for your second example, I don't believe there are any DI frameworks that would do what you ask. However, constructor injection tends to be the most common way of implementing IoC, so that when a class is constructed it knows what type to bind to via some configuration binding. So in your example, IUser would be mapped to the concrete User in the configuration bindings, so that any consuming class with an IUser parameter in its constructor would get the correct User type passed in.
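As a rough illustration of that configuration-binding plus constructor-injection flow, here is a minimal sketch using Ninject's Bind/Get API (the IUser, User and ReportService types are just hypothetical stand-ins taken from the question):

using Ninject;

public interface IUser
{
    string Name { get; }
}

public class User : IUser
{
    public string Name { get { return "Alice"; } }
}

// A consuming class: Ninject inspects the constructor, sees the IUser
// parameter, and supplies whatever concrete type IUser is bound to.
public class ReportService
{
    private readonly IUser _user;

    public ReportService(IUser user)
    {
        _user = user;
    }

    public string Describe()
    {
        return "Report for " + _user.Name;
    }
}

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IUser>().To<User>(); // the configuration binding

        // Unbound concrete types like ReportService are self-bound by default,
        // so Ninject builds the whole graph and injects User for IUser.
        var service = kernel.Get<ReportService>();
        System.Console.WriteLine(service.Describe());
    }
}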
AFAIK there's no way to "hook into" object instantiation with the CLR. The only way to use DI in the second case would be to employ an assembly rewriter (i.e. a postprocessor similar to PostSharp) to replace the call to new with a call to the DI factory method (i.e. GetUser) in the compiled code.

ASP.NET MVC RouteBase and IoC

I am creating a custom route by subclassing RouteBase. I have a dependency in there that I'd like to wire up with IoC. The method GetRouteData just takes HttpContext, but I want to add in my unit of work as well....somehow.
I am using StructureMap, but info on how you would do this with any IoC framework would be helpful.
Well, here is our solution. Many little details may be omitted, but the overall idea is here. This answer may be a bit off-topic for the original question, but it describes a general solution to the problem.
I'll try to explain the part that is responsible for plain custom HTML pages that are created by users at runtime and therefore can't have their own Controller/Action. So the routes either have to be built somehow at runtime or be "catch-all" routes with a custom IRouteConstraint.
First of all, let's state some facts and requirements.
We have some data and some metadata about our pages stored in the DB;
We don't want to generate a (hypothetically) whole million of routes for all existing pages beforehand (i.e. on application startup), because things can change while the application is running and we don't want to deal with pushing those changes into the global RouteCollection.
So we do it this way:
1. PageController
Yes, a special controller that is responsible for all our content pages. And there is only one action, Display(int id) (actually we have a special ViewModel as the parameter, but I used an int id for simplicity).
The page with all its data is resolved by ID inside that Display() method. The method itself returns either a ViewResult (strongly typed to PageViewModel) or a NotFoundResult when the page is not found.
2. Custom IRouteConstraint
We have to define somewhere whether the URL the user actually requested refers to one of our custom pages. For this we have a special IsPageConstraint that implements the IRouteConstraint interface. In the Match() method of our constraint we just call our PageRepository to check whether there is a page that matches the requested URL. The PageRepository is injected by StructureMap. If we find the page, we add an "id" parameter (with the value) to the RouteData dictionary, and it is automatically bound to PageController.Display(int id) by the DefaultModelBinder.
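For illustration, a minimal sketch of such a constraint (the IPageRepository and PageInfo names are hypothetical; only the IRouteConstraint signature comes from the framework):

using System.Web;
using System.Web.Routing;

// Hypothetical repository abstraction, constructor-injected by StructureMap.
public interface IPageRepository
{
    PageInfo FindByPath(string path); // returns null when no page matches
}

public class PageInfo
{
    public int Id { get; set; }
}

public class IsPageConstraint : IRouteConstraint
{
    private readonly IPageRepository _pages;

    public IsPageConstraint(IPageRepository pages)
    {
        _pages = pages;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var pagePath = values["pagePath"] as string;
        if (string.IsNullOrEmpty(pagePath))
            return false;

        var page = _pages.FindByPath(pagePath);
        if (page == null)
            return false;

        // Expose the id so it binds to PageController.Display(int id).
        values["id"] = page.Id;
        return true;
    }
}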
But we need a RouteData parameter to check against. Where do we get that? Here comes...
3. Route mapping with "catch-all" parameter
Important note: this route is defined at the very end of the route mapping list, because it is very general, not specific. We check all our explicitly defined routes first and only then check for a Page (and that is easy to change if needed).
We simply map our route like this:
routes.MapRoute("ContentPages",
"{*pagePath}",
new { controller = "Page", action = "Display" }
new { pagePath = new DependencyRouteConstraint<IsPageConstraint>() });
Stop! What is that DependencyRouteConstraint thing that appeared in the mapping? Well, that's what does the trick.
4. DependencyRouteConstraint<TConstraint> class
This is just another generic implementation of IRouteConstraint, which takes the "real" IRouteConstraint (IsPageConstraint) and resolves it (the given TConstraint) only when the Match() method is called. It uses dependency injection, so our IsPageConstraint instance has all its actual dependencies injected!
Our DependencyRouteConstraint then just calls dependentConstraint.Match(), passing along all the parameters and thus delegating the actual "matching" to the "real" IRouteConstraint.
Note: this class actually has a dependency on the ServiceLocator.
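For illustration, a minimal sketch of what that wrapper might look like, assuming the Common Service Locator is backed by the StructureMap container (only the IRouteConstraint contract is framework-given; the rest reflects the design described above):

using System.Web;
using System.Web.Routing;
using Microsoft.Practices.ServiceLocation;

public class DependencyRouteConstraint<TConstraint> : IRouteConstraint
    where TConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        // Resolve the "real" constraint only when Match() is called, so it
        // gets its dependencies (e.g. the PageRepository) injected.
        var dependentConstraint = ServiceLocator.Current.GetInstance<TConstraint>();

        // Delegate the actual matching to the resolved constraint.
        return dependentConstraint.Match(httpContext, route, parameterName,
                                         values, routeDirection);
    }
}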
Summary
That way we have:
Our Route is clear and clean;
The only class that has a dependency on Service Locator is DependencyRouteConstraint;
Any custom IRouteConstraint uses dependency injection whenever needed;
???
PROFIT!
Hope this helps.
So, the problem is:
Route must be defined beforehand, during Application startup
A Route's responsibility is to map the incoming URL pattern to the right Controller/Action to perform some task on the request, and vice versa: to generate links using that mapping data. Period. Everything else is a "Single Responsibility Principle" violation, which is actually what led to your problem.
But UoW dependencies (like an NHibernate ISession or an EF ObjectContext) must be resolved at runtime.
And that is why I don't see children of the RouteBase class as a good place for a dependency that does DB work. It makes everything tightly coupled and non-scalable, and it makes proper dependency injection practically impossible.
From where you are now (I guess there is already some kind of working system), you actually have just one more or less viable option:
Use the Service Locator pattern: resolve your UoW instance right inside the GetRouteData method (use the CommonServiceLocator backed by your StructureMap IContainer). That is simple, but not a really nice thing, because this way your Route depends on the static Service Locator itself.
With CSL you just have to call, inside GetRouteData:
var uow = ServiceLocator.Current.GetInstance<IUnitOfWork>();
or with plain StructureMap (without the CSL facade):
var uow = ObjectFactory.GetInstance<IUnitOfWork>();
and you're done. Quick and dirty. And the keyword here is actually "dirty" :)
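In context, that quick-and-dirty version might look something like the sketch below. Only RouteBase, RouteData and MvcRouteHandler are framework types; IUnitOfWork, PageRecord and the lookup logic are hypothetical stand-ins:

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using StructureMap;

// Hypothetical unit-of-work abstraction registered in the container.
public interface IUnitOfWork
{
    PageRecord FindPageByPath(string path);
}

public class PageRecord
{
    public int Id { get; set; }
}

public class PageRoute : RouteBase
{
    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // Service-locator style resolution: it works, but the dependency is hidden.
        var uow = ObjectFactory.GetInstance<IUnitOfWork>();

        var path = httpContext.Request.AppRelativeCurrentExecutionFilePath;
        var page = uow.FindPageByPath(path);
        if (page == null)
            return null; // not one of our pages; let the next route try

        var routeData = new RouteData(this, new MvcRouteHandler());
        routeData.Values["controller"] = "Page";
        routeData.Values["action"] = "Display";
        routeData.Values["id"] = page.Id;
        return routeData;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext,
                                                   RouteValueDictionary values)
    {
        // Outbound URL generation is omitted in this sketch.
        return null;
    }
}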
Sure, there is a much more flexible solution, but it needs a few architectural changes. If you provide more details on exactly what data you get in your routes, I can try to explain how we solved our Pages routing problem (using DI and a custom IRouteConstraint).

ASP.NET MVC routing based on data store values

How would you tackle this problem:
I have data in my data store. Each item has information about:
URL = an arbitrary number of first route segments that will be used with requests
some item type = display will be related to this type (read on)
title = used for example in navigation around my application
etc.
Since each item can have an arbitrary number of segments, I created a custom route that allows me to handle this kind of request without using the default route and a single greedy route parameter.
The item type will actually define how the content of a particular item should be displayed to the client. I was thinking of creating just as many controllers so as not to have too much code in a single controller action.
So how would you do this in ASP.NET MVC, or what would you suggest as the most feasible way of doing this?
Edit: A few more details
My items are stored in a database. Since they can have very different types (not inheritable), I thought of creating just as many controllers. But questions arise:
How should I create these controllers on each request, since they are related to some dynamic data? I could create my own controller factory or route handler, or possibly use some other extension points as well, but which one would be best?
I want to use MVC's basic functionality of things like Html.ActionLink(action, controller, linkText), or make my own extension like Html.ActionLink(itemType, linkText) to make it even more flexible. So ActionLink should create correct routes based on the route data (because that's what's going on in the background: it goes through the routes top down and sees which one returns a resulting URL).
I was thinking of having a configuration of the relation between itemType and route values (controller, action, defaults). The defaults setting may be tricky, since defaults would have to be deserialized from a configuration string into an object (which may well be complex). So I thought of maybe even having a configurable relation between itemType and a class type that implements a certain interface, as written in the example below.
My routes can be changed (or new ones added) in the data store, but new types should not be added. The configuration would support these scenarios, because it links types with route defaults.
Example:
Interface definition:
public interface IRouteDefaults
{
    object GetRouteDefaults();
}
Interface implementation example:
public class DefaultType : IRouteDefaults
{
    public object GetRouteDefaults()
    {
        return new {
            controller = "Default",
            action = "Show",
            itemComplex = new Person {
                Name = "John Doe",
                IsAdmin = true
            }
        };
    }
}
Configuration example:
<customRoutes>
<route name="Cars" type="TypeEnum.Car" defaults="MyApp.Routing.Defaults.Car, MyApp.Routing" />
<route name="Fruits" type="TypeEnum.Fruit" defaults="MyApp.Routing.Defaults.Fruit, MyApp.Routing" />
<route name="Shoes" type="TypeEnum.Shoe" defaults="MyApp.Routing.Defaults.Shoe, MyApp.Routing" />
...
<route name="Others" type="TypeEnum.Other" defaults="MyApp.Routing.Defaults.DefaultType, MyApp.Routing" />
</customRoutes>
To address the performance hit, I can cache my items and work with in-memory data, avoiding a database hit on each request. These items don't change very often, so I could cache them for, say, 60 minutes without degrading the application experience.
There is no significant performance issue whether you define a complex routing dictionary or just have one generic routing entry and handle all the cases yourself. Code is code.
Even if your data types are not inheritable, most likely you have common display patterns, e.g.:
a list of titles and summary text
an item display, with title, image, description
etc.
If you can break your site down into a finite number of display patterns, then you only need to build those finite controllers and views.
You then provide a services layer, selected by the routing parameter, that uses a data transfer object (DTO) pattern to take the source data and move it into the standard data structure for the view, as sketched below.
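As a rough sketch of that idea in ASP.NET MVC terms (ItemSummaryDto, ISummaryListService and SummaryListController are hypothetical names; the point is one controller/view per display pattern, fed by a service that maps store data into a shared DTO):

using System.Collections.Generic;
using System.Web.Mvc;

// A standard view model shared by every item type that uses the
// "list of titles and summary text" display pattern.
public class ItemSummaryDto
{
    public string Title { get; set; }
    public string Summary { get; set; }
    public string Url { get; set; }
}

// One service per display pattern (not per item type); the routing
// parameter tells it which item type's data to fetch and map.
public interface ISummaryListService
{
    IEnumerable<ItemSummaryDto> GetSummaries(string itemType);
}

// A single controller and view serve all item types that map to
// this display pattern.
public class SummaryListController : Controller
{
    private readonly ISummaryListService _service;

    public SummaryListController(ISummaryListService service)
    {
        _service = service;
    }

    public ActionResult Index(string itemType)
    {
        IEnumerable<ItemSummaryDto> model = _service.GetSummaries(itemType);
        return View(model);
    }
}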
The general concept you mention is not at all uncommon and there are a few things to consider:
The moment I hear about URL routing taking a dependency on data coming from a database, the first thing I think about is performance. One way to alleviate potential performance concerns is to use the built-in Route class with a very generic pattern, such as "somethingStatic/{*everythingElse}". This way, if the URL doesn't start with "somethingStatic", it will immediately fail to match and routing will continue to the next route. You'll then get all the interesting data in the catch-all "everythingElse" parameter.
You can then associate this route with a custom route handler that derives from MvcRouteHandler and overrides GetHttpHandler to go to the database, make sense of the "everythingElse" value, and perhaps dynamically determine which controller and action should handle the request. You can get/set the routing values by accessing requestContext.RouteData.Values.
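A sketch of what that could look like; the Route registration and the MvcRouteHandler/GetHttpHandler override are real framework APIs, while RouteItemLookup and its RouteItem result are hypothetical stand-ins for the data-store lookup:

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

// Hypothetical lookup against the (ideally cached) data store.
public class RouteItem
{
    public int Id { get; set; }
    public string ControllerName { get; set; }
    public string ActionName { get; set; }
}

public static class RouteItemLookup
{
    public static RouteItem FindByPath(string path)
    {
        // Query the cache/database here; return null when nothing matches.
        return null;
    }
}

public class DataStoreRouteHandler : MvcRouteHandler
{
    protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        var values = requestContext.RouteData.Values;
        var path = values["everythingElse"] as string ?? string.Empty;

        var item = RouteItemLookup.FindByPath(path);
        if (item != null)
        {
            // Dynamically decide which controller/action handles this request.
            values["controller"] = item.ControllerName;
            values["action"] = item.ActionName;
            values["id"] = item.Id;
        }

        return base.GetHttpHandler(requestContext);
    }
}

// Registration, e.g. in RegisterRoutes:
// routes.Add(new Route(
//     "somethingStatic/{*everythingElse}",
//     new RouteValueDictionary(new { controller = "Home", action = "Index" }),
//     new DataStoreRouteHandler()));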
Whether to use one controller and one action, or many of one, or many of each, is a discussion unto itself. The question boils down to how many different types of data you have. Are they mostly similar (they're all books, but some are hardcover and some are softcover)? Completely different (some are cars, some are books, and some are houses)? The answer should be the same answer you'd have if this were a computer programming class and you had to decide, from an OOP perspective, whether they all share a base class with their own derived types, or whether they could easily be represented by one common type. If they're all different types then I'd recommend different controllers - especially if each requires a distinct set of actions. For example, for a house you might want to see an inspection report. But for a book you might want to preview the first five pages and read a book review. These items have nothing in common: the actions for one would never be used for the other.
The problem described in #3 can also occur in reverse, though: What if you have 1,000 different object types? Do you want 1,000 different controllers? Without having any more information, I'd say for this scenario 1,000 controllers is a bit too much.
Hopefully these thoughts help guide you to the right solution. If you can provide more information about some of the specific scenarios you have (such as what kind of objects these are and what actions can apply to them) then the answer can be refined as well.
