My Spring web application uses Servlet API 2.5 along with Spring Framework 4. It's deployed in Tomcat 9 and working fine. I am not sure why Tomcat is not complaining about it, as it requires Servlet API 4 per the documentation. Is it backward compatible, or is Spring doing some magic? Just for clarity: we are using interfaces from Servlet API 2.5 in our code, which should not compile against Servlet API 4. It compiles because we compile against 2.5, but we were expecting it to fail at runtime in Tomcat. Thanks
Are you using methods/classes that no longer exist in servlet spec 4? There isn't magic - it's downward compatible. If you tried to run, for example, AsyncEvent code in Tomcat 5.5 (servlet spec 2.4) then yes, you'd have a problem. But running old code on a new server - without using things that have totally changed or disappeared - is rarely a problem.
EDIT
You said that the "old" code has:
Map parameterMap = request.getParameterMap();
This is still valid, if not best practice. With newer compilers you may get a warning that you are not using generics, but it still compiles. And the Javadocs from that time say:
Returns: an immutable java.util.Map containing parameter names as keys and parameter values as map values. The keys in the parameter map are of type String. The values in the parameter map are of type String array.
So your old code must be casting the keys to String and the values to String[] - just like the newer code, which would be:
Map<String, String[]> parameterMap = request.getParameterMap();
Both versions will compile in recent versions of the compiler though, again, you may get warnings with the code that doesn't take advantage of generics.
The key is the use of generics - the <String, String[]> part. It's clearer code when using generics and the compiler can help with issues. With the non-generics version you could have tried to cast the key in the map to any object. It would compile but at runtime you'd likely get a ClassCastException.
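For example, here is a small self-contained sketch (using a plain HashMap to stand in for the map Tomcat returns from getParameterMap()):

import java.util.HashMap;
import java.util.Map;

public class ParameterMapCasts {
    public static void main(String[] args) {
        // Stand-in for request.getParameterMap(): keys are Strings, values are String arrays.
        Map rawParams = new HashMap();
        rawParams.put("name", new String[] { "Alice" });

        // Raw version: the old code has to cast, and any cast compiles.
        String[] okCast = (String[]) rawParams.get("name");
        System.out.println(okCast[0]);

        // Generic version: the compiler knows the types, no cast needed.
        Map<String, String[]> typedParams = new HashMap<String, String[]>();
        typedParams.put("name", new String[] { "Alice" });
        System.out.println(typedParams.get("name")[0]);

        // A wrong cast on the raw map still compiles but fails here at runtime.
        Integer wrongCast = (Integer) rawParams.get("name"); // ClassCastException
    }
}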
From the perspective of Tomcat it's still returning the same thing.
Is that clearer?
I have written a piece of code where I'm checking the size of an ArrayList like:
[1,2,3].size
All works well in the Groovy Console and with the Grails embedded Tomcat server. But once I deployed this code to WebSphere Application Server, I received an exception stating:
Exception evaluating property 'size' for java.util.ArrayList, Reason: groovy.lang.MissingPropertyException: No such property: size for class: java.lang.Integer.
After a while of debugging, testing and plenty of WTFs, I realized that there were parentheses missing from the method call. The property notation should not work, as there is no getSize() method on Collection (it's plain size()), and this all makes sense.
What's puzzling me, is why does someCollection.size work on Groovy Console and Grails?
Grails and Groovy Console version is 2.3.6
ArrayList in (at least) the Sun JDK 1.7u67 and in OpenJDK 1.6 holds a private int size field, which is accessible to Groovy. If your other environment uses another JDK, this field might not exist, and Groovy would fall back to spreading the property access over the elements ([1,2,3]*.size), which then fails.
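As a quick check, here is a small Java sketch; whether ArrayList declares such a field at all is an implementation detail of the JDK you run on, so treat this as a probe rather than a guarantee:

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;

public class ArrayListSizeField {
    public static void main(String[] args) throws Exception {
        // Throws NoSuchFieldException if this JDK's ArrayList does not declare the field.
        Field sizeField = ArrayList.class.getDeclaredField("size");
        sizeField.setAccessible(true);

        ArrayList<Integer> list = new ArrayList<Integer>(Arrays.asList(1, 2, 3));
        System.out.println("private size field = " + sizeField.get(list)); // prints 3
    }
}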
I need to upgrade an existing rather large application from Symfony 2.0.15 to Symfony 2.4.x (replace with current version).
I'm not quite sure what would be the best strategy to do so. Migration-critical features like forms or ESI are used, of course :)
Upgrade "step by step" from one major version to another (2.1, 2.2, 2.3, 2.4)
Upgrade directly from 2.0.x to 2.4
Do you have any tips or experience to share? I would appreciate it :)
Thanks,
Stephan
Each new version comes with an updated UPGRADE-2.x.md file containing all the instructions to convert your application from the immediately previous version.
I had to do that on my project as well, and I found the step-by-step method more natural and easier to manage. The fact is, there is no such file as UPGRADE-2.0-to-2.4.md that would help you out with a direct conversion to 2.4.
I would first recommend making sure that none of your code uses obsolete functionality of Symfony 2.0 (not sure if there are deprecated parts in this version, though), because these can be removed in later versions and will not be covered in the UPGRADE file.
If you have made in-depth modifications to the core Symfony code, you may find that some undocumented changes are needed. For instance, there is a custom error handler in my project, extending the Symfony error handler. Although it was not documented in the UPGRADE file, the signature of ErrorHandler::handle() was modified and needed to be updated in my custom handler.
Similarly, I had to modify some namespaces because files had been moved in the framework code.
The conversion is still ongoing and I'm currently experiencing a weird error I'm trying to get rid of: The 'request' scope on services registered on custom events generates errors in the logs.
We have encountered a problem using Spring Portlet MVC 3.1 when using multiple controller classes and the DefaultAnnotationHandlerMapping.
Background
We are using Spring Portlet MVC 3.1 with annotations for the Render & Action phases
We are using JBoss EPP 5.1.1
Issue
For a Portlet render request with params, an incorrect page is rendered in the portlet
Cause
Spring Portlet MVC is using a different @RenderMapping method than the expected method with the correct annotations
Technical Analysis
All our controllers contain @RenderMapping and @ActionMapping annotations, and all have “params” arguments to ensure that the expected method is invoked based on a parameter set in our portlet URLs. For default rendering, we have a method that has a @RenderMapping annotation with no “params” argument, which we use to render a blank JSP when the request contains no parameters.
Based on our reading of Chapters 7 and 8 in your book, we learned that the DispatcherPortlet tries to get the appropriate handler mapping for the incoming request and sends it to the appropriate method in the configured controller bean. Our assumption was that our default @RenderMapping annotation (with no params) would only be invoked after it had been checked that there are no other methods in the controllers with an annotation that matches the specific request parameters.
However, through debugging we have realised that this assumption is incorrect. The DefaultAnnotationHandlerMapping appears to traverse the available list of annotations in the controller beans in some pre-defined order. This means that if the controller bean with the default @RenderMapping annotation (with no params) appears earlier in the list, the method with the default @RenderMapping annotation (with no params) will be invoked rather than the correct one, which is further down the list.
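To make the setup concrete, here is a minimal sketch of the kind of controllers described above (class names, view names and parameter names are invented for illustration, not taken from the project):

import org.springframework.stereotype.Controller;
import org.springframework.web.portlet.bind.annotation.RenderMapping;

// Intended as the fallback: renders a blank JSP when no render parameters are present.
@Controller
class BlankPageController {

    @RenderMapping // no params attribute
    public String renderBlank() {
        return "blank";
    }
}

// Expected to win whenever the render parameter "myPage=payments" is set on the portlet URL.
@Controller
class PaymentsController {

    @RenderMapping(params = "myPage=payments")
    public String renderPayments() {
        return "payments";
    }
}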
Manifested Error
We are developing in a Windows environment and deploying to a Linux environment. In Windows we see that the handler cycles through the controller beans in alphabetical order, so we initially solved our problem by adding the @RenderMapping-annotated method with no params in the controller with the bean name closest to ‘Z’.
In Linux, however, it appears that the controller beans are detected in a different order. I have attached the Spring logs below to highlight the issue. The no-params @RenderMapping annotation is in the YourDetailsController, and as you can see in the Windows log it appears last in the list, whereas in Linux it doesn’t. This means that if we try to access one of the controllers that appears after the YourDetailsController in the list, we always end up hitting the no-params annotation in the YourDetailsController instead.
Questions
Is our assumption incorrect?
Does our diagnosis reflect expected behaviour? Or is it a bug with Spring Portlet MVC?
Is there a different way to get the annotations scanned to form the handler mapping bean list?
Would using xml configuration (instead of annotations) remove our problem?
Would we be able to define multiple handler mappings and order them so that the default handler mapping is the last one used by the dispatcher portlet?
Any thoughts or advice you have on this problem would be greatly appreciated.
Mike, I'm experiencing the exact same problem. I'm using JDK 7, Spring 3.1.1.RELEASE and Hibernate 4.1.3.Final. I'm developing on Linux (Fedora) and deploying on Linux (Fedora and SL).
I was stuck because I was sure the pieces (controllers) worked when tested one at a time, but together the call to a render request was randomly ignored. Sometimes changing something would make a render request work again, but they never all worked together.
As Walter suggested, I isolated the controller containing the default render mapping in its own package, left only the default render mapping in it (before, it also held the delete/view mappings), and split the component scan of controllers in the portlet's XML configuration in two, with the default controller's package scanned after the others. Suddenly everything works like a charm.
It would be interesting to see if this bug is in the Spring tracker...
I'd been bitten by this problem recently, so thought I'd add some additional information based on what I found.
In my case, my default controller (with empty @Controller and @ActionMapping annotations) was always getting invoked, even though there were more specifically annotated controllers/actions (such as @Controller(XXXX) or @ActionMapping(YYYY)). What made my case weirder was that it worked OK in Tomcat/Pluto, but not in WAS/WebSphere Portal Server.
As it turns out, there is a bug introduced in 3.1.x of Spring that means the annotation handlers aren't sorted properly. See https://jira.springsource.org/browse/SPR-9303 and https://jira.springsource.org/browse/SPR-9605. Apparently, this is fixed in 3.1.3.
The big mystery to me was why it worked in Tomcat but not WebSphere. The underlying cause is that Pluto (2.0.3) uses the Sun JRE 1.6.0 whereas WebSphere uses the IBM JRE 1.5.0. The two JREs have different implementations of Collections.sort() that produce a different output order when the elements being sorted report that they are equal (that is, compareTo() returns 0). Because of the above Spring bug (which reports some handlers as being equal when it shouldn't), the ordering of the handlers was non-deterministic across the two JREs.
So, in my case, the IBM JRE just happened to put the default controller as the very first element, and so it was picked up every time. One way to affect the ordering of "equal" handlers (where "equal" is a dodgy definition due to the Spring bug) is to change the order in which they are found by Spring, which affects the order of the input to the sort routine. That is why, per the above posts, moving the controller from the component scan to being explicitly listed in the XML config works. In my case, it was sufficient to make my default controller's package the last entry in my component scan; I didn't need to move it to the XML config.
Anyway, hope this helps shed a little more light on what is happening.
Response received from Ashish Sarin:
Hi Mike,
Though I haven't tested the exact same scenario that you are following in your project, I can say that it doesn't look like the right approach to defining your controllers. If your controllers only make use of @RenderMapping and @ActionMapping annotations, then it might be difficult for developers to find the exact controller responsible for handling an incoming portlet request. I would recommend that you make use of @RequestMapping at the type level to map a portlet request to a particular controller, and use request parameters to further narrow down the request to a particular method in the controller.
Let me know if you still face any issue with this approach.
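A minimal sketch of the layout Ashish recommends (the portlet mode, parameter names and view names here are placeholders, not taken from Mike's project):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.portlet.bind.annotation.ActionMapping;
import org.springframework.web.portlet.bind.annotation.RenderMapping;

// The type-level @RequestMapping ties this controller to the VIEW mode and, via params,
// to requests carrying myPage=details, so only this controller handles them.
@Controller
@RequestMapping(value = "VIEW", params = "myPage=details")
public class DetailsController {

    // Default render method for this controller.
    @RenderMapping
    public String show() {
        return "details";
    }

    // Narrowed further by an additional render parameter.
    @RenderMapping(params = "view=edit")
    public String edit() {
        return "editDetails";
    }

    // Action-phase handler, also narrowed by a request parameter.
    @ActionMapping(params = "action=save")
    public void save() {
        // persist the changes (omitted)
    }
}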
Mike, your description is exactly the same issue we are running into. In Windows, we implemented the same workaround (prefixed the name of the controller with the default rendering with a Z) and that solved it. That same code in a Linux environment has the same issues as yours. It looked like the methods that weren't getting picked might be ordered by some timestamp, but no luck going that route.
I assumed this was a spring bug.
I think the approach here is ok - we want different controllers handling different functions, but we want a default controller.
I just found one workaround for now. I moved the controller with the default rendering method to a different package so it is not included in the component-scan.
I add that controller manually (in the portletname-portlet.xml file) after the component-scan line, so it is added as the last controller.
We use context:component-scan (in nnn-portlet.xml) to divide the controllers and their default render mappings between portlets.
I know a bit about JDK and JRE source and binary compatibility (e.g. this and this), but I'm not sure about the following situation:
Consider I have an application which is compiled using JDK5 and runs on JRE6. It uses some libraries (jars) which are also compiled using JDK5.
Now I want to compile my application using JDK6. What new problems could arise at runtime in such a case (particularly in compatibility with the "old" jars)? Should I fully retest the application (touch every library), or can I rely on the promised JDK/JRE compatibility?
Normally no problems should arise if you set the JDK6 compiler options to use 1.5 source compatibility. However, this is not always true.
I remember once compiling 1.4 code with the 1.5 compiler (using 1.4 compatibility). The jars were OK (1.4 binary level), but the application crashed due to a funny conversion.
We constructed a BigDecimal passing an int as the argument to the constructor. The 1.4 version had only a constructor taking a double, but the 1.5 version had both the int and the double constructors. So the 1.4 compiler made the automatic conversion from int to double, but the 1.5 compiler saw that the int constructor existed and did not apply that conversion. Then, when running the supposedly binary-compatible code on a 1.4 JRE, the program crashed with a NoSuchMethodError.
I have to admit that it was a strange case, but it is one of those cases where logic does not work. So my advice is: if you plan to compile for older versions of the JRE, try to use the JDK of the target version whenever possible.
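Here is a small sketch of that BigDecimal case; which constructor gets resolved depends on the class library you compile against, not just on the -source/-target flags:

import java.math.BigDecimal;

public class BigDecimalOverload {
    public static void main(String[] args) {
        // Compiled against a 1.5+ class library, this resolves to BigDecimal(int),
        // which does not exist in the 1.4 library; on a 1.4 JRE the call would fail
        // with a NoSuchMethodError. Compiled against the 1.4 library, the int is
        // widened and BigDecimal(double) is called instead.
        BigDecimal implicit = new BigDecimal(42);

        // Spelling the conversion out avoids the ambiguity with any compiler.
        BigDecimal explicit = new BigDecimal((double) 42);

        System.out.println(implicit + " / " + explicit);
    }
}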
Unless you have changed your code to use new Java 6 features, there should be no issues.
With regard to the other jars, there should be no issues at all.
The JDK generally maintains backward compatibility.
Compatibility mostly works. I would not expect any issues to arise for you, aside from various warnings for e.g. not using generics. Maybe some rarely used APIs were deprecated, but I assume they were left in place, just marked as deprecated.
Just try it, if it compiles you should be fine.
A key design aspect of Java - unfortunately - is full backwards compatibility.
There are very few exceptions where backwards compatibility was not preserved; most prominently, Eclipse suffered when the sorting algorithm was changed from a stable to a non-stable sort (the order of objects that sort identically was no longer preserved), but that ordering guarantee was never part of the Java specification; relying on it was a bug in Eclipse.
It's unfortunate, because there were a few poor choices that now cannot be changed. Iterator should not have had a remove() method in the API; Vector should not have been synchronized (solved by adding ArrayList); StringBuffer should not have been synchronized, hence StringBuilder. String should probably have been an interface, not a class, to allow for e.g. 8-bit strings or 32-bit strings - CharSequence is the better string interface, but too many methods do not accept CharSequence and require returning a String. Observable should be an interface too: you cannot make an existing subclass observable with this API. To name a few. But because of backwards compatibility, these cannot be fixed anymore until maybe JDK modularization (at which point some can at least disappear into a donotuse module ...).
Of course you should already have thousands of unit tests to help you test with the new JDK... :-)
I have a previous project running Ninject 2.0 (runtime version 2.0), and now in a new project I am using the new Ninject and Ninject.Web.Mvc version 2.2 for runtime version 4.0.
Every single time I get the "no parameterless constructor" error:
InvalidOperationException
An error occurred when trying to create a controller of type 'HomeController'. Make sure that the controller has a parameterless public constructor.
What am I missing? All the bindings are registered.
Do I now need to define interfaces for controllers as well, such as IHomeController for HomeController, as I have seen in some examples, or do I go back to using the older version?
There is one version that does not show activation exceptions properly but shows this exception instead. Most likely the problem is a duplicated binding.
In addition to what Remo Gloor said, you might want to check that MVC is set up to use Ninject correctly. I was doing some things manually on an older version of the MVC plugin and ended up needing to just bite the bullet and make Global extend the NinjectHttpApplication class, which I had previously been avoiding.
The error you're getting is the error you would get if MVC tries using its built-in controller factory to produce controllers. So you may want to create a custom method binding on your controller class and put a breakpoint inside to make sure it's even being invoked.
You may also want to switch to version 2.3. You can pick up the latest builds of Ninject and all its extensions here.
I have seen this issue mentioned a couple of times on forums with no direct answer, so here is the solution to the above problem, i.e., working with the latest Ninject:
Download the latest Ninject from GitHub.
The Ninject I got for MVC2 is named Ninject.Web.Mvc2-2.2.0.0-release-net-4.0 (runtime version 4).
When adding references, add Ninject.Web.Mvc.dll (check that the version matches the above by right-clicking Properties in VS).
Then add Ninject.dll from the lib folder in the same parent folder (check the version as above).
Then add CommonServiceLocator.NinjectAdapter.dll from the extensions folder in the lib parent folder (check the version as above).
The missing link in all of this has been the CommonServiceLocator dll, and the versions must match. Try this if you are sure your bindings are correct (as mine were) and you have checked that your project works with the older version.
Thanks to everyone, and good luck :)