JDK, JRE and JAR compatibility

I know a bit about JDK and JRE source and binary compatibility (e.g. this and this), but I'm not sure about the following situation:
Suppose I have an application which is compiled using JDK5 and runs on JRE6. It uses some libraries (jars) which are also compiled using JDK5.
Now I want to compile my application using JDK6. What new problems could arise at runtime in such a case (particularly in compatibility with the "old" jars)? Should I fully retest the application (touching every library), or can I rely on the promised JDK/JRE compatibility?

Normally no problems should arise if you set the JDK6 compiler options to 1.5 source compatibility. However, this is not always true.
I remember once compiling 1.4 code with the 1.5 compiler (using 1.4 compatibility). The jars were OK (1.4 binary level), but the application crashed due to a funny conversion.
We constructed a BigDecimal, passing an integer as the argument to the constructor. The 1.4 version had only a constructor from double, but the 1.5 version had both the int and the double constructors. When compiling with the 1.4 compiler, the compiler applied the automatic conversion from int to double; the 1.5 compiler instead resolved the call to the int constructor and did not apply that conversion. Then, running the supposedly binary-compatible code on a 1.4 JRE, the program crashed with a NoSuchMethodError.
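A minimal sketch of that situation (taking the constructor set described above as given; both compilers were run in 1.4 compatibility mode, i.e. -source 1.4 -target 1.4):

import java.math.BigDecimal;

public class ConversionDemo {
    public static void main(String[] args) {
        // The 1.4 compiler rewrites this call as new BigDecimal((double) 42),
        // because BigDecimal(int) does not exist in its class library.
        // The 1.5 compiler resolves it to BigDecimal(int) instead, which is
        // missing on a 1.4 JRE, so the call fails to link at runtime.
        BigDecimal value = new BigDecimal(42);
        System.out.println(value);
    }
}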
I have to admit that it was a strange case, but it is one of those cases where logic does not work. So my advice is: if you plan to compile for older JRE versions, use the JDK of the target version whenever possible.

Unless you have changed your code to use new Java 6 features, there should be no issues.
With regards to other jars there should be no issues at all.
JDK always maintains backward compatibility.

Compatibility mostly works. I would not expect any issues to arise for you, aside from various warnings, e.g. for not using generics. Maybe some barely-used APIs were deprecated, but I assume they were left in place, just marked as deprecated.
Just try it, if it compiles you should be fine.
A key design aspect of Java - unfortunately - is full backwards compatibility.
There are very few exceptions where backwards compatibility was not preserved; most prominently, Eclipse suffered when the sorting algorithm was changed from a stable to a non-stable sort (the order of objects that sort as equal was no longer preserved). But that ordering was never part of the Java specification; it was a bug in Eclipse.
It's unfortunate, because there were a few poor choices that now cannot be changed. Iterator should not have had a remove() function in the API; Vector should not have been synchronized (solved by having ArrayList now); StringBuffer should not have been synchronized, hence StringBuilder. String should probably have been an interface, not a class, to allow for e.g. 8-bit strings and 32-bit strings - CharSequence is the better string interface, but too many methods do not accept CharSequence and require returning a String. Observable should be an interface too: you cannot make a subclass observable with this API. To name a few. But because of backwards compatibility, these cannot be fixed anymore, until maybe JDK modularization (at which point some can at least disappear into a donotuse module ...).
Of course you should already have thousands of unit tests to help you test with the new JDK... :-)

Related

Tomcat 9 and Servlet API 2.5

My Spring web application is using Servlet API 2.5 along with Spring Framework 4. It's deployed in Tomcat 9, and it's working fine. I am not sure why Tomcat is not complaining about it, as it needs Servlet API 4 per the documentation. Is it backward compatible, or is Spring doing some magic? Just for clarity: we are using interfaces from Servlet API 2.5 in our code, which should not compile against Servlet API 4. It compiles because we compile against 2.5, but we expected it to fail at runtime in Tomcat. Thanks
Are you using methods/classes that no longer exist in servlet spec 4? There isn't magic - it's backward compatible. If you tried to run, for example, AsyncEvent code in Tomcat 5.5 (servlet spec 2.4), then yes, you'd have a problem. But running old code on a new server - without using things that have totally changed or disappeared - is rarely a problem.
EDIT
You said that the "old" code has:
Map parameterMap = request.getParameterMap();
This is still valid, if not best practice. With newer compilers you may get a warning that you are not using generics, but it is still valid. And the Javadocs from that time say:
Returns: an immutable java.util.Map containing parameter names as keys and parameter values as map values. The keys in the parameter map are of type String. The values in the parameter map are of type String array.
So your old code must be casting the keys to String and the values to String[] - just like the newer code, which would be:
Map<String, String[]> parameterMap = request.getParameterMap();
Both versions will compile in recent versions of the compiler, though, again, you may get warnings for the code that doesn't take advantage of generics.
The key is the use of generics - the <String, String[]> part. The code is clearer when using generics, and the compiler can help catch issues. With the non-generics version you could have tried to cast a key in the map to any object; it would compile, but at runtime you'd likely get a ClassCastException.
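For illustration, here is a sketch of the two styles side by side (hypothetical helper methods, assuming an HttpServletRequest; against the raw Map of the Servlet 2.5 API the generic assignment produces an unchecked warning but compiles):

import java.util.Iterator;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;

public class ParameterDump {
    // Pre-generics style: the casts are written by hand and the compiler cannot check them.
    static void oldStyle(HttpServletRequest request) {
        Map parameterMap = request.getParameterMap();
        for (Iterator it = parameterMap.keySet().iterator(); it.hasNext();) {
            String name = (String) it.next();                    // manual cast
            String[] values = (String[]) parameterMap.get(name); // manual cast
            System.out.println(name + " -> " + values.length + " value(s)");
        }
    }

    // Generics style: the same call, but type mistakes become compile-time errors.
    static void newStyle(HttpServletRequest request) {
        Map<String, String[]> parameterMap = request.getParameterMap();
        for (Map.Entry<String, String[]> e : parameterMap.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue().length + " value(s)");
        }
    }
}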
From the perspective of Tomcat it's still returning the same thing.
Is that clearer?

Difference between Newtonsoft Json DynamicValueProvider and ReflectionValueProvider?

I'm in the process of porting an ASP.NET Core website targeting the full framework to a website that targets ASP.NET Core 3.
In that process I have hit a snag. The website references the Newtonsoft.Json 11.0.3 NuGet package and, among other things, uses the Newtonsoft.Json.Serialization.DynamicValueProvider class.
Interestingly, that class exists when targeting the full framework but does not exist when targeting netcoreapp3.1, so Visual Studio produces compilation errors stating that the class doesn't exist. At first that seemed crazy to me, but I checked the source code for the class, and sure enough it contains the following conditional compilation directive wrapped around the whole class:
#if HAVE_REFLECTION_EMIT
Apparently, with the .NET Standard 2.0 DLL in the NuGet package that my netcoreapp3.1 project would use, this conditional compilation excludes the DynamicValueProvider class.
So I did some poking around in the Newtonsoft.Json.Serialization namespace, and I see that there is a ReflectionValueProvider class available that contains no such conditional compilation and is available when targeting netcoreapp3.1.
I've looked at the source code for both the DynamicValueProvider class and the ReflectionValueProvider class, and I'm unclear on the difference. Both appear to get or set the value of a property or member based on the MemberInfo passed into the constructor. Both appear to use reflection to accomplish their work. As I mentioned, DynamicValueProvider apparently needs Reflection.Emit support and ReflectionValueProvider does not; as best I can tell, Emit support is used to emit IL at runtime.
So I wonder if perhaps the two are drop-in replacements for each other, except that DynamicValueProvider might be faster, since it apparently leverages IL emitting. But that's just a hunch. I'd prefer to have a more concrete understanding of the differences between the two classes before I start swapping one for the other in this existing codebase as a way to get to .NET Core 3.
Can you provide me with better insight into the differences between the DynamicValueProvider class and the ReflectionValueProvider class, or at least confirm my hunch?
We had updated Newtonsoft.Json from 9.* to 12.0.3 and observed performance degradation on paths that include JSON serialization. All paths led to DynamicValueProvider. Fortunately, we had a global descendant of DefaultContractResolver, and I was able to override the CreateMemberValueProvider method to return a ReflectionValueProvider.
For now we are continuing to test the new version, but I can say that, from a performance perspective, ReflectionValueProvider works faster for us than DynamicValueProvider.
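A minimal sketch of that override (the resolver name here is hypothetical; DefaultContractResolver.CreateMemberValueProvider and ReflectionValueProvider are part of the public Newtonsoft.Json API):

using System.Reflection;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

// Hypothetical resolver that forces the reflection-based provider everywhere.
public class ReflectionOnlyContractResolver : DefaultContractResolver
{
    protected override IValueProvider CreateMemberValueProvider(MemberInfo member)
    {
        // Bypass DynamicValueProvider and its IL-emit code path entirely.
        return new ReflectionValueProvider(member);
    }
}

// Usage: pass the resolver in via the serializer settings.
// var settings = new JsonSerializerSettings
// {
//     ContractResolver = new ReflectionOnlyContractResolver()
// };
// string json = JsonConvert.SerializeObject(myObject, settings);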
I think there is a correlation with the fact that .NET Standard 2.0 is also used to build Xamarin.Forms applications for iOS, which require AOT compilation.
As written here:
Limitations of Xamarin.iOS
"Since applications using Xamarin.iOS are compiled to static code, it is not possible to use any facilities that require code generation at runtime."
and
"No Dynamic Code Generation.
The System.Reflection.Emit is not available"
For example, System.Text.Json has this limitation and actually cannot be used in Xamarin.Forms projects for iOS. More info here:
System.Text.Json Serializer does not appear to work on Xamarin iOS

Running Ada on the Zynq using a Digilent Zybo development board

I've been successfully using Vivado and the SDK to develop VHDL and C for the Zynq XC7Z010 on a Digilent Zybo board. I've also been using the GNAT GPS IDE to learn Ada targeted to an STM32F4 processor (using one of the supported development boards).
GPS also ships with a set of zynq7000 run-times, targeted (as far as I can tell) at the XC7Z020. Having looked through the BSPs for this target, I believe the generated code should also run on the XC7Z010, as the ARM cores appear to be the same. It may turn out that there are differences, in which case I will have a go at building a specific run-time based on the existing zynq7000 BSP (AdaCore have documented this process and give an example of generating a new STM32F4 BSP).
My main problem is that I'm not sure how to load and run the generated Ada ELF file on my Zybo. I have tried to generate a BOOT.bin file containing an FSBL (built with the SDK, using my exported hardware from Vivado), a bit-stream, and the Ada ELF file. (The Zybo has a MicroSD interface that can be configured as a boot device; this works perfectly with a bit-stream and a C ELF produced via Vivado / SDK.)
Anyway, this didn't work... I'm guessing it might be a linking issue, a boot loader issue, or similar. With my current level of knowledge I'm just not sure at this stage.
Any advice or pointers would be greatly appreciated!
It turns out that my BOOT.bin was fine; the problem was related to accessing custom AXI registers defined in my bit-stream. If I remove these references from the Ada code, the generated ELF file works perfectly - for example, printing over the Zybo's VCP using Text_IO.Put_Line, using the Ada run-time delay and Clock operations, etc.
For some reason the AXI interface isn't working when I boot an Ada ELF file. If I substitute this for the equivalent C, then all is well.
This particular problem is currently unresolved, but not related to my original question!
(It might be that the Ada run-time is relying on the FSBL or U-Boot to have initialised this; I'm not sure. Feel free to comment if you know, and I'll also add a comment when I resolve this.)
**** Update ****
Here is some additional background and a description of what I had to do to get my custom AXI IPs to work.
The provided AdaCore BSP (Board Support Package, used to build the run-time) is targeted at the Xilinx ZC702 development board. I'm using a Digilent Zybo (the older version). The two boards use different Zynq parts: the ZC702 is based on an XC7Z020, and the Zybo uses an XC7Z010 (there is a newer version with an XC7Z020 option).
I followed the AdaCore instructions (available on their web site) and built a BSP specifically for the Zybo. Initially I just updated the clock details as the Zybo runs at a different speed and then verified that the Ada delay function worked correctly (provided as part of the Ravenscar run-time built from the updated BSP). However, my custom AXI IPs still didn't work...
To cut a long story short, the Ada run-time contains an assembly file called start-ram.S that, amongst other things, sets up the MMU. There is an include file called memmap.inc that contains the actual MMU page definitions as a series of .long directives. I had to update the AXI_GP0 address entry by editing the relevant directive to:
.long 0x43c10c16 # for 0x43c00000, axi_gp0
Previously it was set to 0x00000000 # for 0x43c00000, *none*. These entries are decoded within start-ram.S and then used to configure the MMU (the top 12 bits select the page and the remaining bits are chopped up and used as page configuration).
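In other words, the edit looks roughly like this (a sketch of the memmap.inc layout as described above; the neighbouring entries are hypothetical):

# memmap.inc (excerpt): one .long section descriptor per MMU page.
# The top 12 bits give the physical page base; the low bits are page config.
.long 0x00000000    # for 0x43b00000, *none* (unmapped, faults on access)
.long 0x43c10c16    # for 0x43c00000, axi_gp0 (was 0x00000000, *none*)
.long 0x00000000    # for 0x43d00000, *none* (unmapped, faults on access)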
So, once I had edited this file in my Zybo BSP and re-built the run-time, the IPs became accessible from the PS and worked as expected. This all took a while to figure out, but it was worth it, as I learned loads whilst exploring the dead ends!
I hope this helps someone in the future. I also highly recommend Ada for Zynq development, especially if you ultimately need DO-178 certification or similar.

Can QML caching in Qt 5.8 be disabled for a particular project?

Qt 5.8 was supposed to come with an optional-use ahead-of-time QtQuick compiler; instead it arrived with a sort-of-a-JIT compiler, a feature that's enabled by default and caches compiled QML files on disk in order to improve startup performance and reduce memory usage.
The feature, however, arrived with serious bugs which greatly diminish, or in my case even completely negate, its benefits: I didn't have a problem with startup times to begin with, and testing didn't reveal any memory usage improvements whatsoever.
So what I would like to do is opt out of that feature in my project, but I don't seem to find how to do that. Going back to Qt 5.7.1 is not an option since my project relies on other new features, introduced with 5.8.
Add QML_DISABLE_DISK_CACHE (set to 1) to your environment variables. You should be able to do it inside your application via qputenv -- put it somewhere in main before loading QML content.
Credit to peppe for informing us of the environment variable, but qputenv() takes a QByteArray as the value parameter, so a bare 1 won't work.
The two options that work:
qputenv("QML_DISABLE_DISK_CACHE", "1"); // or
qputenv("QML_DISABLE_DISK_CACHE", "true");
This successfully disables the cache and prevents the associated bugs from manifesting.
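For context, a minimal sketch of where the call belongs (assuming a typical Qt Quick main.cpp; the QML file path is hypothetical):

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QUrl>
#include <QtGlobal>

int main(int argc, char *argv[])
{
    // Must run before any QML content is loaded.
    qputenv("QML_DISABLE_DISK_CACHE", "1");

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}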

I want my DotNetNuke modules to work under as many versions as possible while avoiding assembly binding redirection

I am developing DotNetNuke modules and naturally want them compiled before installing or distributing them. In the past I've simply referenced a specific version of DotNetNuke.dll by browsing to the /BIN folder of a local DotNetNuke installation.
This reference allowed me to use the DNN base classes and create my own set of classes upon them. I also use various helper methods throughout the DNN namespaces/classes that I require (i.e. make derived classes from their PortalModuleBase and ModuleSettingsBase, and use their Localization classes, which replace those provided by Microsoft's ASP.NET implementation).
I've been able to get away with this approach making that direct DLL reference (Copy Local = True, Specific Version = False) because until now I've been installing these modules onto client websites that I maintain. As such, I've kept them on at least the version of DotNetNuke I've been developing on - or newer. Most recently, I was referencing 6.1.3.108 in development.
NOTE: This automatically copies in the following associated DLLs into the /BIN directory of my modules:
DotNetNuke.dll
DotNetNuke.Instrumentation.dll
dotnetnuke.log4net.dll
DotNetNuke.Services.Syndication.dll
DotNetNuke.Web.Client.dll
DotNetNuke.WebControls.dll
DotNetNuke.WebUtility.dll
Installing this onto a DotNetNuke site of a NEWER version worked fine, which isn't a bad start.
What I've been wondering, though, is whether there is a non-hackish way of making my modules insensitive to the minor, build, or revision levels of the DLL.
I realize that it makes it my responsibility to ensure the product (if developed on a "mid range" version) still works on slightly earlier as well as newer versions of the product. That said, I feel I can do thorough testing across those builds. To me this is preferential to having to run the OLDEST major build in development.
Put another way, I'd rather not develop against references to 6.0.0.0 just so it works on 6.x.x.x without extra effort. I'll only do that if no one has a brilliant way for me to make a reference to, say, 6.1.3.108 work on slightly earlier or later versions. (Naturally I'm okay with having to make a different module for major version changes, such as 5.x.x.x or 7.x.x.x.)
Thanks in advance!
Instead of referencing the assemblies in the bin folder, keep a copy of DotNetNuke.dll (and any other references) with your source code, and reference it there. Put the oldest supported version there, but develop on a newer site. Set Copy Local=False on the reference so you don't overwrite the newer version, and you should be fine.
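In old-style project-file terms, that reference looks roughly like this (a sketch; the HintPath is hypothetical):

<Reference Include="DotNetNuke">
  <HintPath>..\References\DotNetNuke.dll</HintPath>
  <SpecificVersion>False</SpecificVersion>
  <Private>False</Private> <!-- Private maps to Copy Local -->
</Reference>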
In this way, we're able to reference DNN 4.5.3 while developing a module that runs on DNN 6.1.x. I've been using this method for years without any significant problems (except when I occasionally forget to turn off Copy Local and my DNN site mysteriously blows up).
Regarding determining the version of DNN in a class you've subclassed from a DNN one:
Here's what I would do, assuming YourClass inherits from DNNClass but, because you have referenced an earlier version of the assembly, a property 'NewProp' doesn't exist:
public class YourClass : DNNClass
{
    public string NewPropSubstitute
    {
        get
        {
            // Fallback value for DNN versions that lack NewProp.
            string newPropVal = "your default if earlier DNN";
            // Look the property up at runtime instead of binding at compile time.
            System.Reflection.PropertyInfo pi = this.GetType().GetProperty("NewProp");
            if (pi != null)
                newPropVal = (string)pi.GetValue(this, null);
            return newPropVal;
        }
    }
}
That's a made-from-memory guess, so it might not compile, but you get the idea. You don't necessarily have to get the DNN version - just try to get the property through reflection; if it's there, implicitly you've got the right version.
Of course this method assumes you can substitute in a value for a later-DNN property (or method) if the DNN version doesn't support it. But that all depends on what you're trying to do.
If you do want to find the DNN Version (version safe and always correct) you can use the code for that which is embedded in my version-safe jQuery inclusion code, linked from this blog post:
Using jQuery in DotNetNuke 5 and 6
