Decorator and decorated classes are in different bean archives - decorator

My EAR application, which runs on JBoss AS 7.1.0.Final, consists of two jars:
lib/one.jar
lib/two.jar
Both jars are CDI bean archives.
The two.jar depends on one.jar.
The class being decorated is in one.jar.
The decorator class is in two.jar.
If the decorators are declared in beans.xml of two.jar, they are not enabled.
Does this work as expected?
Since one.jar is developed independently of two.jar and has no dependency on it, I expect the decorators to be declared (enabled) in the archive where the decorator classes live.
How can I enable a decorator class without changing the archive that contains the classes being decorated?

According to the spec, yes, this is expected behaviour. CDI 1.1 is hoping to make this easier, or at least clear it up a little. Any interceptor, decorator, or alternative you want to use must be enabled, via beans.xml, in the archive in which you wish to use it.
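Concretely, that means the enabling entry has to live in one.jar's beans.xml, next to the beans being decorated, even though the decorator class itself ships in two.jar. A minimal sketch, assuming a hypothetical decorator class com.example.two.AuditDecorator:

```xml
<!-- one.jar!/META-INF/beans.xml — the archive whose beans are decorated -->
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                           http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
    <decorators>
        <!-- hypothetical decorator class that lives in two.jar -->
        <class>com.example.two.AuditDecorator</class>
    </decorators>
</beans>
```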


java.lang.LinkageError: loader constraint violation: loader

I am getting the error below while connecting using HttpPost:
Caused by: java.lang.LinkageError: loader constraint violation: loader
(instance of org/jboss/osgi/framework/internal/HostBundleClassLoader)
previously initiated loading for a different type with name
"org/apache/http/client/methods/HttpPost"
I am using an OSGi bundle and have added all the required dependencies.
Can anyone help me resolve this?
The Java language is based on a single namespace. That is, the language is built around the concept that a class name is used only once. Class loaders were designed to load code over the internet, but they accidentally allowed the same class name to be used by two class loaders.
In OSGi, each bundle has a class loader that directly loads the classes from its own bundle but uses the class loader of other bundles for any imported classes.
In such a mesh of class loaders, you can get the situation where a class C from bundle A references a class X and a class Y loaded from other class loaders. Since X and Y have different names, that is fine. However, X could refer to a class Z, and Y could refer to a class Z from a different loader. The original class C from bundle A can then see Z from two different class loaders, and that is a linkage error.
This mesh of class loaders works very well when all bundles are correct; you should never get this kind of error when you do not hack your bundles. These errors are inevitably caused by complex setups that do not follow the OSGi rules and that maintain the bundle's manifest by hand.
In this case, the class name that can be seen multiple times is org.apache.http.client.methods.HttpPost. So you have a setup where multiple bundles export this class, and that is the first place to look. OSGi has special metadata, the so-called uses constraints, that lets this error be detected before the bundle is started; since you could start the bundle, that metadata must be wrong.
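For illustration, correct metadata has every consumer import the package rather than embed its own copy, and has exporters declare uses constraints so the resolver can wire all parties to the same provider of HttpPost. A sketch (bundle names and the version range are hypothetical); the first stanza is a consumer bundle importing the package, the second is a library whose exported API mentions HttpPost in its signatures and therefore records that with a uses directive:

```text
Bundle-SymbolicName: com.example.consumer
Import-Package: org.apache.http.client.methods;version="[4.0,5)"

Bundle-SymbolicName: com.example.api
Export-Package: com.example.api;uses:="org.apache.http.client.methods"
```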
On Apache Felix, you get an extensive analysis of the problem, so if you could run your code on Apache Felix, that would be the easiest route. Looking at your error, you seem to be running on JBoss. JBoss has always played a bit loose with the OSGi rules to make it easier to run enterprise software, software that rarely does the work to provide OSGi metadata and is well known for its class loader hacks. (Many people have only started to understand, since the Java Module System, what OSGi was doing and why it was needed.)

Why is accessing services by class name good?

I've always worked in Symfony 2.2 and 2.8
Now, glancing at the blog posts for version 3.3, I can't get past the new feature of fetching services by (fully qualified) class name:
// before Symfony 3.3
$this->get('app.manager.user')->save($user);
// Symfony 3.3
$this->get(UserManager::class)->save($user);
Before Symfony 3.3, the service container was a factory that knew how to instantiate the service class, with the great benefit of a factory: you can switch the old class to any other class, and as long as both implement, say, the same interface, you may not have to touch anything else in your code. But if you ask for a service by class name, you have to refactor your whole codebase and change the class name at every occurrence of the service, whether you access it directly via $container->get() or via type hints and autowiring.
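For what it's worth, class-name ids do not have to rule out swapping implementations: you can keep the type hint (and the id) on an interface and alias it to a concrete class in configuration, so the swap remains a one-line change. A sketch, assuming hypothetical App\ classes:

```yaml
# services.yml (Symfony 3.3 syntax; the class names are hypothetical)
services:
    App\Manager\DoctrineUserManager: ~

    # anyone asking for UserManagerInterface, by type hint or get(),
    # receives the Doctrine implementation; swap it in this one place
    App\Manager\UserManagerInterface: '@App\Manager\DoctrineUserManager'
```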
Considering this, the old way of service aliasing seems much more practical, or am I just missing something? I know you can still use it the old way; I'm just wondering in which cases one could benefit from preferring this new method over the classic one.
One of the main points of the new style of service naming is to avoid asking for services by name directly from the container. By type-hinting them all (and having them created by the framework instead), any service that is not being used at all (and is private, so not get-able) can be removed. Anything that is hinted via the container but does not exist will fail immediately at the container-building step as well; it would not be possible to rename a service and forget to also change all its other uses.
With so much being injected into the controllers (or a controller action) as well, unit testing the services, and even the controllers, becomes more controllable: there is nothing else being pulled from the container within a test.
As for the transition, because of the container compilation step, Symfony is very good at telling you whether anything is wrong, or at least deprecated. It's not hard to mark all the current services as public with just a small YAML snippet at the top of each services.yml file (or wherever else they are defined).
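The snippet in question is the `_defaults` block introduced in 3.3 (a sketch; it applies to the services defined in the same file):

```yaml
# services.yml — keep every service in this file public during migration
services:
    _defaults:
        public: true
```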
It will be a while until most of the third-party bundles and other supporting libraries are fully 4.0 compatible, but that was also the case for the breaking changes between 2.8 and 3.0. This is a large part of the reason why 2.8 and now 3.4 are long-term-supported versions; 3.4 is supported until November 2021, giving plenty of time to upgrade away from the deprecations and update the third-party bundles.

Does Glassfish resolve the path to libraries correctly?

I am interested in this question due to the problem I described here. How does Glassfish look for the required classes anyway? Suppose there are two libraries in the application's pom.xml (in dependencies): one is declared with scope provided, the other with the default scope.
Therefore, I have two libraries: A.jar is in the Glassfish lib folder, and B.jar is in WEB-INF/lib of the WAR module that I deploy.
What is the order of resolving the dependencies here? I assume that:
First, look in the WEB-INF/lib folder for a jar containing the class.
Then look in the Glassfish/lib folder for a jar containing the class.
Is that correct? Is a configuration where a class in A.jar references a class in B.jar legal, and vice versa?
To be more specific, I have Glassfish 2.1.
According to the class loader documentation for GF2, I would say it is the other way around.
Note that the class loader hierarchy is not a Java inheritance hierarchy, but a delegation hierarchy. In the delegation design, a class loader delegates class loading to its parent before attempting to load a class itself. If the parent class loader cannot load a class, the class loader attempts to load the class itself. In effect, a class loader is responsible for loading only the classes not available to the parent. Classes loaded by a class loader higher in the hierarchy cannot refer to classes available lower in the hierarchy.
note: Related documentation for GF3.1 is here and here
However, you may influence the behavior through a Glassfish-specific deployment descriptor with
<class-loader delegate="true/false"/>
You can find more about it by following the first link.
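For a GF2 web module, the element goes into sun-web.xml (a sketch; delegate="false" asks the web application class loader to try WEB-INF/lib before delegating to the server's loaders, within what the servlet spec permits):

```xml
<!-- WEB-INF/sun-web.xml -->
<sun-web-app>
    <!-- false: prefer classes bundled in WEB-INF/lib over server libraries -->
    <class-loader delegate="false"/>
</sun-web-app>
```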

How do I use a custom JsonConverter with NServiceBus?

I have a class that needs a custom JsonConverter to properly be deserialized. It is a value object (in DDD terminology), and I need to include it in a message being sent with NServiceBus when using Json serialization.
The problem is that, since NServiceBus internalizes its copy of Json.Net, I have to use the JsonConverter base class included in NSB, but it was marked internal during the merge.
This effectively prevents you from hooking any custom serialization code into NSB. Is this by design? Is there a recommended workaround?
If you are having problems with the merged assemblies, you can find the core-only assemblies on the downloads page on GitHub.

ASP.NET and Unity

First of all, is there a complete reference on Microsoft Unity?
I noticed today that when I call Configure on the UnityConfigurationSection, it configures and prepares all configuration mappings.
What if a class has a dependency on an object registered inside Unity? Does this class itself need to be registered with Unity so that Unity injects its dependency?
I am afraid that Unity would not inject a dependency into an object if that object is not registered with Unity. This is the case with the Page class in ASP.NET.
Thanks
Unity has some well-defined default behavior when working with classes that aren't registered ahead of time.
In the absence of a registration, the container will look for the longest constructor, and it will also look for attributes on the type ([Dependency] being the main one) to figure out which properties to inject.
If you don't want to use the attributes, or the defaults don't match what you want, you'll need to configure the container to do what you want.
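For example, an explicit mapping in the configuration section might look like the following (the type names are hypothetical; the schema is the Unity 2.x configuration schema):

```xml
<!-- app.config / web.config -->
<configuration>
  <configSections>
    <section name="unity"
             type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <container>
      <!-- map the interface to a concrete implementation;
           resolving ILogger now yields FileLogger -->
      <register type="MyApp.ILogger, MyApp" mapTo="MyApp.FileLogger, MyApp" />
    </container>
  </unity>
</configuration>
```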
