Difference Between axis2 and axis2-client Scope in WSO2 ESB Property Mediator - wso2-api-manager

I'm new to WSO2 ESB. I can't seem to find the difference between the axis2 and axis2-client scopes of the Property mediator.
Difference between
<property name="xyz" value="something" scope="axis2"/> and
<property name="xyz" value="something" scope="axis2-client"/>

There are five different scopes for the Property mediator, namely:
Synapse (Default)
axis2
axis2-client
Transport
Operation
To understand the difference between axis2 and axis2-client, you first have to know about the Synapse scope.
Properties set in the Synapse scope last as long as the request-response transaction exists.
The axis2 scope has a much shorter lifespan than Synapse and is mostly used to pass parameters to the Axis2 engine.
axis2-client is similar to Synapse. The difference is that properties in the axis2-client scope can be accessed inside the mediate() method of a custom mediator, which needs to be configured using the Class mediator (see the sketch below).
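For illustration, here is a minimal sketch of such a custom class mediator that reads the same property xyz from all three scopes. This is an assumption-laden example, not WSO2 sample code: the class name is made up, and it presumes the Synapse/Axis2 libraries are on the classpath.

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class ScopeDemoMediator extends AbstractMediator {

    public boolean mediate(MessageContext synCtx) {
        // Synapse (default) scope: read straight off the Synapse message context
        Object synapseValue = synCtx.getProperty("xyz");

        org.apache.axis2.context.MessageContext axis2Ctx =
                ((Axis2MessageContext) synCtx).getAxis2MessageContext();

        // axis2 scope: stored on the underlying Axis2 message context
        Object axis2Value = axis2Ctx.getProperty("xyz");

        // axis2-client scope: stored on the client Options of the Axis2 context
        Object clientValue = axis2Ctx.getOptions().getProperty("xyz");

        log.info("synapse=" + synapseValue + ", axis2=" + axis2Value
                + ", axis2-client=" + clientValue);
        return true; // continue the mediation flow
    }
}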

Related

Spring cloud stream kafka transactions in producer side

We have a Spring Cloud Stream app using Kafka. The requirement is that, on the producer side, a list of messages needs to be put on a topic in a transaction. There is no consumer for the messages in the same app. When I initiated the transaction by setting the spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix property, I got an error that there is no subscriber for the dispatcher and that the number of partitions obtained from the topic is less than the transactions configured. The app is not able to obtain the partitions for the topic in transaction mode. Could you please tell me if I am missing anything? I will post detailed logs tomorrow.
Thanks
You need to show your code and configuration as well as the versions you are using.
Producer-only transactions are discussed in the documentation.
Enable transactions by setting spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix to a non-empty value, e.g. tx-. When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction. When the listener exits normally, the listener container will send the offset to the transaction and commit it. A common producer factory is used for all producer bindings configured using spring.cloud.stream.kafka.binder.transaction.producer.* properties; individual binding Kafka producer properties are ignored.
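For example, enabling binder-level producer transactions could look like this in application.properties (a sketch; the tx- prefix and the acks setting are illustrative):

# enable producer-only transactions in the Kafka binder
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-
# producer settings for the shared transactional producer factory
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all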
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a @Scheduled method), you must get a reference to the transactional producer factory and define a KafkaTransactionManager bean using it.
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    return new KafkaTransactionManager<>(pf);
}
Notice that we get a reference to the binder using the BinderFactory; use null in the first argument when there is only one binder configured. If more than one binder is configured, use the binder name to get the reference. Once we have a reference to the binder, we can obtain a reference to the ProducerFactory and create a transaction manager.
Then you would just use normal Spring transaction support, e.g. TransactionTemplate or @Transactional, for example:
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }
}
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a ChainedTransactionManager.
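A sketch of that wiring, assuming the Kafka transaction manager from the @Bean above plus some other transaction manager (for example JPA) defined elsewhere; the bean and qualifier names are illustrative:

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class ChainedTxConfig {

    @Bean
    public PlatformTransactionManager chainedTransactionManager(
            @Qualifier("transactionManager") PlatformTransactionManager kafkaTm,
            @Qualifier("dbTransactionManager") PlatformTransactionManager dbTm) {
        // Transactions start in the given order and commit/roll back in
        // reverse order, so the Kafka transaction here commits last.
        return new ChainedTransactionManager(kafkaTm, dbTm);
    }
}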

Why set up Serialization Context in Corda off node?

Recently, when I wanted to sign something with a certificate outside a node, I got the exception below:
Caused by: java.lang.IllegalStateException: Expected exactly 1 of
{nodeSerializationEnv, globalSerializationEnv,
contextSerializationEnv, inheritableContextSerializationEnv} but got:
{}
https://github.com/corda/corda/blob/671a9d232cf1f29dbce4432bc91096ffd098a91c/core/src/main/kotlin/net/corda/core/serialization/internal/SerializationEnvironment.kt#L91
On debugging, I found that the object is first serialised and then signed. So I had to set up the serialisation context to get it serialised and signed.
I have a limited understanding of why this is required. I understand that different contexts are required for P2P and RPC calls, but I'm not entirely sure. Can someone please fill me in with some background?
The internal library you are using to sign the certificate requires the certificate to be serialised first. In turn, this requires you to specify a serialisation context. A serialisation context defines how serialisation is performed in various situations, such as P2P, client-side RPC, server-side RPC, storage and checkpointing.
Note that these serialisation contexts are set for you automatically when running a node or a suite of node tests. You are only encountering this issue because you are using an internal library outside the context where it is expected to be used.
In your case, you should probably use globalSerializationEnv, which is the serialisation environment used for mock nodes and nodes created using the node driver. nodeSerializationEnv is used by the node itself, and contextSerializationEnv and inheritableContextSerializationEnv are used for various platform tests.
For educational purposes, it can be helpful to look at how the node sets up its serialisation framework when it starts up (see https://github.com/corda/corda/blob/release-V3/node/src/main/kotlin/net/corda/node/internal/Node.kt):
nodeSerializationEnv = SerializationEnvironmentImpl(
        SerializationFactoryImpl().apply {
            registerScheme(KryoServerSerializationScheme())
            registerScheme(AMQPServerSerializationScheme(cordappLoader.cordapps))
        },
        p2pContext = AMQP_P2P_CONTEXT.withClassLoader(classloader),
        rpcServerContext = KRYO_RPC_SERVER_CONTEXT.withClassLoader(classloader),
        storageContext = AMQP_STORAGE_CONTEXT.withClassLoader(classloader),
        checkpointContext = KRYO_CHECKPOINT_CONTEXT.withClassLoader(classloader))

How to add another regulatory node and add some functionality to it in corda DLT?

I would like to add a new notary/regulatory node to my CorDapp, which should perform some extra validation checks when a transaction is completed between two parties, so that the notary/regulator performs some final checks and then stamps the transaction.
There are two options here:
Instead of using the default FinalityFlow to notarise, broadcast and record transactions, you can implement your own flow that performs some additional validation before the notarisation step. The limitation here is that the checks are not part of the notary service.
Create your own custom notary. Here, the custom validation checks happen within the notary service. The ability to do this is a recent change to the codebase; as such, the documentation has not been updated to reflect the changes. However, the source docs can be found on GitHub:
Instructions for creating a custom notary service: https://github.com/corda/corda/blob/9e563f9b98b79a308d68ecb01c80ce61df048310/docs/source/tutorial-custom-notary.rst
Sample custom notary service code: https://github.com/corda/corda/blob/9e563f9b98b79a308d68ecb01c80ce61df048310/docs/source/example-code/src/main/kotlin/net/corda/docs/CustomNotaryTutorial.kt
As Roger notes, you could customise FinalityFlow or implement your own notary.
An alternative would be:
Add a new node to the network representing some regulator
Write the contract rules so that the regulator is a required signer on transactions
Have the regulator do additional checking of the transaction in their flow before signing (a sketch of the contract rule follows this list)
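A minimal sketch of the contract-rule part in Java; the class name is illustrative and the regulator's PublicKey is assumed to be resolved elsewhere, e.g. from the network map:

import java.security.PublicKey;

import net.corda.core.contracts.CommandWithParties;
import net.corda.core.contracts.Contract;
import net.corda.core.transactions.LedgerTransaction;

public class RegulatedContract implements Contract {

    @Override
    public void verify(LedgerTransaction tx) {
        PublicKey regulatorKey = resolveRegulatorKey(); // assumed helper, see below

        for (CommandWithParties<?> cmd : tx.getCommands()) {
            // the regulator must be a required signer on every command
            if (!cmd.getSigners().contains(regulatorKey)) {
                throw new IllegalArgumentException(
                        "The regulator must be a required signer on the transaction");
            }
        }
    }

    // Hypothetical helper: look the regulator's key up however your network does it.
    private PublicKey resolveRegulatorKey() {
        throw new UnsupportedOperationException("resolve the regulator's key for your network");
    }
}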

Same stateless bean in two ears

I have the same EJB module, containing a bean, in one EAR that is the server side and in another EAR that is the client side.
Can I have this situation?
I'm asking because I'm getting this error (http://justpaste.it/gfs3) without understanding how to fix it.
You have the answer in the stack trace:
The short-form default binding 'com.demo.view.RitornaPersonaRemote'
is ambiguous because multiple beans implement the interface :
[RitornaPersonaSenzaClientEAR#RitornaPersonaSenzaClient.jar#RitornaPersona,
RitornaPersonaWebSenzaClientEAR#RitornaPersonaSenzaClient.jar#RitornaPersona].
Provide an interface specific binding or use the long-form default binding on lookup.]
If you are asking whether you may have the same EJB jar in multiple projects - the answer is yes, you can. However, during deployment you have to use the long-form JNDI name, provide a different JNDI name for the beans in the other module, or disable short names. You cannot register two beans under the same name.
The long name would be of the form RitornaPersonaSenzaClientEAR#RitornaPersonaSenzaClient.jar#com.demo.view.RitornaPersonaRemote
See detailed info here - EJB 3.0 and EJB 3.1 application bindings overview
UPDATE
To disable short names, perform the following steps:
Go to Application servers > server1 > Process definition > Java Virtual Machine > Custom properties
Define a new custom property com.ibm.websphere.ejbcontainer.disableShortDefaultBindings with value * to disable short bindings for all apps, or AppName1|AppName2 to disable short bindings only in selected apps.
Example default bindings are shown in SystemOut.log:
The binding location is: ejblocal:JPADepEar/JPADepEJB.jar/TableTester#ejb.TableTester
The binding location is: ejblocal:ejb.TableTester
The binding location is: java:global/JPADepEar/JPADepEJB/TableTester!ejb.TableTester
And with disableShortDefaultBindings property set there is no short form:
The binding location is: ejblocal:JPADepEar/JPADepEJB.jar/TableTester#ejb.TableTester
The binding location is: java:global/JPADepEar/JPADepEJB/TableTester!ejb.TableTester
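A client could then look the bean up by one of the remaining long-form names. A sketch based on the java:global binding from the log excerpt above (substitute your own EAR, module and bean names; a truly standalone client also needs the usual WebSphere JNDI environment settings):

import javax.naming.InitialContext;

public class LongFormLookupClient {

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // long-form java:global name taken from the SystemOut.log lines above
        Object tester = ctx.lookup("java:global/JPADepEar/JPADepEJB/TableTester!ejb.TableTester");
        System.out.println("Looked up: " + tester);
    }
}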
There is a bug in the documentation: the correct property is com.ibm.websphere.ejbcontainer.disableShortDefaultBindings, not com.ibm.websphere.ejbcontainer.disableShortFormBinding.
In my case: I had installed abc.ear and xyz.ear; both EARs were independent, with no dependency on each other.
I was calling abc.ear using a client lookup, but that was giving me:
com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object.
[Root exception is com.ibm.websphere.ejbcontainer.AmbiguousEJBReferenceException: The short-form default binding
'com.ejb.abc' is ambiguous because multiple beans implement the interface :
[xyz-ear#rabc-ejb-1.0.jar#abcInrerfaceImpl, rabc-ear#rabc-ejb-1.0.jar#abcInrerfaceImpl]. Provide an interface specific binding or use the long-form default binding on lookup.]
My solution was:
I removed the abc.jar that was inside the other application (xyz.ear):
C:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01\wstemp\92668751\workspace\cells\mypc00Node01Cell\applications\xyz-ear.ear
Then the client lookup worked fine.
To avoid this in the future, it is better practice to create a separate node on your IBM WAS server and install the applications on different nodes, so the application components will not interfere with each other.

Weblogic Stateless Session methods invoked from different cluster member servers

My environment is WebLogic 10.3.5 on a Solaris box. The EJB is version 3 and there are annotations in the bean class. Sorry for the confusion: the code is new to me, and they also have a deployment descriptor that generates EJB 2 client code for another client to call, so it's not straightforward.
I have a stateless session bean deployed to a cluster which has two server members, say member1 and member2.
The session bean is deployed as clusterable, since this is in the annotation:
homeIsClusterable = Constants.Bool.TRUE
This is how my standalone Java client looks up and calls the EJB methods:
private void testBean() {
    bean.methodA();
    bean.methodB();
}
I ONLY specify ONE server member in the provider URL:
env.put(Context.PROVIDER_URL, "t3://member1:7005");
// env is the Hashtable used to create the InitialContext
new InitialContext(env).lookup("remote#the.bean.qualified.remoteinterface");
The JNDI name above uses the mapped name plus the qualified remote interface class name; the mapped name is defined in the annotation.
Now the problem is: I found out that bean.methodA() gets invoked on member1 and methodB() gets invoked on member2. I can see this in the logs of each server member, and it is always like this: the member1 log only shows debug information from methodA, and member2 only shows debug information from methodB.
So here is my conceptual question: is this possible at all? Are the above two methods supposed to be called on member1 only? I know that when you look up through the home interface you could get a bean from either server, but in this case the EJB 3 lookup is not going through the home interface (as in EJB 2, where we get a home and then call the create method) but directly getting a remote object.
This causes an issue, as our methodB has a dependency on methodA (methodA does some cleanup, and methodB then re-initializes the cache), and we need to do this on each cluster member.
This is just extra info, but please focus on the above question from a conceptual perspective.
From the documentation:
When home-is-clusterable is True, the EJB can be deployed from multiple WebLogic Servers in a cluster. Calls to the home stub are load-balanced between the servers on which this bean is deployed, and if a server hosting the bean is unreachable, the call automatically fails over to another server hosting the bean.
I believe this is the case even when you explicitly only connect to a single member. This has some pretty good info in the Replica-Aware Home section:
http://www.informit.com/articles/article.aspx?p=101737&seqNum=8
It's more or less the whole point of clustering... a cluster appears as if it's a single server instance to a client.
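As an illustration of that per-call load balancing (a sketch mirroring the bean in the question; all names are made up, and this only keeps dependent calls together rather than running them on every member): if methodB must always see the effects of methodA, expose them as a single remote method so both run on whichever member serves the call.

import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
interface CacheMaintenanceRemote {
    void cleanupAndReinit();
}

@Stateless
public class CacheMaintenanceBean implements CacheMaintenanceRemote {

    private void methodA() { /* cleanup, as in the question */ }

    private void methodB() { /* re-initialize the cache */ }

    // A single remote call cannot be split across members, so both
    // steps execute on the same server instance.
    public void cleanupAndReinit() {
        methodA();
        methodB();
    }
}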
