AutoMapper and Quartz.Net server - missing type map configuration - automapper-3

I'm attempting to use AutoMapper within Jobs hosted by Quartz.Net server.
At service startup, I load all the mapping profiles, one of which has:
Mapper.CreateMap<Data.Models.ManufacturerAlias, Business.Models.ManufacturerAlias>();
In the Job, I call the Map<>, but I get the following error:
Exception: AutoMapper.AutoMapperMappingException: Missing type map configuration or unsupported mapping.
Mapping types:
ManufacturerAlias -> ManufacturerAlias
SmartBIM.Data.Models.ManufacturerAlias -> SmartBIM.Business.Models.ManufacturerAlias
Destination path:
ManufacturerAlias
Source value:
SmartBIM.Data.Models.ManufacturerAlias
Mapper.AssertConfigurationIsValid() is not giving me any exceptions.
Is this a threading issue? Do I need to load the profiles in Job.Execute()?
Thanks :)

Yes, it's a threading issue. We had a similar problem: the static methods of AutoMapper are not thread-safe.

Related

Can we use standalone Spring Cloud Schema Registry with Confluent's KafkaAvroSerializer?

I have a project using Spring Cloud Stream with the Kafka Streams binder. For the output of a stream, I am using Avro, with the Serde provided by Confluent (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde).
I am able to use it with the Confluent Schema Registry; serialization and deserialization take place correctly.
However, I wanted to see if we can use the Spring Cloud Schema Registry Server instead of the Confluent one. I configured a standalone Schema Registry server and set the schema registry in my project to it (changed the schemaRegistryClient.endpoint and schema.registry.url properties).
When I tried it out, Spring Cloud seemed to work with the standalone server: it registers the schema available in the resources folder as a .avsc file. However, when I send a message, the Confluent serializer continues to treat the server as a Confluent Schema Registry (which has different REST endpoints from the Spring Schema Registry). As a result, it gets a 405 response code.
We get the following exception (partial stack trace):
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: <my-avro-schema>
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230)
It seems to me that there are two possibilities:
Spring Schema Registry Server can work only with the content-type provided by Spring (specified as content-type: application/*+avro) and not with the native Serde provided by Confluent, or
There is an issue with the project configuration.
Can someone help me figure out which one it is? If it is the second one, can someone point out what is wrong?
Each schema registry provider requires its own proprietary SerDe library. For example, if you would like to integrate the AWS Glue Schema Registry with Kafka, you would need Amazon's SerDe library. Likewise, Confluent's SerDe library expects Confluent's Schema Registry at the address specified in the schema.registry.url property.
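To illustrate the point, here is a minimal sketch (MyRecord stands in for a hypothetical Avro-generated SpecificRecord class; the URL is an assumed example): the only registry-related setting on Confluent's SpecificAvroSerde is schema.registry.url, and the Serde always speaks Confluent's Schema Registry REST protocol (e.g. POST /subjects/<subject>/versions) to whatever listens at that address.

import java.util.HashMap;
import java.util.Map;

import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

public class SerdeConfigSketch {

    // MyRecord is a hypothetical Avro-generated class; substitute your own.
    static SpecificAvroSerde<MyRecord> buildValueSerde() {
        Map<String, Object> serdeConfig = new HashMap<>();
        // Whatever server listens at this address, the Serde will call it using
        // Confluent's Schema Registry REST API, not Spring's endpoints.
        serdeConfig.put("schema.registry.url", "http://localhost:8081");

        SpecificAvroSerde<MyRecord> valueSerde = new SpecificAvroSerde<>();
        valueSerde.configure(serdeConfig, false); // false = value Serde (not key)
        return valueSerde;
    }
}

So pointing schema.registry.url at the Spring Cloud Schema Registry Server makes the Serde issue requests that server does not expose, which matches the 405 and the non-JSON response seen in the stack trace above.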

Same stateless bean in two ears

I have the same EJB module (with a bean inside) in both an EAR that is the server side and an EAR that is the client side.
Is this situation allowed? I'm getting this error (http://justpaste.it/gfs3) and don't understand how to fix it.
You have the answer in the stack trace:
The short-form default binding 'com.demo.view.RitornaPersonaRemote'
is ambiguous because multiple beans implement the interface :
[RitornaPersonaSenzaClientEAR#RitornaPersonaSenzaClient.jar#RitornaPersona,
RitornaPersonaWebSenzaClientEAR#RitornaPersonaSenzaClient.jar#RitornaPersona].
Provide an interface specific binding or use the long-form default binding on lookup.]
If you are asking whether you may have the same EJB jar in multiple projects, the answer is yes, you can. However, during deployment you have to use the long-form JNDI name, provide a different JNDI name for the beans in the other module, or disable short names. You cannot register two beans under the same name.
The long name would be in the form RitornaPersonaSenzaClientEAR#RitornaPersonaSenzaClient.jar#com.demo.view.RitornaPersonaRemote
See detailed info here - EJB 3.0 and EJB 3.1 application bindings overview
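As a sketch of an unambiguous lookup (assuming the EAR, module, and bean names from the error message above, and an EJB 3.1-level container that exposes the portable java:global names shown in the SystemOut.log excerpt below):

import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.demo.view.RitornaPersonaRemote;

public class LookupSketch {
    public static RitornaPersonaRemote lookupRitornaPersona() throws NamingException {
        InitialContext ctx = new InitialContext();
        // Portable pattern: java:global/<earName>/<moduleName>/<beanName>!<fully.qualified.Interface>
        // (module name without the .jar extension; adjust the names to your deployment).
        return (RitornaPersonaRemote) ctx.lookup(
                "java:global/RitornaPersonaSenzaClientEAR/RitornaPersonaSenzaClient/RitornaPersona!com.demo.view.RitornaPersonaRemote");
    }
}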
UPDATE
To disable short names, perform the following steps:
Go to Application servers > server1 > Process definition > Java Virtual Machine > Custom properties
Define a new custom property com.ibm.websphere.ejbcontainer.disableShortDefaultBindings with the value * to disable short bindings for all applications, or AppName1|AppName2 to disable them only for the selected applications.
Example default bindings are shown in SystemOut.log:
The binding location is: ejblocal:JPADepEar/JPADepEJB.jar/TableTester#ejb.TableTester
The binding location is: ejblocal:ejb.TableTester
The binding location is: java:global/JPADepEar/JPADepEJB/TableTester!ejb.TableTester
And with the disableShortDefaultBindings property set, there is no short form:
The binding location is: ejblocal:JPADepEar/JPADepEJB.jar/TableTester#ejb.TableTester
The binding location is: java:global/JPADepEar/JPADepEJB/TableTester!ejb.TableTester
There is a bug in the documentation: the correct property is com.ibm.websphere.ejbcontainer.disableShortDefaultBindings, not com.ibm.websphere.ejbcontainer.disableShortFormBinding.
In my case, I had installed abc.ear and xyz.ear; both EARs were independent, with no dependency on each other.
I was calling abc.ear using a client lookup, but that was giving me:
com.ibm.websphere.naming.CannotInstantiateObjectException: Exception occurred while the JNDI NamingManager was processing a javax.naming.Reference object.
[Root exception is com.ibm.websphere.ejbcontainer.AmbiguousEJBReferenceException: The short-form default binding
'com.ejb.abc' is ambiguous because multiple beans implement the interface :
[xyz-ear#rabc-ejb-1.0.jar#abcInrerfaceImpl, rabc-ear#rabc-ejb-1.0.jar
abcInrerfaceImpl]. Provide an interface specific binding or use the long-form default binding on lookup.]
My solution was:
I removed the abc.jar that was inside the other application (xyz.ear):
C:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01\wstemp\92668751\workspace\cells\mypc00Node01Cell\applications\xyz-ear.ear
After that, the client lookup worked fine.
To avoid this in the future, it is better practice to create a separate node on your IBM WAS server and install the two applications on different nodes, so the application components will not interfere with each other.

EJB JNDI lookup throws ClassCastException only when invoked from IBM Message Broker

When I try to make a remote EJB JNDI lookup, IBM Message Broker throws a ClassCastException for the factory object.
But the same code works fine in a normal local Java application and in JUnit. Why does this problem occur only when called from IBM WMB?
Context context = new InitialContext(ejbJndiProperties);
Object factoryObj = context.lookup("SampleBeanTAFJ/remote");
return (SampleBeanRemote) factoryObj;
This is often caused by loading parts of the interface in a different classloader from the implementation classes.
I would use the env var:
IBM_JAVA_OPTIONS=-Dibm.cl.verbose=*
Then restart the broker; this will dump a classloading trace to stdout / console.txt, which might give you some clues.
What are the exact classes involved in the error and what jars are they stored in? Deployed to the EG or referenced via a SHARED-CLASSES? The exact details determine which classloaders should be in use here.
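As a rough diagnostic sketch (not WMB-specific; SampleBeanRemote is the remote interface from the code above, its import omitted because the package isn't shown), you can print which classloader owns each interface that the looked-up object implements and compare it with the loader of the interface your code was compiled against. Two classes with the same interface name but different loaders would explain the ClassCastException:

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;

public class ClassLoaderCheck {
    public static void dump(Properties ejbJndiProperties) throws Exception {
        Context context = new InitialContext(ejbJndiProperties);
        Object factoryObj = context.lookup("SampleBeanTAFJ/remote");

        ClassLoader expected = SampleBeanRemote.class.getClassLoader();
        System.out.println("Interface loader seen by caller: " + expected);

        for (Class<?> iface : factoryObj.getClass().getInterfaces()) {
            boolean sameName = iface.getName().equals(SampleBeanRemote.class.getName());
            System.out.println(iface.getName() + " loaded by " + iface.getClassLoader()
                    + (sameName && iface != SampleBeanRemote.class
                            ? "   <-- same name, different class: classloader clash" : ""));
        }
    }
}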

Workflow manager 1.0 and external dll

We are trying to publish a workflow and a couple of activities to Workflow Manager. Our scenario is such that we need to create objects of classes that reside in an external DLL, and those objects call a WCF service to fetch some data.
We have placed the DLLs in C:\Program Files\Workflow Manager\1.0\Workflow\WFWebRoot\bin\ and C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts folders. Also we have created the AllowedTypes.XML file and have placed it in both the above folders.
The problem we are facing is that when we declare a variable of a type from the external DLL and try to invoke a method using the InvokeMethod activity (we have also added the InvokeMethod activity type in AllowedTypes.xml), we receive the following exception on the activity.publish statement:
Workflow XAML failed validation due to the following errors:
Cannot create unknown type '{http://schemas.microsoft.com/netfx/2009/xaml/activities}Variable({wf://workflow.windows.net/$Activities}ObjectType)'. HTTP headers received from the server - ActivityId: 33bf5b07-9eda-4f63-bf58-6d85cbfdcd55. NodeId: MachineID. Scope: /WFMgrSample. Client ActivityId : af5e1771-90f7-4610-a2be-5f7b7ce48ee8.
Any idea what is wrong here?

AMX Glassfish AppserverConnectionSource

I am able to connect to GlassFish AS 3.1 via JMX with the following URL:
service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi
However, I could not connect to it via AMX. Here is the API I am using:
amx-api-10.0-SNAPSHOT
I noticed in its source code that the URL is defined differently from the one above; it is defined in the class AppserverConnectionSource.java:
private static final String APPSERVER_JNDI_NAME = "/management/rmi-jmx-connector";
When I try to connect to the AS AMX interface, I get the following error:
Connecting using JMXServiceURL: service:jmx:rmi:///jndi/rmi://127.0.0.1:8686/management/rmi-jmx-connector
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.NameNotFoundException: management/rmi-jmx-connector
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:338)
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
at com.sun.appserv.management.client.AppserverConnectionSource.createNew(AppserverConnectionSource.java:412)
at com.sun.appserv.management.client.AppserverConnectionSource.getJMXConnector(AppserverConnectionSource.java:481)
at com.sun.appserv.management.client.AppserverConnectionSource.getMBeanServerConnection(AppserverConnectionSource.java:513)
at com.sun.appserv.management.client.ProxyFactory.getInstance(ProxyFactory.java:399)
at com.sun.appserv.management.client.ProxyFactory.getInstance(ProxyFactory.java:373)
at com.sun.appserv.management.client.AppserverConnectionSource.getDomainRoot(AppserverConnectionSource.java:528)
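For comparison, here is a minimal sketch of the plain-JMX connection that does work, using only the standard javax.management.remote API and the /jmxrmi URL from above:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PlainJmxConnect {
    public static void main(String[] args) throws Exception {
        // Standard connector published by GlassFish at /jmxrmi (no AMX client involved).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("MBean count: " + mbsc.getMBeanCount());
        }
    }
}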
Too little, too late, I know, but I can't resist. I had the same issue, and recreating the node instance didn't do the trick. I looked into my node logs and found I was missing jars. Simply by adding those missing jars, I was able to start the cluster with its node instance again.
