I'm using JBoss jBPM version 7.24. I'm trying to create a job with a Java class provided as part of a project imported into jBPM Business Central. The project is built and deployed successfully.
The job is created using the UI: Business Central -> Jobs -> New Job. In the UI, the fully qualified Java class name is provided as "org.jbpm.examples.cmd.UserCommand".
However, it gives this error:
"Job must have valid class that defines Job Type"
If the job is created using "org.jbpm.executor.commands.LogCleanupCommand", which is part of jbpm-executor, it gets created successfully.
Please suggest a workaround.
A job is defined by its class and parameters. The class that defines the job type for you should be "org.jbpm.executor.commands.LogCleanupCommand".
What parameters do you pass to LogCleanupCommand, and is there a log with details giving further indication of why it complains? Example parameters are:
"SkipProcessLog", "false"
"SkipTaskLog", "false"
"SkipExecutorLog", "false"
"SingleRun", "true"
"ForProcess", processId
"OlderThan", olderThan
I'm writing integration tests using MockMvcBuilders.webAppContextSetup() in JUnit 5.
I'm extending with Sam Brannen's SpringExtension and a MockitoExtension. (Really, I'm using the composed SpringJUnitJupiterWebConfig)
I get this output when running the tests (edited):
java.lang.IllegalStateException: Failed to load ApplicationContext
BeanDefinitionStoreException: Failed to process import candidates for configuration class [com.example.myapp.config.SomeConfig];
Could not resolve placeholder 'someEnvVar' in string value "classpath:/com/example/myapp/config/${someEnvVar}/custom.properties"
(This is in an internal company library on which my application depends.)
Seems clear that I need to set the environment property before the container starts up. But I'm struggling to discover how to hook into that with JUnit5.
I assume I want to add PropertySources to the environment, but to get the environment I have to get the application context, and in doing so it instantiates and errors out before I can do anything with it.
I tried creating my own extension and getting a handle to the environment during the BeforeAllCallback.
I'm getting the feeling that I'm going about it all wrong and I'm missing something fundamental.
I just needed to use the @TestPropertySource annotation on my test class.
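For illustration, a rough sketch of how that could look with the setup from the question (SomeConfig comes from the question; the test class name and the 'dev' value are assumptions):

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestPropertySource;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import com.example.myapp.config.SomeConfig;

// Hypothetical test class; 'someEnvVar' is the placeholder from the error message.
@ExtendWith(SpringExtension.class)
@WebAppConfiguration
@ContextConfiguration(classes = SomeConfig.class)
@TestPropertySource(properties = "someEnvVar=dev")
class SomeConfigIntegrationTest {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @BeforeEach
    void setUp() {
        // Same MockMvc setup style as in the question.
        mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
    }

    @Test
    void contextLoads() {
        // If 'someEnvVar' resolves, the context refresh no longer fails.
    }
}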
What is the point of using compiler passes in Symfony?
When should we use an Extension class and when a Compiler Pass in Symfony?
Compiler passes deal with the service definitions in the container.
By creating a compiler pass, you are able to update the arguments passed to services.
It's most often done with tagged services.
It can also be used for:
Creating new services that require information about other defined services before being defined.
Swapping or adding arguments to a service that you did not write.
Creating and modifying parameters in the container.
I used a compiler pass to register a factory that lets me override the Doctrine repository.
You can look at the code to get a better understanding of how it works:
https://gist.github.com/chalasr/77be8eee5e3ecd3c06ec
Update
Thanks to @Sruj, I've added the part I had forgotten about Extensions.
Extensions are part of dependency injection too, especially of the configuration.
Their primary role is to load the service configuration across the bundles of your application.
Instead of loading your configuration manually with imports, you can create an extension that does it for you. All your service configuration is registered from your bundle and shared with your whole application.
When you register a vendor in your app configuration, the service container extension of the vendor is invoked.
See "Importing configuration via container extensions" part of the documentation
As per the Oozie documentation, I can create a custom Oozie ActionExecutor.
To do this, I need to create a class that extends ActionExecutor. The class needs to be packaged as a jar, and this jar should be placed under the lib directory of the Oozie server. This part is clear.
However, where should I place the XSD file that defines the Action? I searched the Oozie server locations and could not find a clue. Any help?
Per the Oozie documentation:
The XML schema (XSD) for the new Actions should be added to oozie-site.xml, under the property 'oozie.service.WorkflowSchemaService.ext.schemas'. A comma separated list for multiple Action schemas.
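For example, a hypothetical oozie-site.xml entry could look like the snippet below (the XSD file name is made up; as far as I know the schema file itself is loaded from the Oozie server classpath, so packaging it inside the jar you drop into the lib directory should work):

<property>
    <name>oozie.service.WorkflowSchemaService.ext.schemas</name>
    <value>my-custom-action-0.1.xsd</value>
</property>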
I've seen many different ways to give the user classpath precedence over the Hadoop one. This is often done when an M/R job needs a specific version of a library of which Hadoop coincidentally already uses an older version (for example Jackson's JSON parser, Commons HTTP, etc.).
In any case, I've seen:
mapreduce.task.classpath.user.precedence
mapreduce.task.classpath.first
mapreduce.job.user.classpath.first
Which one of these parameters is the right one to set in my job configuration in order to force mappers and reducers to have a classpath that puts my user-defined HADOOP_CLASSPATH jars BEFORE the Hadoop default dependency jars?
By the way, this is related to this question :
DynamoDB requestHandler exception, which I recently found is due to a jar conflict.
So, assuming you're using 0.20.203, this is handled in the TaskRunner.java code as follows:
The property you're looking for is on line 94 - mapreduce.user.classpath.first
Line 214 is where the call is made to build the list of classpaths, which delegates to a method called getClassPaths(..).
getClassPaths() is defined on line 524, and you should be able to see that the configuration property is used to decide whether your job + dist cache libraries or the Hadoop libraries go on the classpath first.
For other versions of Hadoop, you're best checking the TaskRunner.java class to confirm the name of the config property, since after all this is a "semi-hidden config":
static final String MAPREDUCE_USER_CLASSPATH_FIRST =
"mapreduce.user.classpath.first"; //a semi-hidden config
In more recent Hadoop versions (2.2+), you should set:
conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, true);
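To show where that call typically sits, here is a minimal driver sketch (the driver class and job name are hypothetical; mapper and reducer wiring is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.MRJobConfig;

// Hypothetical driver showing where the classpath flag is set before job submission.
public class UserClasspathFirstDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Ask the framework to put the job/distributed-cache jars ahead of the
        // Hadoop jars on the task classpath (MR2, Hadoop 2.2+).
        conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, true);

        Job job = Job.getInstance(conf, "user-classpath-first-example");
        job.setJarByClass(UserClasspathFirstDriver.class);
        // ... set mapper, reducer, input and output paths here ...

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}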
These settings only work for referencing classes from external jars in your mapper or reducer tasks. If, however, you are using them in, for example, a customized InputFormat, the class will fail to load. A way to make sure this also works everywhere (in MR2) is to export this setting when submitting your job:
export HADOOP_USER_CLASSPATH_FIRST=true
I had the same issue, and the parameter that worked for me on Hadoop version 0.20.2-cdhu03 is "mapreduce.task.classpath.user.precedence".
This setting was tested and does not work on CDH3U3; the following answer is from the Cloudera team:
// JobConf job = new JobConf(getConf(), MyJob.class);
// job.setUserClassesTakesPrecedence(true);
http://archive.cloudera.com/cdh/3/hadoop/api/org/apache/hadoop/mapred/JobConf.html#setUserClassesTakesPrecedence%28boolean%29
In the MapR distribution, the property is "mapreduce.task.classpath.user.precedence"
http://www.mapr.com/doc/display/MapR/mapred-site.xml
<property>
<name>mapreduce.task.classpath.user.precedence</name>
<value>true</value>
<description>Set to true if user wants to set different classpath. (AVRO) </description>
</property>
jobConf.setUserClassesTakesPrecedence(true);
I have a Java EE web application (WebSphere 7). The code looks up EJBs on the app server by doing JNDI lookups.
In my development environment (Eclipse / IBM RAD), the EAR project is called customerEAR, so the JNDI lookup is like this:
ejblocal:customerEAR.ear/customerEJB.jar/CustomerService#com.mydomain.service.CustomerServiceLocal
But when the EAR is deployed on the production application server, the lookup fails and the log file states that the EJB is available at:
ejblocal:server1-dev-customerEAR.ear/customerEJB.jar/CustomerService#com.mydomain.service.CustomerServiceLocal
It's possible that the guy deploying the EAR to the server is actually renaming the EAR file from customerEAR.ear to server1-dev-customerEAR.ear, or maybe WebSphere is renaming things on deployment? I can't get hold of the deployment guy to find out for sure.
But, regardless, my question is:
Is there a cleaner way to do the lookup of the EJBs? Like, by using a shorter lookup name, or a "relative path" instead of the "full path"?
Any suggestions are greatly appreciated, thanks!
Rob
You can define a name for your remote EJB using annotations in your bean class or define it in ejb-jar.xml.
@Stateless(mappedName = "TestBean")
@Remote(TestBeanRemote.class)
public class TestBean implements TestBeanRemote
{
//...
}
Lookup on client:
TestBeanRemote testBeanRemote = (TestBeanRemote) new InitialContext().lookup("TestBean");
If you don't use "mappedName", some application servers add a suffix to your bean name to autogenerate a name, so it's better to use "mappedName" to define it in a server-independent way.
If you want shorter names, create an ibm-ejb-jar-bnd.xml and specify a binding rather than leaving it to the product to generate an unambiguous one. See the section entitled "Binding file Example 1".
Depending on whether or not you have multiple EJBs implementing your interface, you can also use "short form" bindings. See the section entitled "Default binding pattern".
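For illustration only, such a binding file could look roughly like the sketch below (the binding name is an example; verify the exact schema declaration against the documentation referenced above):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of an ibm-ejb-jar-bnd.xml; names are examples. -->
<ejb-jar-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-ejb-jar-bnd_1_0.xsd"
             version="1.0">
    <!-- Bind the CustomerService session bean to a fixed name, independent of the EAR file name -->
    <session name="CustomerService" simple-binding-name="ejb/CustomerService"/>
</ejb-jar-bnd>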
Since the deployment guy insists on renaming my EAR before deploying it, I managed to get this to work by shortening this:
ejblocal:customerEAR.ear/customerEJB.jar/CustomerService#com.mydomain.service.CustomerServiceLocal
to this:
ejblocal:com.mydomain.service.CustomerServiceLocal
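For completeness, a small sketch of that short-form lookup from client code (the helper class name is made up; the interface comes from the question):

import javax.naming.InitialContext;
import javax.naming.NamingException;

import com.mydomain.service.CustomerServiceLocal;

// Small helper illustrating the short-form lookup.
public class CustomerServiceLookup {

    public static CustomerServiceLocal lookupCustomerService() throws NamingException {
        // Resolves the local business interface regardless of the EAR file name.
        return (CustomerServiceLocal) new InitialContext()
                .lookup("ejblocal:com.mydomain.service.CustomerServiceLocal");
    }
}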