Oozie 4.2 provides documentation for killing coordinator actions, but I am not able to work out the exact values to be passed for rangeType and scope.
Could anyone elaborate or provide a concrete example?
public List<CoordinatorAction> kill(String jobId,
String rangeType,
String scope)
throws OozieClientException
You can refer to the source code of OozieClient, where this API is also used, to see the possible values. In particular, see the implementation of the following method:
private void jobCommand(CommandLine commandLine) throws IOException, OozieCLIException {
}
The same API is used by the Oozie command-line tool, which can be referred to here.
rangeType : Possible values 'date' or 'action'
scope : For rangeType 'action', a comma-separated list of action numbers or ranges (e.g. '1,3-4,7-40'); for rangeType 'date', a comma-separated list of UTC dates or start::end date ranges (e.g. '2009-01-01T01:00Z::2009-05-31T23:59Z')
$oozie job -kill [-action 1, 3-4, 7-40] [-date
2009-01-01T01:00Z::2009-05-31T23:59Z, 2009-11-10T01:00Z,
2009-12-31T22:00Z]
Either -action or -date should be given. If neither -action nor -date is given, an exception will be thrown; likewise, if BOTH -action and -date are given, an error will be thrown. Multiple ranges can be used in -action or -date, as in the example above. If one of the actions in the given -action list is already in a terminal state, the output of this command will only include the other actions. The dates specified in -date must be UTC, and a single date given in -date must match an action's nominal time to be effective. After the command is executed, the killed coordinator actions will have KILLED status.
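Putting the two forms together, here is a small Java sketch. The job ID and server URL are placeholders, and the isValidActionScope helper is hypothetical, added only to make the format of an 'action'-type scope explicit; the actual kill calls against OozieClient are shown as comments since they need a running Oozie server.

```java
import java.util.regex.Pattern;

public class CoordKillExample {
    // Hypothetical helper: an 'action'-type scope is a comma-separated
    // list of action numbers or ranges, e.g. "1,3-4,7-40".
    static final Pattern ACTION_SCOPE =
            Pattern.compile("\\d+(-\\d+)?(\\s*,\\s*\\d+(-\\d+)?)*");

    public static boolean isValidActionScope(String scope) {
        return ACTION_SCOPE.matcher(scope).matches();
    }

    public static void main(String[] args) {
        // With an OozieClient in hand, the kill calls would look like:
        //
        //   OozieClient client = new OozieClient("http://oozie-host:11000/oozie");
        //   // rangeType = "action": scope lists action numbers/ranges
        //   client.kill("0000001-170101000000000-oozie-C", "action", "1,3-4,7-40");
        //   // rangeType = "date": scope lists UTC dates or start::end ranges
        //   client.kill("0000001-170101000000000-oozie-C", "date",
        //       "2009-01-01T01:00Z::2009-05-31T23:59Z,2009-11-10T01:00Z");

        System.out.println(isValidActionScope("1,3-4,7-40")); // true
    }
}
```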
Kindly explain the difference between the two coding snippets below. Both fetch the command value, but in different ways. Which is the standard way to fetch the command in the Contract class?
final CommandData command = tx.getCommand(0).getValue();
final CommandWithParties<Commands> command = requireSingleCommand(tx.getCommands(), Commands.class);
final Commands commandData = command.getValue();
What is the benefit of using TypeOnlyCommandData?
The typical approach to define commands can be seen in the IOU example:
public interface Commands extends CommandData {
    class Create implements Commands {}
}
Notice that the IOU example uses requireSingleCommand() because it expects the transaction to have only one command; otherwise it will throw an error.
So if you are creating a transaction that has multiple types of states, you cannot use the above function, because the transaction will have multiple commands (one per state type). Instead, you can extract the commands that are related to your state type (see example here), then do the verification against them.
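To see the difference concretely, here is a standalone sketch in plain Java (not Corda's actual classes) mimicking what requireSingleCommand enforces: it succeeds only when exactly one command of the requested type is present. The class and command names are my own.

```java
import java.util.List;
import java.util.stream.Collectors;

public class CommandDemo {
    interface CommandData {}
    interface Commands extends CommandData {}
    static class Create implements Commands {}
    static class UnrelatedCmd implements CommandData {}

    // Mimic of requireSingleCommand: keep only commands of the requested
    // type and fail unless exactly one match remains.
    static <T extends CommandData> T requireSingleCommand(
            List<? extends CommandData> commands, Class<T> type) {
        List<CommandData> matching = commands.stream()
                .filter(type::isInstance)
                .collect(Collectors.toList());
        if (matching.size() != 1) {
            throw new IllegalArgumentException("Expected exactly one "
                    + type.getSimpleName() + " command, found " + matching.size());
        }
        return type.cast(matching.get(0));
    }

    public static void main(String[] args) {
        // Unrelated commands are filtered out; exactly one Commands remains.
        CommandData cmd = requireSingleCommand(
                List.of(new Create(), new UnrelatedCmd()), Commands.class);
        System.out.println(cmd.getClass().getSimpleName()); // prints "Create"
    }
}
```

By contrast, tx.getCommand(0) blindly takes whatever command happens to be first, giving no guarantee about its type or uniqueness, which is why requireSingleCommand (or filtering by your contract's Commands type in a multi-state transaction) is the safer pattern.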
I have a list of HTTP endpoints, each performing a task on its own. We are trying to write an application that orchestrates them by invoking these endpoints in a certain order. In this solution we also have to process the output of one HTTP endpoint and generate the input for the next one. Also, the same workflow can be invoked multiple times simultaneously, depending on the trigger.
What I have done until now,
1. Defined a new operator deriving from HttpOperator, adding the capability to write the output of the HTTP endpoint to a file.
2. Written a Python operator that can transform the output according to the necessary logic.
Since I can have multiple instances of the same workflow in execution, I could not hardcode the output file names. Is there a way to make the HTTP operator I wrote write to unique file names, and make the same file name available to the next task so that it can read and process the output?
Airflow does have a feature for operator cross-communication called XCom.
XComs can be “pushed” (sent) or “pulled” (received). When a task pushes an XCom, it makes it generally available to other tasks. Tasks can push XComs at any time by calling the xcom_push() method.
Tasks call xcom_pull() to retrieve XComs, optionally applying filters based on criteria like key, source task_ids, and source dag_id.
To push an XCom, use:
ti.xcom_push(key='<variable name>', value=<variable value>)
To pull an XCom object, use:
myxcom_val = ti.xcom_pull(key='<variable name>', task_ids='<task to pull from>')
With the BashOperator, you just set xcom_push=True and the last line written to stdout is set as the XCom value.
You can view the XCom object while your task is running by simply opening the task instance from the Airflow UI and clicking on the XCom tab.
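A minimal sketch of the pattern in Python (the function and key names are my own; ti is the TaskInstance object Airflow passes into a PythonOperator callable): the first task derives a filename that is unique per run from run_id and pushes it via XCom, and the next task pulls the same name, so simultaneous runs of the workflow never collide on files.

```python
def write_http_output(ti, **_):
    """First task: call the HTTP endpoint and write its response to a
    per-run-unique file, then push the filename via XCom."""
    # run_id is unique per DAG run, so concurrent runs get distinct files.
    filename = f"/tmp/http_output_{ti.run_id}.json"
    # ... invoke the endpoint and write the response body to `filename` ...
    ti.xcom_push(key="output_file", value=filename)
    return filename

def process_http_output(ti, **_):
    """Next task: pull the filename pushed by the previous task,
    then read and transform it into the next endpoint's input."""
    filename = ti.xcom_pull(key="output_file", task_ids="write_http_output")
    # ... read `filename` and build the next request from it ...
    return filename
```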
New to Spring Batch here, so expect anything.
I have a job that runs successfully to completion with job parameter paymentPeriod=1, but the requirement expects the job to be rerunnable with the same job parameter paymentPeriod=1.
I can run the job the first time with parameter paymentPeriod=1 using the following endpoint from Postman:
@RestController
@RequestMapping(value = "api/jobs")
public class JobController {

    @Autowired
    private JobOperator jobOperator;

    @RequestMapping(value = "/pay/{paymentPeriod}", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void launchPaymentJob(@PathVariable Integer paymentPeriod) throws Exception {
        this.jobOperator.start("paymentJob", String.format("paymentPeriod=%s", paymentPeriod));
    }
}
Though when I rerun with the same parameters I get JobInstanceAlreadyExistsException: Cannot start a job instance that already exists with name=paymentJob and parameters=paymentPeriod=1.
Going by the concept of job instances and job executions in Spring Batch, you can't start a COMPLETED job instance again, though you can launch the same job instance again and again as long as it hasn't COMPLETED (and a few other statuses).
Job instance uniqueness is determined by the job name and the identifying job parameters.
So if you don't vary your parameters, you can't start a completed job instance again.
If your REST endpoint is restricted to taking the same value again and again, you need to add a unique parameter within that method, and that can very well be java.lang.System.currentTimeMillis().
You can use any other unique value, though the current system time is very convenient.
Add the timestamp as its own parameter, e.g. String.format("paymentPeriod=%s,time=%s", paymentPeriod, System.currentTimeMillis()), rather than appending it to paymentPeriod itself: paymentPeriod + System.currentTimeMillis() would numerically add the two values and change the paymentPeriod your job reads.
Hope you get the idea.
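As a sketch, buildParameters below is a hypothetical helper; JobOperator.start takes a comma-separated key=value string, so putting the timestamp in its own parameter makes every launch a fresh JobInstance while leaving paymentPeriod readable by the job.

```java
public class JobParamsExample {
    // Hypothetical helper: builds the parameter string for JobOperator.start.
    // The separate "time" parameter varies on every call, so each launch
    // creates a new JobInstance without corrupting paymentPeriod.
    public static String buildParameters(int paymentPeriod, long nowMillis) {
        return String.format("paymentPeriod=%s,time=%s", paymentPeriod, nowMillis);
    }

    public static void main(String[] args) {
        // In the controller this becomes:
        //   jobOperator.start("paymentJob",
        //       buildParameters(paymentPeriod, System.currentTimeMillis()));
        System.out.println(buildParameters(1, System.currentTimeMillis()));
    }
}
```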
In my workflow I call a service which returns a List. The problem is that in my workflow I use an AddToCollection activity to add a new string to the collection, but I get an error as soon as I reach that activity.
Debugging, I checked the workflow logs and I now see that the error is "Collection was of a fixed size." Here's the complete log:
System.SZArrayHelper.Add[T](T value)
System.Activities.Statements.AddToCollection`1.Execute(CodeActivityContext context)
System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
System.Activities.ActivityInstance.Execute(ActivityExecutor executor, BookmarkManager bookmarkManager)
System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
What I don't get is why (and how this happened). Is this a bug? I specifically return a List... why does it say it's fixed size?
EDIT 1: There's something really weird... since my original workflow was quite big I created a new, smaller one, just to reproduce this error... and I can't!
My guess is that WCF is serializing your list to an array before sending it over the wire, so what you receive is a fixed-size, array-backed collection. I don't know if that is possible to avoid.
Anyway, check this and this
You can also create a new variable in your workflow and assign a new List to it when you receive the array from the service:
listWFVariable = new List<string>(arrayReceivedFromWebService);
Now you can perform Add operations on it.
I have a workflow with correlation. When I call some method twice with the same parameters, I get the following error:
The execution of an InstancePersistenceCommand was interrupted by a key collision. The instance key with value 'bcd874f3-1d47-d9f0-de51-4487d1e4e12e' could not be associated to the instance because it is already associated to a different instance.
Is there any way to delete the previous workflow and start a new one?
You can add a WorkflowControlEndpoint to the WorkflowServiceHost and use the WorkflowControlClient to terminate the existing workflow before starting a new one with the same correlation key.