Spring Batch new guy here, so expect anything.
I have a job that runs successfully to completion with the job parameter paymentPeriod=1. However, the requirement is that the job must be able to rerun with the same parameter paymentPeriod=1.
I can run the job the first time with paymentPeriod=1 via the following endpoint from Postman:
@RestController
@RequestMapping(value = "api/jobs")
public class JobController {

    @Autowired
    private JobOperator jobOperator;

    @RequestMapping(value = "/pay/{paymentPeriod}", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void launchPaymentJob(@PathVariable Integer paymentPeriod) throws Exception {
        this.jobOperator.start("paymentJob", String.format("paymentPeriod=%s", paymentPeriod));
    }
}
However, when I rerun with the same parameters I get JobInstanceAlreadyExistsException: cannot start a job instance that already exists with name paymentJob and parameters=paymentPeriod=1.
Going by the concept of job instances and job executions in Spring Batch, you can't start a COMPLETED job instance again, though you can relaunch the same instance again and again until it reaches COMPLETED (or certain other terminal statuses).
Job instance uniqueness is determined by the job name and the identifying job parameters.
So if you don't vary your parameters, you can't restart a completed job instance.
If your REST endpoint is restricted to taking the same value again and again, you need to add a unique parameter within that method, and that can very well be java.lang.System.currentTimeMillis().
You can use any other unique value, though the current system time is very convenient.
Convert System.currentTimeMillis() to a String and append it as a separate parameter, e.g. String.format("paymentPeriod=%s,run.id=%s", paymentPeriod, System.currentTimeMillis()) instead of what you have currently (run.id is just an arbitrary key name; adding the timestamp to the paymentPeriod value itself would corrupt your business parameter).
Hope you get the idea.
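A minimal plain-Java sketch of building such a parameter string (the run.id key name is just an illustrative choice, not anything Spring Batch requires):

```java
// Illustrative helper: appends a unique timestamp parameter so each launch
// produces a new JobInstance. "run.id" is an arbitrary key chosen here.
public class UniqueJobParams {

    public static String build(int paymentPeriod) {
        // Same business parameter, plus a value that is unique per launch
        return String.format("paymentPeriod=%s,run.id=%s",
                paymentPeriod, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        // This string would be passed to jobOperator.start("paymentJob", ...)
        System.out.println(build(1));
    }
}
```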
I have come across a requirement where I want Axon to wait until all events fired on the event bus for a particular command have finished executing. Let me briefly describe the scenario:
I have a RestController which fires the command below to create an application entity:
@RestController
class MyController {

    @Autowired
    private CommandGateway commandGateway; // org.axonframework.commandhandling.gateway.CommandGateway

    @PostMapping("/create")
    @ResponseBody
    public String create() {
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "created";
    }
}
This command is handled in the aggregate. The aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {

    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    private MyAggregate(CreateApplicationCommand command) {
        // org.axonframework.modelling.command.AggregateLifecycle
        AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled in two places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem is that after the event is handled in the first place (i.e., inside the aggregate), the call returns to myController.
That is, the output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered by a particular command have been handled completely, and only then return to the class which dispatched the command.
After searching the forum I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out?
Thanks in advance.
What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Command handling is entirely separate from event handling, making the query models you create eventually consistent with the command model (your aggregates). Therefore, you wouldn't necessarily wait for the events; instead, you react when the query model has been updated.
Embracing this comes down to building your application around the idea that "yeah, I know my response might not be up to date now, but it will be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in before or after dispatching the command.
For example, you could do this using WebSockets with the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they come in.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the fact you are synchronous in your front-end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command that will update the query model
    commandGateway.send(...);
    // Wait on the updates Flux for the first result, then close the subscription
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(it -> result.close())
                 .block();
}
You could see this being done in this sample WebFluxRest class, by the way.
Note that you are essentially closing the door to the front-end to tap into the asynchronous goodness by doing this. It'll work and allow you to wait for the result to be there as soon as it is there, but you'll lose some flexibility.
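Stripped of the Axon specifics, the subscribe-before-dispatch pattern the pseudo-code above relies on can be sketched in plain Java; all names here are invented for illustration, with a CompletableFuture standing in for the subscription query:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustration of "subscribe first, then dispatch, then wait for the first
// projection update". The projection's update stream is faked with a future.
public class SubscribeThenDispatch {

    static CompletableFuture<String> projectionUpdates = new CompletableFuture<>();

    // Stand-in for the projection's event handler finishing its work
    static void projectionHandled(String result) {
        projectionUpdates.complete(result);
    }

    // Stand-in for CommandGateway.send(...): handling happens asynchronously
    static void sendCommand() {
        CompletableFuture.runAsync(() -> projectionHandled("app-created"));
    }

    public static String synchronousCall() {
        try {
            // 1. Subscribe to updates BEFORE dispatching, so none are missed
            CompletableFuture<String> firstUpdate = projectionUpdates;
            // 2. Dispatch the command
            sendCommand();
            // 3. Block until the projection reports the update (with a timeout)
            return firstUpdate.get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(synchronousCall());
    }
}
```

The key point the sketch demonstrates is ordering: the subscription must exist before the command is dispatched, otherwise a fast projection update could be missed.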
I'm trying to find a way to keep the database updated, but the method that does it takes a lot of time, so I want to run it as a background task.
I searched for solutions and read this article on the different options for running background processes: https://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
But I don't know which of those is the best solution, since I'm trying to execute it outside the application. I also found something about creating a Windows Service, but I don't know how; I couldn't find any good examples.
What is the best way to keep my database updated every time I access the application without paying the time it consumes? If you can help me see the light, I would appreciate it very much.
I'm really happy with FluentScheduler, which handles all my mission-critical scheduling. As well as firing jobs on a schedule, it can also run them on demand, like so:
// Define your job in its own class
public class MyJob : IJob
{
    public void Execute()
    {
        // Do stuff here...
    }
}

// Schedule your job at startup, inside a FluentScheduler Registry
public class MyRegistry : Registry
{
    public MyRegistry()
    {
        var runAt = DateTime.Today.AddHours(1); // 1am
        if (runAt < DateTime.Now)
            runAt = runAt.AddDays(1);

        Schedule<MyJob>()
            .WithName("My Job Name") // Job name, required for manually triggering
            .NonReentrant()          // Only allow one instance to run at a time
            .ToRunOnceAt(runAt)      // First execution date/time
            .AndEvery(1).Days().At(runAt.Hour, runAt.Minute); // Run every day at the same time
    }
}

// To manually trigger your job
ScheduledJobRegistry.RunTaskAsync("My Job Name");
I have the scheduled jobs running in a Windows Service and use SignalR to trigger them remotely from an MVC web app when required.
You can use an async method; just return void instead of Task.
public async void LongRunningMethod()
{
    ...
    // Insert long running code here
    ...
}
Then call it and it will execute in the background. Be aware that exceptions can go unobserved without proper error handling.
You can also use Hangfire, which is a pretty awesome background task scheduler.
Here is an example of using Hangfire to run a daily task:
RecurringJob.AddOrUpdate(() => Console.Write("Easy!"), Cron.Daily);
Oozie 4.2 provides documentation for killing coordinator actions, but I can't work out the exact values to pass for rangeType and scope.
Could anyone elaborate, or provide a concrete example?
public List<CoordinatorAction> kill(String jobId,
String rangeType,
String scope)
throws OozieClientException
You can refer to the source code of OozieClient, where this API is also used, to see the possible values. In particular, see the implementation of the following method:
private void jobCommand(CommandLine commandLine) throws IOException, OozieCLIException {
}
The same API is used by the Oozie command-line tool, which can be referred to here.
rangeType: possible values are 'date' or 'action'.
scope: a comma-separated list of ranges matching the rangeType; action numbers or ranges (e.g. 1,3-4,7-40) when rangeType is 'action', or UTC date ranges (e.g. 2009-01-01T01:00Z::2009-05-31T23:59Z) when rangeType is 'date'.
$ oozie job -kill <coord_job_id> [-action 1,3-4,7-40] [-date 2009-01-01T01:00Z::2009-05-31T23:59Z,2009-11-10T01:00Z,2009-12-31T22:00Z]
Either -action or -date should be given. If neither -action nor -date is given, an exception will be thrown; likewise, if BOTH -action and -date are given, an error will be thrown. Multiple ranges can be used in -action or -date; see the example above. If one of the actions in the given -action list is already in a terminal state, the output of this command will only include the other actions. The dates specified in -date must be UTC. A single date specified in -date must match the nominal time of an action to have any effect. After the command is executed, the killed coordinator actions will have KILLED status.
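To make the rangeType/scope pairing concrete, here is a small plain-Java check (purely illustrative; this is not part of the Oozie client API):

```java
// Illustrative validation of the scope formats described above.
public class KillScopeExamples {

    // "action" scope: comma-separated action numbers or ranges, e.g. "1,3-4,7-40"
    static boolean isActionScope(String scope) {
        return scope.matches("\\d+(-\\d+)?(,\\s*\\d+(-\\d+)?)*");
    }

    public static void main(String[] args) {
        System.out.println(isActionScope("1, 3-4, 7-40"));   // true: valid action ranges
        System.out.println(isActionScope("2009-01-01T01:00Z")); // false: this is a date, not an action range
    }
}
```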
I'm trying to build some tests for my service objects.
My service file is as follows...
class ExampleService
  def initialize(location)
    @location = coordinates(location)
  end

  private

  def coordinates(location)
    Address.locate(location)
  end
end
I want to test that the private methods are called by the public methods. This is my code...
subject { ExampleService.new("London") }
it "receives location" do
expect(subject).to receive(:coordinates)
subject
end
But I get this error...
expected: 1 time with any arguments
received: 0 times with any arguments
How to test service object methods are called?
Short answer: don't test them at all.
Long answer: after seeing Sandi Metz's advice on testing, you will agree, and you will want to test the way she does.
This is the basic idea:
The public methods of your class (the public API), must be tested
The private methods don't need be tested
Summary of tests to do:
The incoming query methods, test the result
The incoming command methods, test the direct public side effects
The outgoing command methods, expect to send
Ignore: send to self, command to self, and queries to others
Taken from the slides of that talk.
In your first example, your subject has already been instantiated (by being passed to expect, invoking coordinates in the process) by the time you set the expectation on it, so there is no way for the expectation to receive :coordinates to succeed. Also, as an aside, subject is memoized, so there won't be an additional instantiation in the line that follows.
If you want to make sure your initialization calls a particular method, you could use the following:
describe do
  subject { ExampleService.new("London") }

  it "receives coordinates" do
    expect_any_instance_of(ExampleService).to receive(:coordinates)
    subject
  end
end
See also Rails / RSpec: How to test #initialize method?
I have a really long integration test that simulates a sequential process involving many different interactions with a couple of Java servlets. The servlets' behavior depends on the values of the parameters being posted in the request, so I wanted to test every permutation to make sure my servlets are behaving as expected.
Currently, my integration test is in one long function called "testServletFunctionality()" that goes something like this:
//Configure a mock request
//Post request to servlet X
//Check database for expected changes
//Re-configure mock request
//Re-post request to servlet X
//Check database for expected changes
//Re-configure mock request
//Post request to servlet Y
//Check database for expected changes
...
and each configure/post/check step has about 20 lines of code, so the function is very long.
What is the proper way to break up or organize a long, sequential, repetitive integration test like this?
The main problem with integration tests (ITs) is usually that the setup is very expensive. Tests usually should not depend on each other or the order in which they are executed, but for ITs, test #2 will always fail if you don't run test #1 (login).
Sad.
The solution is to treat these tests like production code: split long methods into several smaller ones, and build helper objects that perform common operations, so you can do this in your test:
@Test
public void someComplexTest() throws Exception {
    new LoginHelper().loginAsAdmin();
    ....
}
or move this code into a base test class:
@Test
public void someComplexTest() throws Exception {
    loginHelper().loginAsAdmin();
    ....
}
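Stripped of the servlet specifics, the helper-object idea can be sketched in plain Java like this (all class and method names here are invented for illustration):

```java
// Illustrative helper objects for a long sequential integration test.
// Each helper wraps one repetitive configure/post/check step.
public class IntegrationTestSketch {

    // Wraps the ~20 lines of "configure a mock request and post it"
    static class RequestHelper {
        String postTo(String servlet, String param) {
            // ... build the mock request, post it, return the response ...
            return servlet + ":" + param;
        }
    }

    // Wraps the ~20 lines of "check database for expected changes"
    static class DbAssertHelper {
        boolean rowExists(String key) {
            // ... query the test database for the expected row ...
            return key != null && !key.isEmpty();
        }
    }

    // The long test then reads as a short sequence of named steps
    public static boolean scenario() {
        RequestHelper http = new RequestHelper();
        DbAssertHelper db = new DbAssertHelper();

        String r1 = http.postTo("servletX", "case1");
        if (!db.rowExists(r1)) return false;

        String r2 = http.postTo("servletY", "case2");
        return db.rowExists(r2);
    }
}
```

Each 20-line configure/post/check block collapses into one named call, so the test body reads as the sequence of business steps rather than mechanical setup.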