Submit a task to an ExecutorService using a ScheduledExecutorService - JavaFX

I'm developing a JavaFX application that reads data from a serial device and shows a notification when a new device is connected to the computer.
I have a task, DeviceDetectorTask, which scans all the ports and fires an event when a new device is connected. This task must be submitted every 3 seconds.
When a device is detected, the user can press a button to read all the data it contains. This is performed by another task, ReadDeviceTask. While ReadDeviceTask is running, scan operations must not be performed (I cannot read and scan a port at the same time), so only one of the two tasks may run at a time.
My current solution is:
public class DeviceTaskQueue {

    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void submit(Runnable task) {
        executorService.submit(task);
    }
}
public class ScanScheduler {

    private ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        AddScanTask task = new AddScanTask();
        executor.scheduleAtFixedRate(task, 0, 3, TimeUnit.SECONDS);
    }
}
public class AddScanTask implements Runnable {

    @Autowired
    DeviceTaskQueue deviceTaskQueue;

    @Override
    public void run() {
        deviceTaskQueue.submit(new DeviceDetectorTask());
    }
}
public class ViewController {

    @Autowired
    DeviceTaskQueue deviceTaskQueue;

    @FXML
    private void readDataFromDevice() {
        deviceTaskQueue.submit(new ReadDeviceTask());
    }
}
My question is: is it ok to add a task to the ExecutorService from the task AddScanTask which has been scheduled by the ScheduledExecutorService?

Yes, an Executor May Post a Task to Another Executor
To answer the simple question in your last line:
is it ok to add a task to the ExecutorService from the task AddScanTask which has been scheduled by the ScheduledExecutorService?
Yes. Certainly you can submit a Callable/Runnable from any other code. That the submitting code happens to be running from another executor is irrelevant; code run from an executor is still "normal" Java code, just running on a different thread.
That is the whole point of the executor: to handle the juggling of threads in a manner convenient to you, the programmer. Making multi-threaded coding easier and less error-prone is why these classes were added to Java. See the extremely helpful book Java Concurrency in Practice by Brian Goetz et al., and see other writings by Goetz.
In your case you have two executors, each with its own thread, each executing a series of submitted tasks. One has tasks submitted automatically (timed) while the other has tasks submitted manually (arbitrarily). Each executes on its own thread, independent of the other. With multiple cores they may execute simultaneously.
Therein lies the bigger problem: In your scenario you don't want them to be independent. You want the reading tasks to block the scanning tasks.
Bigger Problem
The problem you present is that a regularly occurring activity (scanning) must halt when an arbitrary event (reading) happens. That means the two activities must coordinate with one another. The question is how to coordinate.
Semaphores
When the arbitrary event is happening, it should raise a flag. The recurring activity, when it runs, should always check for that flag; if it is raised, wait until it is lowered before proceeding with the scan. The ScheduledExecutorService is designed for this and tolerates a task that runs longer than the scheduled period: if one execution runs long, the executor delays the next execution rather than launching overlapping runs, so it does not pile up a backlog of executions. That is just the behavior you want.
Vice versa, if the recurring activity is executing, it should raise a flag. The arbitrary event's first to-do item is to check for that flag; if raised, wait until lowered, then proceed, first raising its own flag and then carrying out the task at hand (reading).
Perhaps your scenario should be designed with a single flag rather than the scanner and reader each having their own. I would have to think about it more, and probably know more about your scenario.
The technical term for such flags is semaphore.
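If both pieces of code could be changed, a minimal sketch of that coordination would use a single java.util.concurrent.Semaphore as the flag (the class and method names here are illustrative, not from your code):

import java.util.concurrent.Semaphore;

public class PortCoordinator {

    // One permit: whichever activity holds it may touch the ports.
    private final Semaphore portLock = new Semaphore(1);

    // Called by the recurring DeviceDetectorTask.
    public void scan() throws InterruptedException {
        portLock.acquire();   // blocks while a read is in progress
        try {
            // ... scan all ports and fire events for new devices ...
        } finally {
            portLock.release();
        }
    }

    // Called by ReadDeviceTask when the user presses the button.
    public void read() throws InterruptedException {
        portLock.acquire();   // blocks while a scan is in progress
        try {
            // ... read all data contained in the device ...
        } finally {
            portLock.release();
        }
    }
}

Because acquire() blocks inside the scheduled task, a long read simply delays the next scan rather than letting the two overlap.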
Unfortunately, your comment says you cannot alter the scanner's source code, so you cannot implement the semaphores and coordinate the activities. So I am stuck; I cannot see a solution.
Hack
Given your frozen code, one hack solution, which I do not recommend, is that the regularly occurring activity (the scanning) does not actually do the work but instead posts a scanning task to another executor. That other executor would also be the one used to post the arbitrary activity (the reading). So there is one single queue of to-do items, a mix of scanning and reading jobs, submitted to a single-threaded executor. The single thread means they get done one at a time, in order of submission.
I do not like this hack because if any of the to-do items takes a long while, you will begin to accumulate a backlog. That could be a mess.
By the way, no need for the DeviceTaskQueue in your example code. Just call the instance of the ExecutorService directly to submit a task. That is the job of an ExecutorService, and wrapping it adds no value that I can see.
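For example, a sketch of calling the executor directly, reusing the task classes from your question:

ExecutorService deviceExecutor = Executors.newSingleThreadExecutor();

// from AddScanTask, every 3 seconds:
deviceExecutor.submit(new DeviceDetectorTask());

// from the button handler:
deviceExecutor.submit(new ReadDeviceTask());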

Related

KafkaMessageListenerContainer how to do nack for specific error

I am using KafkaMessageListenerContainer with (KafkaAdapter).
How can I "nack" offsets in case of specific error, so the next poll() will take them again?
properties.setAckMode(ContainerProperties.AckMode.BATCH);

final KafkaMessageListenerContainer<String, String> kafkaContainer =
        new KafkaMessageListenerContainer<>(consumerFactory, properties);

kafkaContainer.setCommonErrorHandler(new CommonErrorHandler() {
    @Override
    public void handleBatch(Exception thrownException, ConsumerRecords<?, ?> data,
            Consumer<?, ?> consumer, MessageListenerContainer container,
            Runnable invokeListener) {
        CommonErrorHandler.super.handleBatch(thrownException, data, consumer, container, invokeListener);
    }
});
Inside handleBatch I detect the exception; for that specific exception I would like to nack.
I tried throwing a RuntimeException from there.
I am using Spring Boot 2.7.
Use the DefaultErrorHandler - it does exactly that (the whole batch is retried according to the back off). You can classify which exceptions are retryable or not.
If you throw a BatchListenerFailedException you can specify exactly which record in the batch had the failure and only retry it (and the following records).
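For illustration, a minimal sketch of wiring this up (the back-off values and the choice of non-retryable exception are assumptions, not from your code; the classes live in org.springframework.kafka.listener and org.springframework.util.backoff):

@Bean
public DefaultErrorHandler errorHandler() {
    // retry the failed batch up to 3 times, 1 second apart
    DefaultErrorHandler handler = new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
    // exceptions of this type are not retried at all
    handler.addNotRetryableExceptions(IllegalArgumentException.class);
    return handler;
}

And inside the batch listener, to point at the exact record that failed (failedIndex is a hypothetical variable holding the record's position in the batch):

throw new BatchListenerFailedException("record could not be processed", failedIndex);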
EDIT
If any other type of exception is thrown, the DefaultErrorHandler falls back to a FallbackBatchErrorHandler, which calls ErrorHandlingUtils.retryBatch(); this pauses the consumer and redelivers the whole batch without seeking and re-polling (the polls within the loop return no records because the consumer is paused).
See the documentation: https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
This is required, because there is no guarantee that the batch will be fetched in the same order after a seek.
This is because we need to know the state of the batch (how many times we have retried). We can't do that if the batch keeps changing; hence the algorithm I described above.
To retry indefinitely you can, for example, use a FixedBackOff with Long.MAX_VALUE in the maxAttempts property. Or use an ExponentialBackOff with no termination.
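For example (the interval here is an assumption):

// retry indefinitely, 2 seconds between attempts
new DefaultErrorHandler(new FixedBackOff(2000L, Long.MAX_VALUE));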
Just be sure that the largest back off (and time to process a batch) is significantly less than max.poll.interval.ms to avoid a rebalance.

How to make a command wait until all events triggered against it have completed successfully

I have come across a requirement where I want Axon to wait until all events fired into the event bus against a particular command have finished executing. I will briefly describe the scenario:
I have a RestController which fires the below command to create an application entity:
@RestController
class myController {

    @Autowired
    private CommandGateway commandGateway; // org.axonframework.commandhandling.gateway.CommandGateway

    @PostMapping("/create")
    @ResponseBody
    public String create() {
        commandGateway.sendAndWait(new CreateApplicationCommand());
        System.out.println("in myController:: after sending CreateApplicationCommand");
        return "created"; // placeholder response body
    }
}
This command is handled in the Aggregate. The Aggregate class is annotated with org.axonframework.spring.stereotype.Aggregate:
@Aggregate
class MyAggregate {

    @CommandHandler // org.axonframework.commandhandling.CommandHandler
    private MyAggregate(CreateApplicationCommand command) {
        org.axonframework.modelling.command.AggregateLifecycle.apply(new AppCreatedEvent());
        System.out.println("in MyAggregate:: after firing AppCreatedEvent");
    }

    @EventSourcingHandler // org.axonframework.eventsourcing.EventSourcingHandler
    private void on(AppCreatedEvent appCreatedEvent) {
        // Updates the state of the aggregate
        this.id = appCreatedEvent.getId();
        this.name = appCreatedEvent.getName();
        System.out.println("in MyAggregate:: after updating state");
    }
}
The AppCreatedEvent is handled in 2 places:
In the Aggregate itself, as we can see above.
In the projection class as below:
@EventHandler // org.axonframework.eventhandling.EventHandler
void on(AppCreatedEvent appCreatedEvent) {
    // persists into database
    System.out.println("in Projection:: after saving into database");
}
The problem here is that after the event is handled at the first place (i.e., inside the aggregate), the call returns to myController.
That is, the output here is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in myController:: after sending CreateApplicationCommand
in Projection:: after saving into database
The output which I want is:
in MyAggregate:: after firing AppCreatedEvent
in MyAggregate:: after updating state
in Projection:: after saving into database
in myController:: after sending CreateApplicationCommand
In simple words, I want Axon to wait until all events triggered against a particular command have executed completely, and only then return to the class which dispatched the command.
After searching on the forum I learned that all sendAndWait does is wait until the handling of the command and the publication of the events is finalized. I then tried the Reactor extension as well, using the call below, but got the same results: org.axonframework.extensions.reactor.commandhandling.gateway.ReactorCommandGateway.send(new CreateApplicationCommand()).block();
Can someone please help me out?
Thanks in advance.
What would be best in your situation, @rohit, is to embrace the fact that you are using an eventually consistent solution here. Thus, Command Handling is entirely separate from Event Handling, making the Query Models you create eventually consistent with the Command Model (your aggregates). Therefore, you wouldn't necessarily wait for the events exactly, but react when the Query Model is present.
Embracing this comes down to building your application such that "yeah, I know my response might not be up to date now, but it might be somewhere in the near future." It is thus recommended to subscribe to the result you are interested in after or before the fact you have dispatched a command.
For example, you could see this as using WebSockets with the STOMP protocol, or you could tap into Project Reactor and use the Flux result type to receive the results as they go.
From your description, I assume you or your business have decided that the UI component should react in the (old-fashioned) synchronous way. There's nothing wrong with that, but it will bite your *ss when it comes to using something inherently eventually consistent like CQRS. You can, however, spoof the fact you are synchronous in your front-end, if you will.
To achieve this, I would recommend using Axon's Subscription Query to subscribe to the query model you know will be updated by the command you will send.
In pseudo-code, that would look a little bit like this:
public Result mySynchronousCall(String identifier) {
    // Subscribe to the updates to come
    SubscriptionQueryResult<Result, Result> result = queryGateway.subscriptionQuery(...);
    // Issue the command to update
    commandGateway.send(...);
    // Wait on the Flux for the first update, then close the subscription
    return result.updates()
                 .next()
                 .map(...)
                 .timeout(...)
                 .doFinally(it -> result.close())
                 .block();
}
You could see this being done in this sample WebFluxRest class, by the way.
Note that you are essentially closing the door to the front-end to tap into the asynchronous goodness by doing this. It'll work and allow you to wait for the result to be there as soon as it is there, but you'll lose some flexibility.

How to pause a specific Kafka consumer thread when concurrency is set to more than 1?

I am using spring-kafka 2.2.8, setting concurrency to 2 as shown below, and trying to understand how I can pause a consumer thread/instance when a particular condition is met.

@KafkaListener(id = "myConsumerId", topics = "myTopic", concurrency = "2")
public void listen(String in) {
    System.out.println(in);
}
Now, I have two questions.
Would my consumer spawn two different polling threads to poll the records?
If I set an id on the consumer as shown above, how can I pause a specific consumer thread (with concurrency set to more than 1)?
Please suggest.
Use the KafkaListenerEndpointRegistry.getListenerContainer(id) method to get a reference to the container.
Cast it to a ConcurrentMessageListenerContainer and call getContainers() to get a list of the child KafkaMessageListenerContainers; you can then pause/resume them individually.
You can determine which topics/partitions each one has using getAssignedPartitions().
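Putting the three steps together, a sketch (the listener id and topic come from your example; which partition identifies the "specific" consumer is an assumption):

import java.util.Collection;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.MessageListenerContainer;

@Autowired
private KafkaListenerEndpointRegistry registry;

public void pauseChildOwning(TopicPartition partition) {
    ConcurrentMessageListenerContainer<?, ?> container =
            (ConcurrentMessageListenerContainer<?, ?>) registry.getListenerContainer("myConsumerId");
    for (MessageListenerContainer child : container.getContainers()) {
        Collection<TopicPartition> assigned = child.getAssignedPartitions();
        if (assigned != null && assigned.contains(partition)) {
            child.pause();   // resume later with child.resume()
        }
    }
}

// e.g. pauseChildOwning(new TopicPartition("myTopic", 0));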

Is code run from Platform.runLater thread safe?

If I have code run entirely from within Platform.runLater, is that code automatically thread-safe?
My understanding is that code passed to Platform.runLater runs on the JavaFX Application Thread, of which there is only one.
For example, if I manipulate a hash map entirely in Platform.runLater, I don't have to worry about multiple threads, right?
Whether or not using Platform#runLater(Runnable) is thread-safe depends entirely on how you use it. The example you give is a Map visible from a background thread but only ever manipulated on the JavaFX Application Thread via runLater. Maybe something like:
// executing on background thread
Object newKey = ...;
Object newVal = ...;
Platform.runLater(() -> map.put(newKey, newVal));
This makes the Map "thread-safe" only from the point-of-view of the JavaFX Application Thread. If the background thread later attempts to read the Map (e.g. map.get(newKey)) there is no guarantee said thread will see the new entry. In other words, it may read null because the entry "doesn't exist" or it may read an old value if one was already present. You could of course fix this by reading on the JavaFX Application Thread as well:
Object val = CompletableFuture.supplyAsync(() -> map.get(key), Platform::runLater).join();
Or even by waiting for the JavaFX Application Thread to finish writing to the Map:
CountDownLatch latch = new CountDownLatch(1);
Platform.runLater(() -> {
// write to map
latch.countDown();
});
latch.await();
// writes to map will be visible to background thread from here
That said, actions by the background thread that happened before the call to runLater will be visible to the JavaFX Application Thread. In other words, a happens-before relationship is created. When scheduling the Runnable to execute on the JavaFX Application Thread eventually some inter-thread communication must occur, which in turn requires synchronization in order to be thread-safe. Looking at the Windows implementation of JavaFX I can't say for certain what this synchronization looks like, however, because it appears to invoke a native method.
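For example, a sketch of that one-way guarantee (the label node is assumed to exist; the Map is deliberately a plain HashMap):

Map<String, String> map = new HashMap<>();

// on a background thread:
map.put("status", "loaded");                 // (1) plain write, before the hand-off
Platform.runLater(() ->
        label.setText(map.get("status")));   // (2) guaranteed to see "loaded"

The write in (1) happens-before the Runnable body in (2), so this hand-off needs no extra synchronization; it is only reads going back to the background thread that need the techniques shown above.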

Background URLSession on watchOS - what is the cycle?

I have a class with the delegates for a URLSession. I intend to use it with a background configuration. I understand that the handlers are called when a certain event happens, such as didFinishDownloadingTo.
However, I do have the handle function on my ExtensionDelegate class:
func handle(_ handleBackgroundTasks: Set<WKRefreshBackgroundTask>) {
    // Sent when the system needs to launch the application in the background
    // to process tasks. Tasks arrive in a set, so loop through and process each one.
    for task in handleBackgroundTasks {
        switch task {
        case let urlSessionTask as WKURLSessionRefreshBackgroundTask:
I wonder: where should I handle the data I receive after a download? In didFinishDownloadingTo, or in that handle function on my ExtensionDelegate class, in the appropriate case of the switch statement?
Another question on the same cycle: I read everywhere that one must remember to call setTaskCompleted() after going through the background tasks, but I have also read that one should not set a task as completed if the scheduled data transfer hasn't finished. How do I check that?
There is a very good explanation here.
It worked when I kept an array of my WKURLSessionRefreshBackgroundTask objects. Then, at the end of didFinishDownloadingTo, I get the task in that array whose sessionIdentifier matches the current session.configuration.identifier and set it as completed.
