In my singleton EJB I start a TimerService every 2 minutes. When a client accesses the test method,
the application sometimes runs into a deadlock. The problem is that the test method calls an asynchronous method inside the EJB (see the method determineABC). The deadlock happens when scheduleMethod tries to create a single-action timer and therefore tries to acquire a lock (because the timer callback method is annotated with @Lock(LockType.WRITE)). At the same time we are already in the determineABC method, which tries to invoke the asynchronous method asynchMethod. Maybe the call to ejb.asynchMethod(...) also tries to acquire a lock. In any case, I run into a deadlock here, because the asynchronous method is never invoked. So what is the problem?
Here is a source code snippet:
@Singleton
@Startup
@TransactionManagement(TransactionManagementType.CONTAINER)
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class XEJB implements XEJBLocal {

    @javax.annotation.Resource(name = "x/XEJB/TimeService")
    private TimerService timerService;

    @javax.annotation.Resource
    private SessionContext ctx;

    @Schedule(minute = "*/2", hour = "*", persistent = false)
    @Lock(LockType.READ)
    private void scheduleMethod() {
        // Create Single Action Timer
        timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
    }

    @Timeout
    @Lock(LockType.WRITE)
    private void timer(Timer timer) {
        // Do something
    }

    @Override
    @Lock(LockType.READ)
    public B test(...) {
        return determineABC(...);
    }

    @Lock(LockType.READ)
    private B determineABC(...) {
        XEJBLocal ejb = (XEJBLocal) ctx.getBusinessObject(ctx.getInvokedBusinessInterface());
        Future<ArrayList> result = null;
        result = ejb.asynchMethod(...);
        result.get(4, TimeUnit.MINUTES); // Sometimes runs into a DEADLOCK
        ...
    }

    @Asynchronous
    @Override
    @Lock(LockType.READ)
    public Future<ArrayList> asynchMethod(...) {
        ...
        return new AsyncResult<ArrayList>(abcList);
    }
The deadlock also happens when I only use the @Schedule method and no TimerService...
The deadlock also happens when I do not use a Future object but void as the return type of the asynchronous method.
When the timeout exception is thrown, the deadlock is resolved. When I annotate the timer method with @AccessTimeout(2000) and that time is up, the asynchronous method is called and the deadlock is also resolved.
When I use LockType.READ for the timer method, no deadlock happens. But why? What does the asynchronous method call actually try to acquire?
READ locks have to wait for WRITE locks to finish before they start their work. When timer() is working, all your other invocations, even those to READ methods, are going to wait. Are you sure the timeout happens in result.get(4, TimeUnit.MINUTES)?
I think you may be getting access timeouts in the test() invocation, well before reaching result.get(4, TimeUnit.MINUTES).
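For illustration, here is a minimal, self-contained sketch (the bean and method names are invented, not taken from your code) of the rule described above: while a WRITE-locked method is running, every other container-managed call on the same singleton, including READ and @Asynchronous methods, queues up on the lock until its access timeout expires.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class LockDemoBean {

    @Lock(LockType.WRITE)
    public void slowWriteMethod() throws InterruptedException {
        Thread.sleep(10000); // holds the exclusive lock for 10 seconds
    }

    @Asynchronous
    @Lock(LockType.READ)
    @AccessTimeout(value = 2, unit = TimeUnit.SECONDS) // fail fast instead of hanging
    public Future<String> readMethod() {
        // If slowWriteMethod() is currently running, this invocation waits for the lock
        // and after 2 seconds fails with ConcurrentAccessTimeoutException instead of deadlocking.
        return new AsyncResult<>("done");
    }
}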
I want to switch from firing CDI events synchronously to asynchronously so that I can process work in parallel.
event.fire(myObject) -> event.fireAsync(myObject)
As I currently use the request context to know which tenant the current process is about, I am confronted with the problem that the @RequestScoped context is lost in an @ObservesAsync method. Therefore I no longer know which DB to persist to, etc. I could provide the necessary information in the CDI event object and recreate the request context manually after receiving it, but this would bloat my object and clutter my code.
Is there a way to simply keep the request context for an async CDI event?
Request-scoped objects are not required to be thread-safe and usually are not. For that reason, the request context is never automatically propagated across threads. For asynchronous events, you should indeed put all the necessary data into the event object.
You are of course not the first person to ask about this. There have been attempts to define an API/SPI for context propagation (MicroProfile Context Propagation, Jakarta Concurrency), including the CDI request context, but they only work correctly in the case of sequential processing with thread jumps (common in non-blocking/reactive programming). If you try to [ab]use context propagation for concurrent processing, you are signing up for trouble. For the latest discussion about this, see https://github.com/jakartaee/cdi/issues/474
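As a minimal illustration of carrying the data in the event object (the payload class, field and method names below are hypothetical, not from your code):

import javax.enterprise.event.Event;
import javax.inject.Inject;

public class TenantAwarePublisher {

    // Invented payload type: the tenant id travels with the data instead of living in the request context.
    public static class TenantScopedEvent {
        public final String tenantId;
        public final Object payload;

        public TenantScopedEvent(String tenantId, Object payload) {
            this.tenantId = tenantId;
            this.payload = payload;
        }
    }

    @Inject
    private Event<TenantScopedEvent> event;

    public void publish(String tenantId, Object payload) {
        // The @ObservesAsync observer reads tenantId from the event instead of the (lost) request context.
        event.fireAsync(new TenantScopedEvent(tenantId, payload));
    }
}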
I actually switched to using interfaces. This gives me more control and makes the code more understandable:
abstract class Publisher<T> {

    @All
    @Inject
    private List<EventConsumer<T>> eventConsumers;

    @Inject
    private ContextInfo contextInfo;

    @Inject
    private MutableContextInfo mutableContextInfo;

    ...

    public void publishEvent(T event) {
        String myContextInfo = contextInfo.getMyContextInfo();
        eventConsumers.forEach(consumer -> notifyAsync(consumer, event, myContextInfo));
    }

    private void notifyAsync(EventConsumer<T> consumer, T object, String myContextInfo) {
        Uni.createFrom()
            .voidItem()
            .subscribeAsCompletionStage()
            .thenAccept(voidItem -> notifyConsumer(consumer, object, myContextInfo));
    }

    /**
     * Method needs to be public to be able to activate the request context on self-invocation.
     */
    @ActivateRequestContext
    public void notifyConsumer(EventConsumer<T> consumer, T object, String myContextInfo) {
        mutableContextInfo.setMyContextInfo(myContextInfo);
        try {
            consumer.onEvent(object);
        } catch (RuntimeException ex) {
            log.error("Error while promoting object to eventconsumer", ex);
        }
    }
}
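For completeness, a hypothetical consumer showing how the Publisher above would be used. EventConsumer<T> is assumed to declare a single onEvent(T) method (which is what the consumer.onEvent(object) call implies), and OrderCreated is a made-up payload type:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class OrderCreatedConsumer implements EventConsumer<OrderCreated> {

    @Inject
    ContextInfo contextInfo; // populated again, because notifyConsumer() restored it before calling us

    @Override
    public void onEvent(OrderCreated event) {
        // Tenant-specific work goes here, e.g. pick the datasource via contextInfo.getMyContextInfo().
    }
}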
I could not find a definitive answer to whether it is safe to spawn threads within session-scoped JSF managed beans. The thread needs to call methods on the stateless EJB instance (that was dependency-injected to the managed bean).
The background is that we have a report that takes a long time to generate. This caused the HTTP request to time out due to server settings we can't change. So the idea is to start a new thread, let it generate the report, and store the result temporarily. In the meantime the JSF page shows a progress bar, polls the managed bean until the generation is complete and then makes a second request to download the stored report. This seems to work, but I would like to be sure that what I'm doing is not a hack.
Check out EJB 3.1 @Asynchronous methods. This is exactly what they are for.
Small example that uses OpenEJB 4.0.0-SNAPSHOTs. Here we have a @Singleton bean with one method marked @Asynchronous. Every time that method is invoked by anyone, in this case your JSF managed bean, it will immediately return regardless of how long the method actually takes.
import static java.util.concurrent.TimeUnit.SECONDS;
import static javax.ejb.LockType.READ;

import java.util.concurrent.Future;

import javax.ejb.AccessTimeout;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Lock;
import javax.ejb.Singleton;

@Singleton
public class JobProcessor {

    @Asynchronous
    @Lock(READ)
    @AccessTimeout(-1)
    public Future<String> addJob(String jobName) {
        // Pretend this job takes a while
        doSomeHeavyLifting();

        // Return our result
        return new AsyncResult<String>(jobName);
    }

    private void doSomeHeavyLifting() {
        try {
            Thread.sleep(SECONDS.toMillis(10));
        } catch (InterruptedException e) {
            Thread.interrupted();
            throw new IllegalStateException(e);
        }
    }
}
Here's a little testcase that invokes that @Asynchronous method several times in a row.
Each invocation returns a Future object that essentially starts out empty and will later have its value filled in by the container when the related method call actually completes.
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;

import junit.framework.TestCase;

public class JobProcessorTest extends TestCase {

    public void test() throws Exception {
        final Context context = EJBContainer.createEJBContainer().getContext();

        final JobProcessor processor = (JobProcessor) context.lookup("java:global/async-methods/JobProcessor");

        final long start = System.nanoTime();

        // Queue up a bunch of work
        final Future<String> red = processor.addJob("red");
        final Future<String> orange = processor.addJob("orange");
        final Future<String> yellow = processor.addJob("yellow");
        final Future<String> green = processor.addJob("green");
        final Future<String> blue = processor.addJob("blue");
        final Future<String> violet = processor.addJob("violet");

        // Wait for the results -- 1 minute worth of work
        assertEquals("blue", blue.get());
        assertEquals("orange", orange.get());
        assertEquals("green", green.get());
        assertEquals("red", red.get());
        assertEquals("yellow", yellow.get());
        assertEquals("violet", violet.get());

        // How long did it take?
        final long total = TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - start);

        // Execution should take around 9 - 21 seconds
        assertTrue("" + total, total > 9);
        assertTrue("" + total, total < 21);
    }
}
Example source code
Under the covers what makes this work is:
The JobProcessor the caller sees is not actually an instance of JobProcessor. Rather it's a subclass or proxy that has all the methods overridden. Methods that are supposed to be asynchronous are handled differently.
Calls to an asynchronous method simply result in a Runnable being created that wraps the method and parameters you gave. This runnable is given to an Executor which is simply a work queue attached to a thread pool.
After adding the work to the queue, the proxied version of the method returns an implementation of Future that is linked to the Runnable which is now waiting on the queue.
When the Runnable finally executes the method on the real JobProcessor instance, it will take the return value and set it into the Future making it available to the caller.
Important to note that the AsyncResult object the JobProcessor returns is not the same Future object the caller is holding. It would have been neat if the real JobProcessor could just return String and the caller's version of JobProcessor could return Future<String>, but we didn't see any way to do that without adding more complexity. So the AsyncResult is a simple wrapper object. The container will pull the String out, throw the AsyncResult away, then put the String in the real Future that the caller is holding.
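As a rough, plain-JDK analogy of the mechanism described above (this is illustrative code, not the container's actual implementation): the proxy boils down to wrapping the real call in a task, handing it to a pooled executor, and giving the caller the resulting Future.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncAnalogy {

    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    // Stand-in for the real, synchronous JobProcessor work.
    static String reallyAddJob(String jobName) throws InterruptedException {
        Thread.sleep(1000);
        return jobName;
    }

    // Stand-in for the proxied, asynchronous view of the same method.
    static Future<String> addJobAsync(String jobName) {
        return POOL.submit((Callable<String>) () -> reallyAddJob(jobName));
    }

    public static void main(String[] args) throws Exception {
        Future<String> red = addJobAsync("red"); // returns immediately
        System.out.println(red.get());           // blocks until the pool finishes the task
        POOL.shutdown();
    }
}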
To get progress along the way, simply pass a thread-safe object like AtomicInteger to the @Asynchronous method and have the bean code periodically update it with the percent complete.
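For example, a minimal sketch of that progress pattern (the class and method names are invented, not part of the original example):

import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Singleton;

@Singleton
public class ProgressAwareJobProcessor {

    @Asynchronous
    public Future<String> addJob(String jobName, AtomicInteger percentComplete) {
        for (int step = 1; step <= 10; step++) {
            doPartOfTheHeavyLifting();      // one tenth of the work
            percentComplete.set(step * 10); // visible to the caller right away
        }
        return new AsyncResult<String>(jobName);
    }

    private void doPartOfTheHeavyLifting() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}

The caller keeps a reference to the same AtomicInteger and reads it while the job is running.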
Introduction
Spawning threads from within a session-scoped managed bean is not necessarily a hack as long as it does the job you want. But spawning threads on its own needs to be done with extreme care. The code should not be written in such a way that a single user can, for example, spawn an unlimited number of threads per session, or that threads continue running even after the session is destroyed. It would blow up your application sooner or later.
The code needs to be written in such a way that you can ensure that a user can, for example, never spawn more than one background thread per session, and that the thread is guaranteed to be interrupted whenever the session is destroyed. For multiple tasks within a session you need to queue the tasks.
Also, all those threads should preferably be served by a common thread pool so that you can put a limit on the total number of spawned threads at the application level.
Managing threads is thus a very delicate task. That's why you'd better use the built-in facilities rather than homegrowing your own with new Thread() and friends. The average Java EE application server offers a container-managed thread pool which you can utilize via, among others, EJB's @Asynchronous and @Schedule. To be container independent (read: Tomcat-friendly), you can also use Java's java.util.concurrent ExecutorService and ScheduledExecutorService for this.
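For the Tomcat-only route mentioned above, a minimal sketch (the class name and attribute key are invented) of a single application-wide pool managed by a ServletContextListener could look like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class AppExecutorProvider implements ServletContextListener {

    private ExecutorService executor;

    @Override
    public void contextInitialized(ServletContextEvent event) {
        executor = Executors.newFixedThreadPool(10); // hard limit on background threads
        event.getServletContext().setAttribute("appExecutor", executor);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        executor.shutdownNow(); // interrupt leftover tasks when the application is undeployed
    }
}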
Below examples assume Java EE 6+ with EJB.
Fire and forget a task on form submit
@Named
@RequestScoped // Or @ViewScoped
public class Bean {

    @EJB
    private SomeService someService;

    public void submit() {
        someService.asyncTask();
        // ... (this code will immediately continue without waiting)
    }
}

@Stateless
public class SomeService {

    @Asynchronous
    public void asyncTask() {
        // ...
    }
}
Asynchronously fetch the model on page load
@Named
@RequestScoped // Or @ViewScoped
public class Bean {

    private Future<List<Entity>> asyncEntities;

    @EJB
    private EntityService entityService;

    @PostConstruct
    public void init() {
        asyncEntities = entityService.asyncList();
        // ... (this code will immediately continue without waiting)
    }

    public List<Entity> getEntities() {
        try {
            return asyncEntities.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new FacesException(e);
        } catch (ExecutionException e) {
            throw new FacesException(e);
        }
    }
}

@Stateless
public class EntityService {

    @PersistenceContext
    private EntityManager entityManager;

    @Asynchronous
    public Future<List<Entity>> asyncList() {
        List<Entity> entities = entityManager
            .createQuery("SELECT e FROM Entity e", Entity.class)
            .getResultList();
        return new AsyncResult<>(entities);
    }
}
In case you're using JSF utility library OmniFaces, this could be done even faster if you annotate the managed bean with @Eager.
Schedule background jobs on application start
@Singleton
public class BackgroundJobManager {

    @Schedule(hour="0", minute="0", second="0", persistent=false)
    public void someDailyJob() {
        // ... (runs every start of day)
    }

    @Schedule(hour="*/1", minute="0", second="0", persistent=false)
    public void someHourlyJob() {
        // ... (runs every hour of day)
    }

    @Schedule(hour="*", minute="*/15", second="0", persistent=false)
    public void someQuarterlyJob() {
        // ... (runs every 15th minute of hour)
    }

    @Schedule(hour="*", minute="*", second="*/30", persistent=false)
    public void someHalfminutelyJob() {
        // ... (runs every 30th second of minute)
    }
}
Continuously update application wide model in background
@Named
@RequestScoped // Or @ViewScoped
public class Bean {

    @EJB
    private SomeTop100Manager someTop100Manager;

    public List<Some> getSomeTop100() {
        return someTop100Manager.list();
    }
}

@Singleton
@ConcurrencyManagement(BEAN)
public class SomeTop100Manager {

    @PersistenceContext
    private EntityManager entityManager;

    private List<Some> top100;

    @PostConstruct
    @Schedule(hour="*", minute="*/1", second="0", persistent=false)
    public void load() {
        top100 = entityManager
            .createNamedQuery("Some.top100", Some.class)
            .getResultList();
    }

    public List<Some> list() {
        return top100;
    }
}
See also:
Spawning threads in a JSF managed bean for scheduled tasks using a timer
I tried this and it works great from my JSF managed bean:
ExecutorService executor = Executors.newFixedThreadPool(1);

@EJB
private IMaterialSvc materialSvc;

private void updateMaterial(Material material, String status, Location position) {
    executor.execute(new Runnable() {
        public void run() {
            synchronized (position) {
                // TODO update material in audit? do we need materials in audit?
                int index = position.getMaterials().indexOf(material);
                Material m = materialSvc.getById(material.getId());
                m.setStatus(status);
                m = materialSvc.update(m);
                if (index != -1) {
                    position.getMaterials().set(index, m);
                }
            }
        }
    });
}

@PreDestroy
public void destroy() {
    executor.shutdown();
}
How do I run an async task in a Kestrel process with a very long time interval (say daily or perhaps even longer)? The task needs to run in the memory space of the web server process to update some global variables that slowly go out of date.
Bad answers:
Trying to use an OS scheduler is a poor plan.
Calling await from a controller is not acceptable. The task is slow.
The delay is too long for Task.Delay() (about 16 hours or so and Task.Delay will throw).
HangFire, etc. make no sense here. It's an in-memory job that doesn't care about anything in the database. Also, we can't call the database without a user context (from a logged-in user hitting some controller) anyway.
System.Threading.Timer. It's reentrant.
Bonus:
The task is idempotent. Old runs are completely irrelevant.
It doesn't matter if a particular page render misses the change; the next one will get it soon enough.
As this is a Kestrel server we're not really worried about stopping the background task. It'll stop when the server process goes down anyway.
The task should run once immediately on startup. This should make coordination easier.
Some people are missing this. The method is async. If it wasn't async the problem wouldn't be difficult.
I am going to add an answer to this, because this is the only logical way to accomplish such a thing in ASP.NET Core: an IHostedService implementation.
This is a non-reentrant timer background service that implements IHostedService.
public sealed class MyTimedBackgroundService : IHostedService
{
    private const int TimerInterval = 5000; // change this to 24 * 60 * 60 * 1000 to fire every 24 hours

    private Timer _t;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // Requirement: "fire" timer method immediately.
        await OnTimerFiredAsync();

        // Set up a timer to be non-reentrant, fire in 5 seconds
        _t = new Timer(async _ => await OnTimerFiredAsync(),
            null, TimerInterval, Timeout.Infinite);
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _t?.Dispose();
        return Task.CompletedTask;
    }

    private async Task OnTimerFiredAsync()
    {
        try
        {
            // do your work here
            Debug.WriteLine($"{TimerInterval / 1000} second tick. Simulating heavy I/O bound work");
            await Task.Delay(2000);
        }
        finally
        {
            // set timer to fire off again
            _t?.Change(TimerInterval, Timeout.Infinite);
        }
    }
}
So, I know we discussed this in the comments, but the System.Threading.Timer callback method is considered an event handler. It is perfectly acceptable to use async void in this case, since an exception escaping the method will be raised on a thread-pool thread, just the same as if the method were synchronous. You should probably put a catch in there anyway to log any exceptions.
You brought up timers not being safe at some interval boundary. I looked high and low for that information and could not find it. I have used timers on 24-hour intervals, 2-day intervals, 2-week intervals... I have never had them fail. I have had a lot of them running in ASP.NET Core on production servers for years, too. We would have seen it happen by now.
OK, so you still don't trust System.Threading.Timer...
Let's say that, no... There is just no fricken way you are going to use a timer. OK, that's fine... Let's go another route. Let's move from IHostedService to BackgroundService (which is an implementation of IHostedService) and simply count down.
This will alleviate any fears of the timer boundary, and you don't have to worry about async void event handlers. It is also non-reentrant for free.
public sealed class MyTimedBackgroundService : BackgroundService
{
    private const long TimerIntervalSeconds = 5; // change this to 24 * 60 * 60 to fire every 24 hours

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Requirement: "fire" timer method immediately.
        await OnTimerFiredAsync(stoppingToken);

        var countdown = TimerIntervalSeconds;

        while (!stoppingToken.IsCancellationRequested)
        {
            if (countdown-- <= 0)
            {
                try
                {
                    await OnTimerFiredAsync(stoppingToken);
                }
                catch (Exception ex)
                {
                    // TODO: log exception
                }
                finally
                {
                    countdown = TimerIntervalSeconds;
                }
            }
            await Task.Delay(1000, stoppingToken);
        }
    }

    private async Task OnTimerFiredAsync(CancellationToken stoppingToken)
    {
        // do your work here
        Debug.WriteLine($"{TimerIntervalSeconds} second tick. Simulating heavy I/O bound work");
        await Task.Delay(2000);
    }
}
A bonus side effect is that you can use a long as your interval, allowing more than 25 days between firings, as opposed to Timer, which is capped at about 25 days.
You would register either of these like so:
services.AddHostedService<MyTimedBackgroundService>();
I am trying to run heavy tasks asynchronously. The client then polls the server to find out when the job is done. This seemed to work, but I noticed that the web service that responds to the polling is blocked when I put a breakpoint in my @Asynchronous method.
This is what I did:
JobWS.java // Used to start a job
@RequestScoped
@Path("/job")
@Produces(MediaType.APPLICATION_JSON)
public class JobWS {

    @Inject
    private JobService jobService;

    @POST
    @Path("/run/create")
    public Response startJob(MyDTO dto) {
        return ResponseUtil.ok(jobService.createJob(dto));
    }
}
JobService.java // Creates the job in the DB, starts it and returns its ID
@Stateless
public class JobService {

    @Inject
    private AsyncJobService asyncJobService;

    @Inject
    private Worker worker;

    public AsyncJob createJob(MyDTO dto) {
        AsyncJob asyncJob = asyncJobService.create();
        worker.doWork(asyncJob.getId(), dto);
        return asyncJob; // With this, the client can poll the job with its ID
    }
}
Worker.java // Working hard
@Stateless
public class Worker {

    @Asynchronous
    public void doWork(UUID asyncJobId, MyDTO dto) {
        // Do work
        // ...
        // Eventually update the AsyncJob and mark it as finished
    }
}
Finally, my Polling Webservice, which is the one being blocked
@RequestScoped
@Path("/polling")
@Produces(MediaType.APPLICATION_JSON)
public class PollingWS {

    @Inject
    AsyncJobService asyncJobService;

    @GET
    @Path("/{id}")
    public Response loadAsyncJob(@PathParam("id") @NotNull UUID id) {
        return ResponseUtil.ok(asyncJobService.loadAsyncJob(id));
    }
}
If I put a breakpoint somewhere in doWork(), PollingWS no longer responds to HTTP requests. When I debug through doWork(), I occasionally get a response, but only when jumping from one breakpoint to another, never while waiting at a breakpoint.
What am I missing here? Why is my doWork() method blocking my web service, despite running asynchronously?
I found the culprit. A breakpoint suspends all threads by default. In IntelliJ, right-clicking the breakpoint opens a dialog where you can change this.
When changing the "Suspend" property to "Thread", my web service is not blocked anymore and everything works as expected. In retrospect, I feel a bit stupid for asking this. But hey... maybe it will help others :)
I'm sending several Retrofit calls via SyncAdapter onPerformSync, and I'm trying to regulate the HTTP calls by spacing them out with a try/catch sleep statement. However, this blocks the UI, which only becomes responsive again after all the calls are done.
What is a better way to regulate network calls (with a sleep timer) in the background in onPerformSync without blocking the UI?
@Override
public void onPerformSync(Account account, Bundle extras, String authority, ContentProviderClient provider, SyncResult syncResult) {
    String baseUrl = BuildConfig.API_BASE_URL;
    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(baseUrl)
            .addConverterFactory(GsonConverterFactory.create())
            .build();
    service = retrofit.create(HTTPService.class);
    Call<RetroFitModel> retroFitModelCall = service.getRetroFit(apiKey, sortOrder);
    retroFitModelCall.enqueue(new Callback<RetroFitModel>() {
        @Override
        public void onResponse(Response<RetroFitModel> response) {
            if (!response.isSuccess()) {
            } else {
                List<RetroFitResult> retrofitResultList = response.body().getResults();
                Utility.storeList(getContext(), retrofitResultList);
                for (final RetroFitResult result : retrofitResultList) {
                    RetroFitReview(result.getId(), service);
                    try {
                        // Sleep for SLEEP_TIME before running RetroFitReports & RetroFitTime
                        Thread.sleep(SLEEP_TIME);
                    } catch (InterruptedException e) {
                    }
                    RetroFitReports(result.getId(), service);
                    RetroFitTime(result.getId(), service);
                }
            }
        }

        @Override
        public void onFailure(Throwable t) {
            Log.e(LOG_TAG, "Error: " + t.getMessage());
        }
    });
}
The "onPerformSync" code is executed within the "SyncAdapterThread" thread, not within the Main UI thread. However this could change when making asynchronous calls with callbacks (which is our case here).
Here you are using an asynchronous call of the Retrofit "call.enqueue" method, and this has an impact on thread execution. The question we need to ask at this point:
Where callback methods are going to be executed?
To get the answer to this question, we have to determine which Looper is going to be used by the Handler that will post callbacks.
In case we are playing with handlers ourselves, we can define the looper, the handler and how to process messages/runnables between handlers. But this time it is different because we are using a third party framework (Retrofit). So we have to know which looper used by Retrofit?
Please note that if Retrofit didn't already define his looper, you
could have caught an exception saying that you need a looper to
process callbacks. In other words, an asynchronous call needs to be in
a looper thread in order to post callbacks back to the thread from
where it was executed.
According to the code source of Retrofit (Platform.java):
static class Android extends Platform {
    @Override CallAdapter.Factory defaultCallAdapterFactory(Executor callbackExecutor) {
        if (callbackExecutor == null) {
            callbackExecutor = new MainThreadExecutor();
        }
        return new ExecutorCallAdapterFactory(callbackExecutor);
    }

    static class MainThreadExecutor implements Executor {
        private final Handler handler = new Handler(Looper.getMainLooper());

        @Override public void execute(Runnable r) {
            handler.post(r);
        }
    }
}
You can notice "Looper.getMainLooper()", which means that Retrofit will post messages/runnables into the main thread message queue (you can do research on this for further detailed explanation). Thus the posted message/runnable will be handled by the main thread.
So that being said, the onResponse/onFailure callbacks will be executed in the main thread. And it's going to block the UI, if you are doing too much work (Thread.sleep(SLEEP_TIME);). You can check it by yourself: just make a breakpoint in "onResponse" callback and check in which thread it is running.
So how to handle this situation? (the answer to your question about Retrofit use)
Since we are already in a background thread (SyncAdapterThread), so there is no need to make asynchronous calls in your case. Just make a Retrofit synchronous call and then process the result, or log a failure. This way, you will not block the UI.
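For illustration, a rough sketch of that synchronous variant, reusing the service, model and helper names from your question (this is assumed to replace the enqueue block inside onPerformSync and has not been tested against your project):

Call<RetroFitModel> call = service.getRetroFit(apiKey, sortOrder);
try {
    Response<RetroFitModel> response = call.execute(); // blocks the SyncAdapter thread only
    if (response.isSuccess()) {
        List<RetroFitResult> retrofitResultList = response.body().getResults();
        Utility.storeList(getContext(), retrofitResultList);
        for (RetroFitResult result : retrofitResultList) {
            RetroFitReview(result.getId(), service);
            Thread.sleep(SLEEP_TIME); // throttle between calls without touching the UI thread
            RetroFitReports(result.getId(), service);
            RetroFitTime(result.getId(), service);
        }
    } else {
        Log.e(LOG_TAG, "Unsuccessful response: " + response.code());
    }
} catch (IOException | InterruptedException e) {
    Log.e(LOG_TAG, "Error: " + e.getMessage());
}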