We handle JavaScript-to-Java callbacks, and we noticed that sometimes they start arriving on a new thread (after some amount of time, or after the computer sleeps).
Is there any way to make it consistently use the same thread for JavaScript-to-Java callbacks?
JxBrowser 6.9
This is designed behavior in JxBrowser. The idea of using different threads when processing JavaScript-Java callbacks is to support reentrancy (a synchronous JavaScript call into Java that in turn makes a synchronous Java call back into JavaScript) and to avoid deadlocks. There is no way to force JxBrowser to use a single thread in this case.
Related
This is related to the previous question WebAssembly in async code.
Basically, that question is about WebAssembly blocking the main thread, and the answer is to move the WebAssembly code to a web worker. That works.
The problem now is that the WebAssembly code blocks onmessage() on the worker.
My long running WebAssembly code has functions like play(), pause(), stop(), etc. The play() checks a pause flag and a stop flag periodically to determine if the play() should return. The pause() and the stop() are used to set those flags.
The JavaScript main thread calls postMessage() to send a message to the worker, which in turn calls play().
Since onmessage() is blocked, the worker will have no chance to receive further messages to do pause() or stop() until play() has completed. That defeats the very purpose of pause/stop.
It seems this simple play/pause/stop use case cannot be supported by WebAssembly.
Any comments or suggestions?
By the way, that use case is well supported by the defunct Google PNaCl.
Thanks.
In short: web workers do not drop messages, even if the web worker thread is blocked.
All browser events, including web worker postMessage()/onmessage() events, are queued. This is fundamental to JavaScript's concurrency model (onmessage() dispatch happens in JS even if you use WebAssembly). Have a look at "Concurrency model and Event Loop" on MDN for further detail.
So what is going to happen in your case is this: while onmessage() is blocked, the messages sent from the main thread via postMessage() are queued automatically. When a single onmessage() job finishes in the worker thread, the worker checks its event queue and picks up any message that arrived in the meantime. So you don't need to worry about losing messages, even if an onmessage() job takes ten seconds and hundreds of events pile up in the queue.
This is how asynchronous execution is done everywhere in the browser.
Considering you are targeting recent browsers (WebAssembly), you can most likely rely on SharedArrayBuffer and Atomics. Have a look at the answers to "Is it possible to pause/resume a web worker externally?"; in your case the Atomics.wait part will need to be handled inside the WebAssembly code.
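To illustrate the idea, here is a minimal sketch in plain JavaScript (the names, message shape, and doChunkOfWork are illustrative; in your case play() would poll the shared flag from inside WebAssembly rather than from JS). The main thread and the worker share one Int32Array backed by a SharedArrayBuffer, so pause/stop work even while onmessage() is blocked. Note that SharedArrayBuffer requires the page to be cross-origin isolated in current browsers.

// --- main.js ---
const shared = new SharedArrayBuffer(4);      // one 32-bit control flag
const flags = new Int32Array(shared);         // 0 = run, 1 = pause, 2 = stop
const worker = new Worker('worker.js');
worker.postMessage({ cmd: 'play', shared });  // hand the shared buffer over

function pause()  { Atomics.store(flags, 0, 1); Atomics.notify(flags, 0); }
function resume() { Atomics.store(flags, 0, 0); Atomics.notify(flags, 0); }
function stop()   { Atomics.store(flags, 0, 2); Atomics.notify(flags, 0); }

// --- worker.js ---
onmessage = (e) => {
    const flags = new Int32Array(e.data.shared);
    while (true) {
        const state = Atomics.load(flags, 0);
        if (state === 2) break;               // stop requested
        if (state === 1) {                    // paused: block until notified
            Atomics.wait(flags, 0, 1);
            continue;
        }
        doChunkOfWork();                      // e.g. one slice of the play() loop
    }
};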
I'm writing web services in C++/CLI (not my choice) using Microsoft's Web API. A lot of functions in Web API are async, but because I'm using C++/CLI, I don't get the async/await support of C# or VB. So the fallback position is to use ContinueWith() to schedule a continuation delegate for reading the async task's result safely.
However, because C++/CLI also doesn't support inline anonymous delegates or managed lambdas, every delegate continuation must be written as a separate function somewhere. That quickly turns into spaghetti with the number of async functions in Web API.
So, to avoid the deadlock issues of Task<T>::Result, I've been trying this:
[HttpGet, Route( "get/some/dto" )]
Task< SomeDTO ^ > ^ MyActionMethod()
{
    return Task::Run( gcnew Func< SomeDTO ^ >( this, &MyController::MyActionMethod2 ) );
}

SomeDTO ^ MyActionMethod2()
{
    // execute code and use any task->Result calls I need without deadlocking
}
Okay, so I know this isn't great, but how bad is it? I don't yet understand enough of the guts of Web API or ASP.NET to comprehend the performance or scaling ramifications this will have.
Also, what other consequences may this have that aren't necessarily related to performance? For example, exceptions get wrapped in an extra AggregateException, which represents additional complexity and work for handling exceptions.
Your memory usage will increase with your application's parallelism. For every concurrent call to MyActionMethod you will need a separate thread with its own stack. That will cost you about 1 MB of RAM for each concurrent call. If MyActionMethod runs long enough so that 10000 instances run at once, you're looking at 10 GB of RAM. There is also CPU overhead in setting up each thread.
If concurrency is low, dropping async support won't be a problem. In that case, don't bother with Task::Run. Just change MyActionMethod to return SomeDTO^ (no Task wrapper).
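For illustration, a minimal sketch of that synchronous variant, reusing the question's hypothetical names (one caveat: blocking task->Result calls made directly on the request thread can still deadlock if the code you wait on tries to resume on the captured ASP.NET request context):

[HttpGet, Route( "get/some/dto" )]
SomeDTO ^ MyActionMethod()
{
    // Runs synchronously on the request thread; no Task::Run hop, no extra thread.
    return MyActionMethod2();
}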
Another potential concern is that you lose the easy use of cancellation tokens. However, for Web API it's usually fine to just let an exception propagate back to Web API, which ends up cancelling the synchronous call anyway.
Finally, if you were planning on performing any operation within your action method in parallel, you'll still need to use ContinueWith to accomplish that. Going non-async by default means you'll always perform one operation at a time. Fortunately, it's often just fine to do so.
Okay, so I know this isn't great, but how bad is it?
It's difficult to answer this without load-testing your specific scenario. But you can walk through the known semantics (taken largely from my blog).
First, when a request comes in, ASP.NET executes your handler on a thread pool thread within that request context. Your request handler calls Task.Run, which takes another thread from the thread pool and executes the actual request logic on it. The handler then returns the task returned from Task.Run; this releases the original request thread back to the thread pool.
Then, the Task.Run delegate will block on any asynchronous parts. So, this pattern has the scaling disadvantages of a regular synchronous handler, plus an extra thread context switch. Also, it uses a thread from the ASP.NET thread pool, which is not necessarily a bad thing, but in some scenarios it may throw off the ASP.NET thread pool heuristics.
Also, what other consequences may this have that aren't necessarily related to performance? For example, exceptions get wrapped in an extra AggregateException, which represents additional complexity and work for handling exceptions.
Yes, the exceptions from any .Result or Wait() calls will be wrapped in AggregateException. You may be able to avoid this by calling .GetAwaiter().GetResult() instead.
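For instance, a sketch in C++/CLI, assuming task holds a Task< SomeDTO ^ >^:

// task->Result wraps any failure in an AggregateException;
// GetAwaiter().GetResult() rethrows the original exception unwrapped.
SomeDTO ^ dto = task->GetAwaiter().GetResult();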
Another important consideration is that the code executing within the Task.Run is executing without a request context. So, ambient data like HttpContext.Current, current culture, thread principal, etc. are not going to be set correctly. You'll have to capture any important data before calling Task.Run and pass it down manually.
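Since C++/CLI has no lambdas to capture that data, one possible sketch is to carry it on a small helper class (WorkItem, UserName, and DoWork below are hypothetical):

// Carries captured request data into the Task::Run delegate.
ref class WorkItem
{
public:
    String ^ UserName;    // captured on the request thread, before the hop

    SomeDTO ^ Run()
    {
        // HttpContext::Current is null here; use the captured fields instead.
        return DoWork( UserName );
    }
};

Task< SomeDTO ^ > ^ MyActionMethod()
{
    WorkItem ^ item = gcnew WorkItem();
    item->UserName = HttpContext::Current->User->Identity->Name;
    return Task::Run( gcnew Func< SomeDTO ^ >( item, &WorkItem::Run ) );
}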
I've joined a project that uses JAX-RS (originally there was quite a bit of Spring-based controller code in there too, but all URL handlers use JAX-RS now). Now we want to be able to fill a queue of tasks that should be run with a small delay between each of them. The delay can be specified in ms. I've avoided Thread.sleep, as I've heard you should not manage threads manually in Java EE. Before I came in, a busy-wait loop had already been implemented.
I would like to switch this to an asynchronous background task. I could of course let the client poll the server with the given delay, and just have an AsyncResponse that can be resumed. But can the same AsyncResponse be resumed/suspended multiple times? The resource does have state, so it would be possible to drop the asynchrony completely and just do client polling to handle all of it.
A lot of example code showing off asynchronous tasks uses Thread.sleep. How bad is it to do this in a background task on an ExecutorService or something similar?
The point of the delay is to simulate human interaction: posting a long list of JMS messages to a queue while ensuring that two listeners don't pick up and handle messages that depend on one another.
Is it easier/better to handle this on the client side rather than the server side? Writing some JavaScript that handles all the polling would be quite simple, so if this seems like a bad idea for handling on the server side, it's not that big a deal.
The tool is only going to be used by a single user, as it's a developer testing tool. Therefore we went for solving this on the client side, pushing the messages onto the queue through AJAX calls. This works fine for our purposes, but if anyone has a solution that might help someone else, feel free to drop a new answer.
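In that spirit, for anyone who wants the server-side variant, a minimal sketch using Java EE 7's container-managed scheduler (DelayedQueuePoster and sendToJmsQueue are illustrative names) could space the sends out by the configured delay without Thread.sleep or a busy-wait:

import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;

@Singleton
public class DelayedQueuePoster {

    // Container-managed scheduler: no manual thread handling needed.
    @Resource
    private ManagedScheduledExecutorService scheduler;

    // Schedule each message delayMs apart instead of sleeping between sends.
    public void postAll(List<String> messages, long delayMs) {
        long offset = 0;
        for (String message : messages) {
            scheduler.schedule(() -> sendToJmsQueue(message),
                               offset, TimeUnit.MILLISECONDS);
            offset += delayMs;
        }
    }

    private void sendToJmsQueue(String message) {
        // JMS send omitted; hypothetical placeholder.
    }
}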
Kind of an open question that I run into once in a while: if you have a stateful or stateless EJB, or possibly a direct servlet process, that may, given the wrong parameters, start running long on a production system, how could you effectively add a manual 'kill switch' so that an administrator can specifically kill that thread/process?
You can't, or at least you shouldn't, interfere with application server threads directly. So a "kill switch" looks decidedly inappropriate to me in a Java EE environment.
I do, however, understand the problem you have, and would rather suggest taking an asynchronous approach where you split your job into smaller work units.
I did that using EJB Timers and was happy with the result: an initial timer is created for the first work unit. When the app server executes that timer, it registers a second one corresponding to the second work unit, and so on. Information can be passed from one work unit to the next because EJB Timers support the storage of custom information. Also, timer execution and registration are transactional, which works well with a database. You can even shut down and restart the application server with this approach. Before each work unit ran, we checked in the database whether the job had been canceled in the meantime.
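A minimal sketch of that chained-timer pattern (JobState, the cancellation lookup, and processOneUnit are hypothetical placeholders):

import java.io.Serializable;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

// Per-job state passed from one work unit to the next via the
// timer's info object (must be Serializable).
class JobState implements Serializable {
    long jobId;
    int nextUnit;
}

@Stateless
public class ChunkedJobBean {

    @Resource
    private TimerService timerService;

    public void startJob(JobState initialState) {
        // Register the first work unit; "true" makes the timer persistent,
        // so the job survives an application-server restart.
        timerService.createSingleActionTimer(0, new TimerConfig(initialState, true));
    }

    @Timeout
    public void runWorkUnit(Timer timer) {
        JobState state = (JobState) timer.getInfo();
        if (isCanceledInDatabase(state.jobId)) {
            return;                               // the manual "kill switch"
        }
        boolean moreWork = processOneUnit(state); // one bounded chunk of work
        if (moreWork) {
            state.nextUnit++;
            // Chain the next work unit, carrying the state forward.
            timerService.createSingleActionTimer(0, new TimerConfig(state, true));
        }
    }

    private boolean isCanceledInDatabase(long jobId) { /* DB lookup */ return false; }
    private boolean processOneUnit(JobState state)   { /* real work */ return false; }
}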
Recently, a book on threading for WinForms applications (Concurrent Programming on Windows by Joe Duffy) was released. This book, focused on WinForms, is 1000 pages.
What gotchas are there in ASP.NET threading? I'm sure there are plenty of gotchas to be aware of when implementing threading in ASP.NET. What should I be aware of?
Thanks
Since each HTTP request received by IIS is processed separately, on its own thread anyway, the only issues you should have are if you kick off some long-running process from within the scope of a single HTTP request. In that case, I would put such code into a separate referenced assembly, coded like a middle-tier component with no dependence or coupling to the ASP.NET model at all, and handle whatever concurrency issues arise within that assembly separately.
Jeff Richter over at Wintellect has a library called PowerThreading. It is very useful if you are developing applications on .NET; see the Power Threading Library.
Check for his presentations online at various events.
Usually you are encouraged to use the thread pool in .NET because of the many benefits of having things managed on your behalf... but NOT in ASP.NET.
Since ASP.NET is already multi-threaded, it uses the thread pool to serve requests that are mapped to the ASP.NET ISAPI filter, and since the thread pool is fixed in size, by using it you are basically taking away threads that were set aside to handle requests.
In small, low-traffic websites this is not an issue, but in larger, high-traffic websites you end up competing for and consuming threads that the ASP.NET process relies on.
If you want to use threading, it is fine to do something like....
Thread thread = new Thread(threadStarter);   // threadStarter is your ThreadStart delegate
thread.IsBackground = true;                  // see the warning below
thread.Start();
but with a warning: be sure that IsBackground is set to true, because if it isn't, the thread lives in the foreground and will likely prevent the IIS worker process from recycling or restarting.
First, are you talking about asynchronous ASP.NET? Or using the ThreadPool/spinning up your own threads?
If you aren't talking about asynchronous ASP.NET, the main question to answer is: what work would you be doing in the other threads and would the work be specific to a request/response cycle, or is it more about processing global tasks in the background?
EDIT
If you need to handle concurrent operations (a better term than multi-threaded IMO) for a given request/response cycle, then use the asynchronous features of ASP.NET. These provide an abstraction over IIS's support for concurrency, allowing the server to process other requests while the current request is waiting for work to complete.
For background processing of global tasks, I would not use ASP.NET at all. You should assume that IIS will recycle your AppPool at a random point in time. You also should not assume that IIS will run your AppPool on any sort of schedule. Any important background processing should be done outside of IIS, either as a scheduled task or a Windows Service. The approach I usually take is to have a Windows Service and a shared work-queue where the web-site can post work items. The queue can be a database table, a reliable message-based queue (MSMQ, etc), files on the file system, etc.
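As one hedged sketch of that hand-off (WorkQueue and the table schema are hypothetical): the web site inserts a row per work item, and the Windows Service polls the table for 'Pending' rows and processes them outside of IIS.

using System.Data.SqlClient;

// Shared work queue backed by a database table. The web site calls
// Enqueue during a request; a separate Windows Service polls for
// 'Pending' rows in a loop, so work survives AppPool recycles.
public class WorkQueue
{
    private readonly string connectionString;

    public WorkQueue(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Enqueue(string payload)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO WorkQueue (Payload, Status) VALUES (@p, 'Pending')", conn))
        {
            cmd.Parameters.AddWithValue("@p", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}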
The immediate thing that comes to mind is: why would you "implement threading" in ASP.NET?
You do need to be conscious at all times that ASP.NET is multi-threaded, since many requests can be processed simultaneously, each on its own thread. So, for example, use of static fields needs to take threading into account.
However, it's rare that you would want to spin up a new thread in code yourself.
As far as the usual WinForms issues with threading in the UI are concerned, these are not present in ASP.NET. There is no window-based message pump to worry about.
It is possible to create asynchronous pages in ASP.NET. These will perform all steps up to a certain point. These steps will include asynchronously fetching data, for instance. When all the asynchronous tasks have completed, the remainder of the page lifecycle will execute. In the meantime, a worker thread was not tied up waiting for database I/O to complete.
In this model, all the extra threads execute while the request, the page instance, and all the controls still exist. But be careful when starting your own threads: by the time such a thread executes, the request, page instance, and controls may already have been disposed.
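For reference, a minimal sketch of such an asynchronous page (Web Forms with Async="true" in the @Page directive; FetchDataAsync and ResultsGrid are illustrative):

protected void Page_Load(object sender, EventArgs e)
{
    // RegisterAsyncTask releases the worker thread while the I/O is in
    // flight; the rest of the page lifecycle resumes once it completes.
    RegisterAsyncTask(new PageAsyncTask(async () =>
    {
        var data = await FetchDataAsync();   // no thread blocked on the I/O
        ResultsGrid.DataSource = data;
        ResultsGrid.DataBind();
    }));
}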
Also, as usual, be certain that multiple threads will actually improve performance. Often, additional threads will make things worse.
The gotchas are pretty much the same as in any multithreaded application.
The classes involved in processing a request (Page, Controls, HttpContext.Current, ...) are specific to that request so don't need any special handling.
Similarly for any classes you instantiate as local variables or fields within these classes, and for access to Session.
But, as usual, you need to synchronize access to shared resources such as:
Static (C#) / Shared (VB.NET) references
Singletons
External resources such as the file system
... etc...
I've seen threading bugs too often in ASP.NET apps, e.g. a singleton being used by multiple concurrent requests without synchronization, resulting in user A seeing user B's data.
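As an illustrative fix for that last bug (UserCache and the per-user keying are hypothetical), shared state can be keyed per user and guarded with a lock so concurrent requests cannot interleave:

using System.Collections.Generic;

// Singleton shared by all requests: every access to the shared
// dictionary goes through the lock, and data is keyed per user so
// user A can never be handed user B's entry.
public sealed class UserCache
{
    private static readonly UserCache instance = new UserCache();
    public static UserCache Instance { get { return instance; } }

    private readonly object sync = new object();
    private readonly Dictionary<string, string> dataByUser =
        new Dictionary<string, string>();

    public void Store(string userId, string data)
    {
        lock (sync) { dataByUser[userId] = data; }
    }

    public string Fetch(string userId)
    {
        lock (sync)
        {
            string data;
            return dataByUser.TryGetValue(userId, out data) ? data : null;
        }
    }
}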