CefSharp.BrowserSubprocess is left running with high CPU if the app crashes

I'm making use of the excellent CefSharp project (version 67) to host a browser in our WPF application.
Making use of CefSharp causes child CefSharp.BrowserSubprocess processes to be started, which is by design.
These processes are stopped if I cleanly exit my application and call Cef.Shutdown() as recommended in the documentation:
// Hook up handler earlier in application
Application.Current.Exit += OnApplicationExit;
...

private void OnApplicationExit(object sender, ExitEventArgs e)
{
    if (Dispatcher.CheckAccess() == false)
    {
        Dispatcher.Invoke(() => OnApplicationExit(sender, e));
        return;
    }

    // Stops CefSharp.BrowserSubprocess processes
    Cef.Shutdown();
}
I've noticed that if the application is killed, the CefSharp.BrowserSubprocess responsible for rendering is left running and starts using a lot of CPU and does so indefinitely.
I can add some code that checks for orphaned CefSharp.BrowserSubprocess processes and kills them, but I'm wondering if there's a better option.
It would be great if the subprocess could periodically check whether its parent is still alive and exit on its own, perhaps via a setting.

As answered by @amaitland, the following setting should be set so the subprocess monitors its parent process:
CefSharpSettings.SubprocessExitIfParentProcessClosed = true;
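If you also want a fallback for subprocesses orphaned by an earlier crash, a minimal sketch is below (the CefCleanup class and KillOrphanedSubprocesses name are hypothetical). It simply kills every CefSharp.BrowserSubprocess it finds, so it should only run at application startup, before Cef.Initialize():

using System;
using System.Diagnostics;

internal static class CefCleanup
{
    // Hypothetical fallback: kill CefSharp.BrowserSubprocess instances
    // left over from a previous crash. Run at startup, before
    // Cef.Initialize(), so the current session has no subprocesses yet.
    public static void KillOrphanedSubprocesses()
    {
        foreach (var process in Process.GetProcessesByName("CefSharp.BrowserSubprocess"))
        {
            try
            {
                process.Kill();
                process.WaitForExit(5000); // give it a moment to exit
            }
            catch (Exception)
            {
                // The process may have exited already or access may be denied.
            }
        }
    }
}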

Related

IIS slow multithreading

We have a .NET application which calls the server (IIS) over OpenRia services. This web service call runs a heavy calculation in which we load, via LoadLibrary, some DLLs that we need to solve linear systems. We need to work through a list of 1000 events. Each event is a separate calculation and can run independently of the others.
On a 64-core machine we create 60 tasks, and every task takes one event, runs the calculation, takes the next event, and so on until the list is empty.
As soon as the list is empty, the calculation is finished.
We now see the strange behaviour that the calculation runs fast the first time, but each time we run the same calculation again it gets slower.
If we restart the server, the calculation runs fast again.
We have done an analysis with PerfView and have seen that on the second/third/fourth run the IIS worker process uses fewer threads than at the beginning.
On the first run the IIS worker process uses 60 threads (as we have defined), and on the second it uses fewer than 60. On every run the number of threads actually used shrinks further.
The first run takes around 3 minutes, the second around 6 minutes, and by the third we are already at around 15 minutes.
What could be the problem? I have tried using the ThreadPool directly, but I see the same effect as with Tasks.
Here is some sample code:
//This part of the code is called after the web service call
ConcurrentStack<int> events = new ConcurrentStack<int>();
events.PushRange(Enumerable.Range(0, 1000).ToArray()); // the list of 1000 entries
ParallelOptions options = new ParallelOptions { MaxDegreeOfParallelism = 60 };
Task[] tasks = new Task[options.MaxDegreeOfParallelism];
for (int i = 0; i < options.MaxDegreeOfParallelism; i++)
{
    tasks[i] = Task.Run(() =>
    {
        StartAnalysis(events);
    });
}
Task.WaitAll(tasks);

private void StartAnalysis(ConcurrentStack<int> events)
{
    while (!events.IsEmpty)
    {
        int index;
        if (events.TryPop(out index))
        {
            DoHeavyCalculation();
        }
    }
}
ASP.NET processes requests by using threads from the .NET thread pool. The thread pool maintains a pool of threads that have already incurred the thread initialization costs, so those threads are easy to reuse.
The .NET thread pool is also self-tuning: it monitors CPU and other resource utilization, and it adds new threads or trims the thread pool size as needed. Because your 60 Task.Run calls all queue long-running, CPU-bound work onto that same shared pool, its self-tuning heuristics can end up giving your calculation fewer and fewer threads on subsequent runs.
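One way to take the pool's heuristics out of the picture is to run the calculation on dedicated threads that you create and join yourself. A minimal sketch under that assumption, reusing StartAnalysis from the question (the worker count of 60 is the question's own choice):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;

ConcurrentStack<int> events = new ConcurrentStack<int>();
events.PushRange(Enumerable.Range(0, 1000).ToArray());

// Dedicated threads are not subject to thread pool injection/trimming,
// so every run gets the same degree of parallelism.
Thread[] workers = new Thread[60];
for (int i = 0; i < workers.Length; i++)
{
    workers[i] = new Thread(() => StartAnalysis(events)) { IsBackground = true };
    workers[i].Start();
}
foreach (Thread worker in workers)
{
    worker.Join(); // block until all events have been processed
}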

Why does Vert.x throw a warning even with the blocking attribute?

I have a Quarkus application where I use the event bus.
The code in question looks like this:
@ConsumeEvent(value = "execution-request", blocking = true)
@Transactional
@TransactionConfiguration(timeout = 3600)
public void consume(final Message<ExecutionRequest> msg) {
    try {
        execute(...);
    } catch (final Exception e) {
        // some logging
    }
}

private void execute(...)
        throws InterruptedException {
    // it actually runs a long running task, but for
    // this example this has the same effect
    Thread.sleep(65000);
}
Why do I still get a
WARN [io.ver.cor.imp.BlockedThreadChecker] (vertx-blocked-thread-checker) Thread Thread[vert.x-worker-thread-0,5,main] has been blocked for 63066 ms, time limit is 60000 ms: io.vertx.core.VertxException: Thread blocked
Am I doing something wrong? Is the blocking parameter on the @ConsumeEvent annotation not enough to have the event handled on a separate worker thread?
Your annotation is working as designed; the method is running on a worker thread. You can tell by both the name of the thread, "vert.x-worker-thread-0", and by the 60 second limit that elapsed before the warning was logged. Event loop threads have a much shorter limit (2 seconds by default).
The default Vert.x worker thread pool is not designed for "very" long running blocking code, as stated in their docs:
Warning:
Blocking code should block for a reasonable amount of time (i.e no more than a few seconds). Long blocking operations or polling operations (i.e a thread that spin in a loop polling events in a blocking fashion) are precluded. When the blocking operation lasts more than the 10 seconds, a message will be printed on the console by the blocked thread checker. Long blocking operations should use a dedicated thread managed by the application, which can interact with verticles using the event-bus or runOnContext
That passage says blocking for more than 10 seconds triggers the warning, but I think that's a typo in the docs; the default limit for worker threads is actually 60 seconds.
To avoid the warning, you'll need to create a dedicated WorkerExecutor (via vertx.createSharedWorkerExecutor) configured with a very high maxExecuteTime. However, it does not appear you can tell the @ConsumeEvent annotation to use it instead of the default worker pool, so you'd need to manually create an event bus consumer as well, or use a regular @ConsumeEvent annotation but call workerExecutor.executeBlocking inside it.

WinRT/UWP: is Suspending raised on power off?

I am trying to save data before the application (WinRT 8.1) closes/sleeps/minimizes (or before Windows shuts down/restarts on a Win10 tablet) using the app Suspending event.
https://learn.microsoft.com/en-us/windows/uwp/launch-resume/suspend-an-app
However, it is not working on power off/shutdown in WinRT/UWP. Is the Suspending event raised when the power button is held down?
The Suspending lifecycle event will fire in case of a normal OS shutdown - if you do Start -> Shut down.
This is unfortunately not the case with holding the power button or pressing the restart button, because both of these are improper ways of shutting down the PC. Holding the power button essentially cuts power to the PC, which means the OS cannot respond and all unsaved data are lost. This method of shutting down a PC should be used only when something really bad happens and everything freezes. That is why the UWP app has no chance to run its Suspending event handler in this case.
Is the Suspending event raised when the power button is held down?
The system shuts down forcibly when the power button is held down, and it cannot ensure that the current user session is finished, so the Suspending event handler cannot be invoked.
From Windows 10 Universal Windows Platform (UWP) app lifecycle:
Current user session is based on Windows logon. As long as the current user hasn't logged off, shut down, or restarted Windows, the current user session persists across events such as lock screen authentication, switch-user, and so on.
So before a normal shutdown the app is still in the current user session, and Suspending will be invoked on power off (shut down).
Note that you cannot test this in debug mode within Visual Studio, because when you shut down the system, Visual Studio exits debug mode first and the Suspending event will not be invoked as expected. You can verify the behaviour with the following code.
private void OnSuspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    var stringBuilder = new StringBuilder();
    Windows.Storage.ApplicationDataContainer localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
    object value = localSettings.Values["exampleSetting"];
    stringBuilder.Append(value?.ToString() + "/Next"); // value is null on the first run
    localSettings.Values["exampleSetting"] = stringBuilder.ToString();
    deferral.Complete();
}
Each time you shut down normally, another "/Next" is appended to the stored setting, confirming that the handler ran.
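For completeness, the handler above is wired up in the App constructor, as in the standard UWP project template:

public App()
{
    this.InitializeComponent();
    this.Suspending += OnSuspending; // register the handler shown above
}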

Hangfire Shutdown Not Waiting to Kill Process

We have configured Hangfire to run as part of our web app using OWIN as provided in the tutorial.
We enqueue long-running background processes via an API we provide. The job initializes an R process in the background using the .NET Process class. The R code internally spawns a number of additional processes to finish the job faster, so a number of Rscript processes show up in Task Manager when the job runs.
On manually recycling the app pool of our web app (to see how a process restart behaves), the Rscript processes are not killed. We have a custom kill strategy in our code to get rid of all the Rscript processes:
while (IsNotTimedOut())
{
    try
    {
        _token.ThrowIfCancellationRequested();
        Thread.Sleep(2000);
    }
    catch (OperationCanceledException)
    {
        Kill();
        throw;
    }
}
Within the Kill method, we block using the Process.WaitForExit() method.
When we do the manual recycle, not all of the processes are killed. Instead of blocking until the processes have exited, the current thread just dies after killing a couple of Rscript processes.
Hangfire seems to just cancel the token without waiting for the jobs listening on that token to finish killing their processes. Can someone suggest how to get this working? Please let me know if more details are required.
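One knob worth checking is the Hangfire server's ShutdownTimeout (15 seconds by default), which bounds how long Hangfire waits, after cancelling the token, for in-flight jobs to finish. A minimal sketch, assuming that timeout is what cuts the kill loop short (the connection string name is a placeholder):

using System;
using Hangfire;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

        var options = new BackgroundJobServerOptions
        {
            // Give cancelled jobs more time to finish their cleanup
            // (killing the child Rscript processes) before Hangfire
            // stops waiting on them.
            ShutdownTimeout = TimeSpan.FromMinutes(2)
        };
        app.UseHangfireServer(options);
    }
}

Note that IIS enforces its own shutdown time limit on the worker process during a recycle, so that limit may need raising as well.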

Why doesn't this item get removed from the cache when memory usage is high?

I'm looking for a way to hook into the garbage collection routine in ASP.NET's caching mechanism. As a test I put the following code in an .aspx file. I would expect the CacheItemRemovedCallback to be fired, but it isn't. Does anyone know why? Is it because I'm running under the MS Web Development Server instead of a full-fledged instance of IIS?
private static List<string> _bigData = new List<string>();

protected void Page_Load(object sender, EventArgs e)
{
    Cache.Add("a", " ".PadRight(1048576), null,
        DateTime.Now.AddYears(1),
        System.Web.Caching.Cache.NoSlidingExpiration,
        System.Web.Caching.CacheItemPriority.Low,
        new System.Web.Caching.CacheItemRemovedCallback((s, o, r) =>
        {
            // This code is never run
        }));
    while (true)
    {
        _bigData.Add(" ".PadRight(1048576));
        Thread.Sleep(50);
    }
}
I would expect the CacheItemRemovedCallback to be fired
Why do you expect that?
I'd expect you to get an OutOfMemoryException fairly quickly on a 32-bit machine (*) - after a minute or so, by which time you'll have approx 1.2GB in your list.
(*) unless the OS is started with the /3GB switch, in which case it will behave similarly to a 32-bit process on a 64-bit machine.
On a 64-bit machine, your request will time out at the default of 90 seconds, by which time it will have added 90*20 = 1800 items = approx 1.8GB to your static list. A 64-bit process will handle this, and a 32-bit process would probably be able to do so on a 64-bit machine if it is LARGEADDRESSAWARE, which is definitely the case for IIS; not sure about Cassini.
Also, IIS would probably recycle your application domain when it hits this level of virtual memory usage, depending on how the application pool is configured.
I'd change your test to repeatedly add items to the cache, rather than a static list.
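A minimal sketch of that modified test, with the same 1MB strings held by the cache instead of a static list so the cache actually has something it can evict under memory pressure (the key names and counter are illustrative):

private static int _itemsRemoved = 0;

protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; ; i++)
    {
        Cache.Add("item" + i, " ".PadRight(1048576), null,
            DateTime.Now.AddYears(1),
            System.Web.Caching.Cache.NoSlidingExpiration,
            System.Web.Caching.CacheItemPriority.Low,
            new System.Web.Caching.CacheItemRemovedCallback((key, value, reason) =>
            {
                // Evictions under memory pressure should now land here.
                Interlocked.Increment(ref _itemsRemoved);
            }));
        Thread.Sleep(50);
    }
}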
