Should non-blocking code push the CPU to 100%?

I've been thinking today about Node.js and its attitude towards blocking, and it got me wondering: if a block of code is purely non-blocking, say a really long calculation with all its variables already on the stack, should it push a single non-hyperthreaded core to 100% CPU (as Windows Task Manager defines it) as it tries to complete the task as quickly as possible? Assume this is a calculation that can take minutes.

Yes, it should. The algorithm should run as fast as it can. It's the operating system's job to schedule time to other processes if necessary.
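To illustrate, here is a minimal Python sketch (the question is about Node.js, but the principle is language-agnostic): for a purely CPU-bound loop, CPU time tracks wall-clock time almost one-to-one, which is exactly what "100% of one core" means.

```python
import time

def crunch(n: int) -> int:
    """A purely CPU-bound computation: no I/O, no blocking calls."""
    total = 0
    for i in range(n):
        total += i * i
    return total

start_wall = time.perf_counter()
start_cpu = time.process_time()
crunch(5_000_000)
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

# For pure computation, CPU time tracks wall time: the loop keeps
# one core busy the whole time, i.e. ~100% of a single core.
print(f"utilization of one core: {cpu / wall:.0%}")
```

If the ratio were noticeably below 1.0, the process would be spending part of its time blocked or preempted rather than computing.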

If your non-blocking, computation-intensive code doesn't use 100% of the CPU, then you are wasting cycles in the idle task. It always irritates me to see the idle task using 99% of the CPU.

As long as the CPU is "given" to other processes when they need it for their own calculations, I suppose it's OK: why not use the CPU if it's available and there is work to do?

Since RAM can be paged out to disk, all applications are potentially blocking. This happens if the algorithm uses more RAM than is available on the system; in that case, the process won't hit 100% because it spends time waiting on disk.

Related

Dhrystone results are the same

I am executing Dhrystone 2.1 on a Freescale i.MX6 quad-core processor at 1 GHz. Here is what I tried:
1. Executed Dhrystone alone.
2. Executed Dhrystone with an application running in the background.
In both cases I get the same DMIPS value. I do not understand this; in the second case the DMIPS should be lower. Please let me know why.
You should think about why you expect the Dhrystone benchmark to perform worse with another program running in the background. If you want them to fight for CPU time, you need to make sure they are both scheduled on the same core; if they are scheduled on different cores, they will each receive 100% CPU time.
Other reasons you could expect Dhrystone to run slower with an app in the background would be shared cache collisions or memory bandwidth contention. I would disqualify both of these for this discussion, though, because Dhrystone is an extremely simple benchmark that doesn't require much memory bandwidth or cache space. So your best way of slowing it down is scheduling the other app on the same core and restricting them both so they can't be scheduled elsewhere.
For more info on how to perform Dhrystone benchmarking on ARM, refer to this document:
http://infocenter.arm.com/help/topic/com.arm.doc.dai0273a/DAI0273A_dhrystone_benchmarking.pdf
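For illustration, the pinning idea can be sketched in Python. This is a Linux-only sketch (`os.sched_setaffinity` is not available on Windows or macOS); restricting both the benchmark and the background app to one core is what forces them to contend.

```python
import os

# Linux-only: pin this process (pid 0 = self) to core 0, so any
# CPU-bound work it does must share that single core with whatever
# else is pinned there.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print("now restricted to cores:", os.sched_getaffinity(0))
else:
    print("sched_setaffinity not available on this platform")
```

The same restriction can be applied without any code changes, e.g. by launching both programs with `taskset -c 0` on Linux.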

Does high CPU usage indicate a software module is designed wrong

We have a process that is computationally intensive. When it runs, it typically uses 99% of the available CPU. It is configured to take advantage of all available processors, so I believe this is OK. However, one of our customers is complaining because alarms go off on the server on which this process is running because of the high CPU utilization. I think there is nothing wrong with high CPU utilization per se. The CPU drops back to normal when the process stops running, and the process does run to completion (no infinite loops, etc.). I'm just wondering if I am on solid ground when I say that high CPU usage is not a problem per se.
Thank you,
Elliott
if I am on solid ground when I say that high CPU usage is not a problem per se
You are on solid ground.
We have a process that is computationally intensive
Then I'd expect high CPU usage.
The CPU drops back to normal when the process stops running
Sounds good so far.
Chances are that the systems your client is using are configured to notify when CPU usage goes over a certain limit, as sometimes this is indicative of a problem (and sustained high usage can cause overheating and associated problems).
If this is expected behavior, your client needs to adjust their monitoring - but you need to ensure that the behavior is as expected on their systems and that it is not likely to cause problems (ensure that high CPU usage is not sustained).
An alarm by itself is not evidence of poor design. The real problem would be if the process chokes other tasks on the system. Modern OSes usually take care of this by lowering the dynamic priority of a CPU-hungry process so that less demanding processes get higher priority. You might tell the customer to start the process with "nice", since you probably don't care whether it finishes in 10 minutes or 12. Just my 2 cents :)
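The "nice" suggestion can be sketched in a few lines of Python. This is a Unix-only illustration (`os.nice` does not exist on Windows), and the increment of 5 is an arbitrary example value.

```python
import os

# Unix-only: lower this process's scheduling priority so CPU-hungry
# work yields to more interactive tasks. os.nice(increment) raises
# the niceness by `increment` and returns the new value
# (higher niceness = lower priority).
new_niceness = os.nice(5)
print("now running at niceness", new_niceness)
```

The equivalent at launch time is `nice -n 5 ./my_process` from a shell; niceness can only be lowered back down with elevated privileges.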

How can I avoid App Pool termination during long-time ASP.NET calculations?

My hosting provider gave me a 50% CPU limit. I'm trying to use DotNetZip to back up my DNN portal files -- a collection of more than 16,000 files taking about 600 MB of disk space. I'm using a separate thread with the lowest priority for compression. If the processor is loaded enough, my thread works fine, but when the processor is more or less free I quickly hit my 50% CPU limit, and eventually the app pool terminates and needs to be recycled.
So I need an idea of how I can slow the thread down so it doesn't exceed the CPU limit.
Thanks.
If you just want to keep the thread below 50% utilization, sprinkle Thread.Sleep(x) throughout the code. You'll need to figure out how many of these you need, and what the millisecond delay should be -- and that's only if you wrote the code that needs the Sleep() calls...
That said, your situation sounds rather odd. There should be a better way to make your backup.
The simplest approach would probably be to throw in a Sleep regularly, with enough sleep time specified that the thread, even when nothing else is ready to run, uses less than half of one core.
workerThread.Suspend();
Thread.Sleep(500);
workerThread.Resume();
Watch out, though: there is potential to cause deadlocks or unintended slowdowns if the worker thread is suspended while it is in a critical section. For more information:
MSDN System.Threading.Thread.Suspend
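The sleep-to-throttle idea can be sketched in Python (the question is about .NET, but the duty-cycle arithmetic is the same): do a small chunk of work, measure how long it took, then sleep proportionally so the thread averages the target utilization. The chunk size and duty cycle here are arbitrary example values.

```python
import threading
import time

def throttled_worker(chunks: int, duty_cycle: float = 0.5) -> None:
    """Do CPU work in small chunks, sleeping after each chunk so the
    thread hovers near `duty_cycle` of one core (0.5 ~= 50%)."""
    for _ in range(chunks):
        start = time.perf_counter()
        sum(i * i for i in range(200_000))  # one small chunk of work
        busy = time.perf_counter() - start
        # Sleep long enough that busy / (busy + sleep) == duty_cycle.
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)

cpu0, wall0 = time.process_time(), time.perf_counter()
worker = threading.Thread(target=throttled_worker, args=(5,))
worker.start()
worker.join()
cpu = time.process_time() - cpu0
wall = time.perf_counter() - wall0
print(f"average CPU usage: {cpu / wall:.0%}")  # roughly the duty cycle
```

Keeping the chunks small matters: the longer each uninterrupted chunk runs, the spikier the utilization looks to an external monitor, even if the long-run average is on target.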

"System Idle Process" eats CPU on a high threading application

I have a multi-threaded web application with about 1000~2000 threads in the production environment.
I expect CPU usage on w3wp.exe, but System Idle Process eats the CPU. Why?
The Idle process isn't actually a real process; it doesn't "eat" your CPU time. The %CPU you see next to it is actually unused CPU (more or less).
The reason for the poor performance of your application is most likely your 2000 threads. Windows (or indeed any operating system) was never meant to run so many threads at a time. You're wasting most of the time just context switching between them, with each thread getting a couple of milliseconds of processing time every ~30 seconds (15 ms × 2000 = 30 s!).
Rethink your application.
The idle process is simply holding processor time until a program needs it; it's not actually eating any cycles at all. You can think of system idle time as "available CPU".
System Idle Process is not a real process, it represents unused processor time.
This means that your app doesn't utilize the processor completely - it may be memory-bound or I/O-bound; possibly the threads are waiting for each other, or for external resources? Context-switching overhead could also be a culprit - unless you have 2000 cores, the threads are not actually all running at the same time but are assigned time slices by the task scheduler, which also takes some time.
You have not provided a lot of details, so I can only speculate at this point. I would say it is likely that most of those threads are doing nothing. The ones that are doing something are probably I/O bound, meaning they spend most of their time waiting for an external resource to respond.
Now let's talk about the "1000~2000 threads". There are very few cases (maybe none) where having that many threads is a good idea, and I think your current issue is a perfect example of why. Most of those threads are (apparently, anyway) doing nothing but wasting resources. If you want to process multiple tasks in parallel, especially if they are I/O bound, then it is better to take advantage of pooled resources like the ThreadPool or the Task Parallel Library.
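As an illustration of the pooled approach, here is a Python sketch using `ThreadPoolExecutor` (standing in for the .NET ThreadPool/TPL); `fetch` is a hypothetical placeholder for real I/O-bound work.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(task_id: int) -> str:
    # Hypothetical stand-in for an I/O-bound call
    # (HTTP request, DB query, file read, ...).
    return f"result-{task_id}"

# A small, fixed pool services thousands of queued tasks; the OS
# scheduler never has to juggle thousands of mostly-idle threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, range(2000)))

print(results[:3])
```

The key design point: the number of tasks (2000) is decoupled from the number of threads (8), so concurrency scales with the workload while thread count stays bounded.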

How to go from CPU time to CPU utilization?

I'm trying to detect runaway threads in my own application and close them for good before they render the machine inaccessible.
However, I can only get CPU time for the thread; that is a limitation of the API I'm using. Is there any way to derive CPU utilization from that data?
I was thinking about comparing it to real time: if the two are close, then the thread is loading the CPU too much. What do you think of that heuristic; will it work?
CPU time divided by real (wall-clock) time over the same interval will give you CPU utilization.
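The heuristic can be sketched in Python; `time.thread_time()` plays the role of the per-thread CPU time your API exposes, and the two workloads are arbitrary examples of a runaway thread versus a well-behaved one.

```python
import time

def sample_utilization(work) -> float:
    """CPU utilization of the current thread over one interval:
    thread CPU time consumed divided by wall-clock time elapsed."""
    wall_start = time.perf_counter()
    cpu_start = time.thread_time()  # CPU time of *this* thread only
    work()
    cpu = time.thread_time() - cpu_start
    wall = time.perf_counter() - wall_start
    return cpu / wall

# A runaway (busy-spinning) thread scores near 1.0; a thread that
# mostly waits scores near 0.0.
busy = sample_utilization(lambda: sum(i * i for i in range(2_000_000)))
idle = sample_utilization(lambda: time.sleep(0.2))
print(f"busy loop: {busy:.0%}, sleeping: {idle:.0%}")
```

In practice you would sample CPU time periodically and compute the ratio over each sampling window, so a thread that was briefly busy long ago isn't flagged forever.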
