I have a process that allocates about 20GB of RAM on a 32GB machine. After some events, I'm streaming the data from the parent process to stdin of the child process. It's mandatory to keep the 20GB of data in the parent process at the point when the child is spawned.
The app is written in Rust and I'm calling Command::new("path/to/command") to create the child process.
When I spawn the child process, the operating system traps an out-of-memory error.
strace output:
[pid 747] 16:04:41.128377 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7ff4c7f87b10) = -1 ENOMEM (Cannot allocate memory)
Why does the trap occur? The child process should not consume more than 1GB and exec() is called immediately after clone().
The Problem
When a child process is created by the Rust call, several things happen at a C/C++ level. This is a simplification, but it will help explain the dilemma.
The streams are duplicated (with dup2 or a similar call)
The parent process is forked (with the fork or clone system call)
The forked process executes the child (with a call from the exec family)
The parent and child are now concurrent processes. The Rust call you are currently using appears to be a clone call behaving much like a pure fork, so you're 20G x 2 - 32G = 8G short, without considering the space needed by the operating system and anything else that might be running. Although the cloned address space is copy-on-write, the kernel still accounts for the child's potential commitment of every private writable page it inherits, and under the default overcommit heuristic that charge can be refused. The clone call then returns -1 and errno is set to ENOMEM.
If the architectural solutions of adding physical memory, compressing the data, or streaming it through a process that does not require the entirety of it to be in memory at any one time are not options, then the classic solution is reasonably simple.
Recommendation
Design the parent process to be lean. Then spawn two worker children, one that handles your 20GB need and the other that handles your 1 GB need [1]. These children can be connected to one another via pipe, file, shared memory, socket, semaphore, signalling, and/or other communication mechanism(s), just as a parent and child can be.
Many mature software packages from Apache httpd to embedded cell tower routing daemons use this design pattern. It is reliable, maintainable, extensible, and portable.
The 32G would then likely suffice for the 20G and 1G processing needs, along with the OS and the lean parent process.
Although this solution will surely solve your problem, if the code is to be reused or extended later, there may be value in looking into process design changes involving data frames or multidimensional slices to support data streaming and reduce memory requirements.
Memory Overcommit Always
Setting overcommit_memory to 1 eliminates the clone error condition referenced in the question, because the Rust call invokes the Linux clone call, which reads that setting. But there are several caveats with this solution that point back to the above recommendation as superior, primarily that a value of 1 is dangerous, especially in production environments.
Background
Kernel discussions about OpenBSD rfork and the clone call ensued in the late 1990s and early 2000s. The features stemming from those discussions permit forking that is less total than a full process fork, which mirrors, from the other direction, the provision of more extensive independence between pthreads. Some of these discussions have produced extensions to traditional process spawning that have entered POSIX standardization.
In the early 2000s, Linus Torvalds suggested a flag structure to determine which components of the execution model are shared and which are copied when execution forks, blurring the distinction between processes and threads. From this, the clone call emerged.
Over-committing memory is not discussed much, if at all, in those threads. The design goal was MORE control over the results of a fork rather than the delegation of memory usage optimization to an operating system heuristic, which is what the default setting of overcommit_memory = 0 does.
Caveats
Memory overcommit goes beyond these extensions, adding the complexity of trade-offs among its modes [2], design trend caveats [3], practical run-time limitations [4], and performance impacts [5].
Portability and Longevity
Additionally, without standardization, code using memory overcommit may not be portable, and the question of longevity is pertinent, especially when a setting controls the behavior of a function. There is no guarantee of backward compatibility or even a warning of deprecation if the setting system changes.
Danger
The linuxdevcenter documentation [2] says, "1 always overcommits. Perhaps you now realize the danger of this mode.", and there are other indications of danger with ALWAYS overcommitting [6], [7].
The implementers of overcommit on Linux, Windows, and VMware may guarantee reliability, but it is a statistical game that, combined with the many other complexities of process control, may lead to unstable characteristics under certain conditions. Even the name overcommit tells us something about its true character as a practice.
A non-default overcommit_memory mode, about which several warnings have been issued, may work for the immediate trial of the immediate case but later lead to intermittent reliability problems.
Predictability and Its Impact on System Reliability and Response Time Consistency
The idea of a process in a UNIX-like operating system, from its Bell Labs beginnings, is that a process makes a concrete request to its container, the operating system. The result is both predictable and binary: either the request is denied or it is granted. Once granted, the process is given complete control of, and direct access to, the resources until it relinquishes them.
The swap space aspect of virtual memory is a breach of this principle, and it appears as gross deceleration of activity on workstations when RAM is heavily consumed. For instance, there are times during development when one presses a key and has to wait ten seconds to see the character on the display.
Conclusion
There are many ways to get the most out of physical memory, but doing so by hoping that allocated memory will be used only sparsely will likely introduce negative impacts. Performance hits from swapping when overcommit is overused are the well-documented example. If you are keeping 20G of data in RAM, this is particularly likely to be the case.
Allocating only what is needed, forking in intelligent ways, using threads, and freeing memory that is surely no longer needed lead to memory thrift without impairing reliability or creating spikes in swap disk usage, and they can operate without caveat up to the limits of system resources.
The position of the designer of the Command::new call may be based on this perspective. If so, how soon after the fork the exec is called is not a determining factor in how much memory is requested during the spawn.
Notes and References
[1] Spawning worker children may require some code refactoring and appear to be too much trouble on a superficial level, but the refactoring may be surprisingly straightforward and significantly beneficial.
[2] http://www.linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html?page=2
[3] https://www.etalabs.net/overcommit.html
[4] http://www.gabesvirtualworld.com/memory-overcommit-in-production-yes-yes-yes/
[5] https://labs.vmware.com/vmtj/memory-overcommitment-in-the-esx-server
[6] https://github.com/kubernetes/kubernetes/issues/14452
[7] http://linuxtoolkit.blogspot.com/2011_08_01_archive.html
The idea behind Intel's hyperthreading is (as far as I understand) that one core is used for two threads in a time-multiplexed manner.
The HW supports this by having the state-related resources doubled while time-sharing the other resources. If the running thread stalls (e.g. because it has to fetch new data from RAM), the other thread gets access to the shared resources. The result is better utilization of the shared resources.
So if one thread isn't ready, the other thread is allowed to run. In other words - a thread switch can happen when the executing thread stalls.
I've tried to find out what will happen if both threads are ready for a long time but I haven't been able to find the information.
What happens if the running thread doesn't stall?
Will the running thread continue as long as it is ready?
Will the core switch to the other thread after some time? If so - what is the criteria for the switch? Is it controlled by HW or SW?
Hyperthreading is simultaneous multithreading (SMT), so it doesn't just switch back and forth on some relatively coarse-grained scale (like stalls); in the case of Sandy Bridge and newer, the fetcher and the decoder alternate between the threads. Execution units are shared competitively, so even if neither thread is stalling they can still together achieve better utilization than if they ran alone (but that's not typical). So the problems you identified don't apply, because it doesn't work like that in the first place.
We are using ASP.NET, but sometimes our applications use a great deal of CPU or RAM. I want to restrict resources for every virtual directory/web site and get an alert when they reach a threshold.
I don't know whether I can do this in IIS, but I wonder: is this possible in other web servers like Apache?
In Java, you cannot have the Java Virtual Machine enforce CPU time limits for certain threads: the best you can do is set the thread priority for any threads that you create. If there is no other work to be done, a thread will burn as many CPU cycles as it can get unless it is blocking on something (I/O, a synchronization monitor, etc.).
Using JMX, you can get some information about threads to possibly detect some runaway-thread situation, but you can't directly control the amount of CPU Time allowed for the thread.
If you are willing to re-architect your dynamic content and other services, you could write them in such a way that they support a "unit of work" that is somehow less than a full request's worth of work, and then have your request-processing thread execute a unit of work and sleep an appropriate amount of time. You can lessen your CPU usage this way, but you will also certainly slow down response times measurably.
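A minimal sketch of that throttling idea, written in C# to match the ASP.NET context of the question (the same approach works in Java with Thread.sleep). IChunkedJob and its members are hypothetical names for "a request's worth of work, sliced up":

using System.Threading;

// Hypothetical abstraction over a job that can be done in bounded slices.
public interface IChunkedJob
{
    bool IsComplete { get; }
    void DoUnitOfWork(); // performs one bounded slice of the full job
}

public static class CpuThrottle
{
    // Runs the job one slice at a time, sleeping between slices to
    // deliberately trade response time for lower CPU usage.
    public static void Run(IChunkedJob job, int pauseMilliseconds)
    {
        while (!job.IsComplete)
        {
            job.DoUnitOfWork();
            Thread.Sleep(pauseMilliseconds);
        }
    }
}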
Is there anything wrong with a fully-utilized CPU if your users are happy? Perhaps the real solution to your problem is more or bigger hardware.
In my ASP.NET MVC application I have a number of threads that wait for a certain length of time and wake up to do some cleanup tasks over and over. I have not deployed this application to a production server yet, but on my dev machine they seem to work as expected. For these threads to work the same on IIS7, do I need to look out for anything? Will IIS7 keep my threads alive indefinitely? Are there implications to worry about?
Also, I want to queue, let's say, 50 objects that were created through various requests and process them all in one go. I'd like to maintain them inside a list and then process the list, which means that the list object has to be kept alive indefinitely. I'd like to avoid serializing my objects into the DB in order to maintain this queue. What is the correct way of achieving this?
Will IIS7 keep my threads alive indefinitely?
No. If the application pool recycles (after a long period of inactivity or when some memory threshold is hit), those threads will be stopped as the application is unloaded from memory. If those objects are so precious, I wouldn't recommend keeping them in memory; rather, serialize them to some persistent storage so that they can be processed later in case of failure.
The design you describe is fine when you don't mind losing queued commands. Otherwise it would be better to go with a different design. ASP.NET isn't suited for this type of processing, because IIS can recycle the process; when that happens you lose your in-memory queue. IIS could also decide to unload the AppDomain because no new requests are coming in. In that case your threads will also stop running, which means pending operations will still not be processed, even when you use a persisted queue.
You'd probably be better off with some sort of transactional queue, such as MSMQ or a custom table in the database (or look at the open-source NServiceBus). Adding operations to the queue can be done by your web application, and processing items can be done within a Windows service application that will not be recycled and can process the queue in a transactional way.
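To make that concrete, here is a hedged sketch of the web-application side using MSMQ through System.Messaging (the queue path and WorkItem type are hypothetical placeholders, not from the question):

using System.Messaging; // reference System.Messaging.dll

// Hypothetical work item; MSMQ's default XmlMessageFormatter serializes it,
// so it needs a public parameterless constructor and public members.
public class WorkItem
{
    public int Id;
    public string Payload;
}

public static class WorkQueue
{
    private const string Path = @".\private$\workitems"; // hypothetical queue

    // Called from the web application: hand the item to a transactional
    // queue so it survives app pool recycles.
    public static void Enqueue(WorkItem item)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, true); // true = transactional queue

        using (var queue = new MessageQueue(Path))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(item, tx);
            tx.Commit();
        }
    }
}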
Since you're talking about multiple threads: when using a Windows service you can build it in such a way that it runs multiple threads, or make it single-threaded and run several instances of it. This is a very flexible design that I used successfully in the past to distribute CPU- and disk-intensive operations over multiple machines.
I've always been under the impression that using the ThreadPool for (let's say non-critical) short-lived background tasks was considered best practice, even in ASP.NET, but then I came across this article that seems to suggest otherwise - the argument being that you should leave the ThreadPool to deal with ASP.NET related requests.
So here's how I've been doing small asynchronous tasks so far:
ThreadPool.QueueUserWorkItem(s => PostLog(logEvent))
And the article is suggesting instead to create a thread explicitly, similar to:
new Thread(() => PostLog(logEvent)){ IsBackground = true }.Start()
The first method has the advantage of being managed and bounded, but there's the potential (if the article is correct) that the background tasks are then vying for threads with ASP.NET request-handlers. The second method frees up the ThreadPool, but at the cost of being unbounded and thus potentially using up too many resources.
So my question is, is the advice in the article correct?
If your site was getting so much traffic that your ThreadPool was getting full, then is it better to go out-of-band, or would a full ThreadPool imply that you're getting to the limit of your resources anyway, in which case you shouldn't be trying to start your own threads?
Clarification: I'm just asking in the scope of small non-critical asynchronous tasks (eg, remote logging), not expensive work items that would require a separate process (in these cases I agree you'll need a more robust solution).
Other answers here seem to be leaving out the most important point:
Unless you are trying to parallelize a CPU-intensive operation in order to get it done faster on a low-load site, there is no point in using a worker thread at all.
That goes for both free threads, created by new Thread(...), and worker threads in the ThreadPool that respond to QueueUserWorkItem requests.
Yes, it's true, you can starve the ThreadPool in an ASP.NET process by queuing too many work items. It will prevent ASP.NET from processing further requests. The information in the article is accurate in that respect; the same thread pool used for QueueUserWorkItem is also used to serve requests.
But if you are actually queuing enough work items to cause this starvation, then you should be starving the thread pool! If you are running literally hundreds of CPU-intensive operations at the same time, what good would it do to have another worker thread to serve an ASP.NET request, when the machine is already overloaded? If you're running into this situation, you need to redesign completely!
Most of the time I see or hear about multi-threaded code being inappropriately used in ASP.NET, it's not for queuing CPU-intensive work. It's for queuing I/O-bound work. And if you want to do I/O work, then you should be using an I/O thread (I/O Completion Port).
Specifically, you should be using the async callbacks supported by whatever library class you're using. These methods are always very clearly labeled; they start with the words Begin and End. As in Stream.BeginRead, Socket.BeginConnect, WebRequest.BeginGetResponse, and so on.
These methods do use the ThreadPool, but they use IOCPs, which do not interfere with ASP.NET requests. They are a special kind of lightweight thread that can be "woken up" by an interrupt signal from the I/O system. And in an ASP.NET application, you normally have one I/O thread for each worker thread, so every single request can have one async operation queued up. That's literally hundreds of async operations without any significant performance degradation (assuming the I/O subsystem can keep up). It's way more than you'll ever need.
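For example, posting a log event without tying up a worker thread might look like this hedged sketch using the Begin/End (APM) pattern; the endpoint URL and the RemoteLog/PostLog names are illustrative, not from the original question:

using System;
using System.IO;
using System.Net;

public static class RemoteLog
{
    // Fire-and-forget POST using the APM (Begin/End) pattern; the waiting
    // happens on I/O completion ports, not on an ASP.NET worker thread.
    public static void PostLog(string logEvent)
    {
        var request = WebRequest.Create("http://logs.example.com/post"); // hypothetical endpoint
        request.Method = "POST";

        request.BeginGetRequestStream(ar =>
        {
            using (Stream body = request.EndGetRequestStream(ar))
            using (var writer = new StreamWriter(body))
            {
                writer.Write(logEvent);
            }

            request.BeginGetResponse(ar2 =>
            {
                try { request.EndGetResponse(ar2).Close(); }
                catch (WebException) { /* logging is non-critical; swallow */ }
            }, null);
        }, null);
    }
}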
Just keep in mind that async delegates do not work this way - they'll end up using a worker thread, just like ThreadPool.QueueUserWorkItem. It's only the built-in async methods of the .NET Framework library classes that are capable of doing this. You can do it yourself, but it's complicated and a little bit dangerous and probably beyond the scope of this discussion.
The best answer to this question, in my opinion, is don't use the ThreadPool or a background Thread instance in ASP.NET. It's not at all like spinning up a thread in a Windows Forms application, where you do it to keep the UI responsive and don't care about how efficient it is. In ASP.NET, your concern is throughput, and all that context switching on all those worker threads is absolutely going to kill your throughput whether you use the ThreadPool or not.
Please, if you find yourself writing threading code in ASP.NET - consider whether or not it could be rewritten to use pre-existing asynchronous methods, and if it can't, then please consider whether or not you really, truly need the code to run in a background thread at all. In the majority of cases, you will probably be adding complexity for no net benefit.
Per Thomas Marquardt of the ASP.NET team at Microsoft, it is safe to use the ASP.NET ThreadPool (QueueUserWorkItem).
From the article:
Q) If my ASP.NET Application uses CLR ThreadPool threads, won’t I starve ASP.NET, which also uses the CLR ThreadPool to execute requests?
..
A) To summarize, don’t worry about starving ASP.NET of threads, and if you think there’s a problem here let me know and we’ll take care of it.
Q) Should I create my own threads (new Thread)? Won’t this be better for ASP.NET, since it uses the CLR ThreadPool.
A) Please don’t. Or to put it a different way, no!!! If you’re really smart—much smarter than me—then you can create your own threads; otherwise, don’t even think about it. Here are some reasons why you should not frequently create new threads: It is very expensive, compared to QueueUserWorkItem... By the way, if you can write a better ThreadPool than the CLR’s, I encourage you to apply for a job at Microsoft, because we’re definitely looking for people like you!
Websites shouldn't go around spawning threads.
You typically move this functionality out into a Windows Service that you then communicate with (I use MSMQ to talk to them).
-- Edit
I described an implementation here: Queue-Based Background Processing in ASP.NET MVC Web Application
-- Edit
To expand why this is even better than just threads:
Using MSMQ, you can communicate to another server. You can write to a queue across machines, so if you determine, for some reason, that your background task is using up the resources of the main server too much, you can just shift it quite trivially.
It also allows you to batch-process whatever task you were trying to do (send emails/whatever).
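On the Windows-service side, here is a hedged sketch of draining the same hypothetical queue from the earlier answer and processing the accumulated items in one go (WorkItem and ProcessBatch are placeholder names):

using System;
using System.Collections.Generic;
using System.Messaging;

public class WorkQueueConsumer
{
    private const string Path = @".\private$\workitems"; // same hypothetical queue

    // Receives everything currently in the queue, then hands the whole
    // batch to one processing call (e.g. send all pending emails at once).
    public void DrainAndProcessOnce()
    {
        var batch = new List<WorkItem>();
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkItem) });
            try
            {
                while (true)
                {
                    // For brevity each message gets its own transaction; a
                    // production design would receive and process inside one.
                    using (var tx = new MessageQueueTransaction())
                    {
                        tx.Begin();
                        Message m = queue.Receive(TimeSpan.FromSeconds(1), tx);
                        batch.Add((WorkItem)m.Body);
                        tx.Commit();
                    }
                }
            }
            catch (MessageQueueException e)
            {
                // IOTimeout simply means the queue is empty for now.
                if (e.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
                    throw;
            }
        }

        if (batch.Count > 0)
            ProcessBatch(batch); // hypothetical batch handler
    }

    private void ProcessBatch(List<WorkItem> batch) { /* ... */ }
}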
I definitely think that general practice for quick, low-priority asynchronous work in ASP.NET would be to use the .NET thread pool, particularly for high-traffic scenarios as you want your resources to be bounded.
Also, the implementation of threading is hidden - if you start spawning your own threads, you have to manage them properly as well. Not saying you couldn't do it, but why reinvent that wheel?
If performance becomes an issue, and you can establish that the thread pool is the limiting factor (and not database connections, outgoing network connections, memory, page timeouts etc) then you tweak the thread pool configuration to allow more worker threads, higher queued requests, etc.
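For illustration, a hedged sketch of inspecting and raising the limits from code; in ASP.NET these values are more commonly configured via the processModel element in machine.config, and the right numbers come from measurement, not guesswork:

using System;
using System.Threading;

class PoolTuning
{
    static void Main()
    {
        int maxWorker, maxIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        Console.WriteLine("before: {0} worker / {1} I/O", maxWorker, maxIo);

        // Double the worker ceiling; illustrative only, measure first.
        ThreadPool.SetMaxThreads(maxWorker * 2, maxIo);

        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        Console.WriteLine("after:  {0} worker / {1} I/O", maxWorker, maxIo);
    }
}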
If you don't have a performance problem then choosing to spawn new threads to reduce contention with the ASP.NET request queue is classic premature optimization.
Ideally you wouldn't need to use a separate thread to do a logging operation, though: just enable the original thread to complete the operation as quickly as possible, which is where MSMQ and a separate consumer thread/process come into the picture. I agree that this is heavier and more work to implement, but you really need the durability here; the volatility of a shared, in-memory queue will quickly wear out its welcome.
You should use QueueUserWorkItem, and avoid creating new threads like you would avoid the plague. For a visual that explains why you won't starve ASP.NET, since it uses the same ThreadPool, imagine a very skilled juggler using two hands to keep a half dozen bowling pins, swords, or whatever in flight. For a visual of why creating your own threads is bad, imagine what happens in Seattle at rush hour when heavily used entrance ramps to the highway allow vehicles to enter traffic immediately instead of using a light and limiting the number of entrances to one every few seconds. Finally, for a detailed explanation, please see this link:
http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx
Thanks,
Thomas
That article is not correct. ASP.NET has its own pool of threads, managed worker threads, for serving ASP.NET requests. This pool is usually a few hundred threads and is separate from the CLR ThreadPool, which is sized at some smaller multiple of the processor count.
Using ThreadPool in ASP.NET will not interfere with ASP.NET worker threads. Using ThreadPool is fine.
It would also be acceptable to set up a single thread just for logging messages, using the producer/consumer pattern to pass log messages to it. In that case, since the thread is long-lived, you should create a single new thread to run the logging. Using a new thread for every message is definitely overkill.
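A minimal sketch of that long-lived producer/consumer logging thread, assuming .NET 4's BlockingCollection is available (the BackgroundLogger name and the sink are hypothetical):

using System;
using System.Collections.Concurrent;
using System.Threading;

public class BackgroundLogger : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Thread _worker;

    public BackgroundLogger()
    {
        _worker = new Thread(Consume) { IsBackground = true, Name = "Logger" };
        _worker.Start();
    }

    // Producers (request threads) just hand the message off; this is cheap.
    public void Log(string message)
    {
        _queue.Add(message);
    }

    private void Consume()
    {
        // GetConsumingEnumerable blocks until a message arrives or
        // CompleteAdding is called, so the single consumer thread
        // sleeps whenever there is nothing to do.
        foreach (var message in _queue.GetConsumingEnumerable())
        {
            WriteToStore(message); // hypothetical sink: file, DB, remote service...
        }
    }

    private void WriteToStore(string message)
    {
        Console.WriteLine(message); // stand-in for the real sink
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // lets the consumer drain and exit
        _worker.Join();
    }
}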
Another alternative, if you're only talking about logging, is to use a library like log4net. It handles logging in a separate thread and takes care of all the context issues that could come up in that scenario.
I'd say the article is wrong. If you're running a large .NET shop you can safely use the pool across multiple apps and multiple websites (using separate app pools), simply based on one statement in the ThreadPool documentation:
There is one thread pool per process. The thread pool has a default size of 250 worker threads per available processor, and 1000 I/O completion threads. The number of threads in the thread pool can be changed by using the SetMaxThreads method. Each thread uses the default stack size and runs at the default priority.
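Those documented defaults have changed between CLR versions, so it is worth checking what the pool in your own process actually reports, for example:

using System;
using System.Threading;

class PoolInfo
{
    static void Main()
    {
        int maxWorker, maxIo, freeWorker, freeIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);
        Console.WriteLine("worker threads: {0} available of {1}", freeWorker, maxWorker);
        Console.WriteLine("I/O threads:    {0} available of {1}", freeIo, maxIo);
    }
}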
I was asked a similar question at work last week, and I'll give you the same answer. Why are you multi-threading web applications per request? A web server is a fantastic system optimized heavily to serve many requests in a timely fashion (i.e. it is already multi-threaded). Think of what happens when you request almost any page on the web.
A request is made for some page
Html is served back
The Html tells the client to make further requests (js, css, images, etc.)
Further information is served back
You give the example of remote logging, but that should be a concern of your logger. An asynchronous process should be in place to receive messages in a timely fashion. Sam even points out that your logger (log4net) should already support this.
Sam is also correct in that using the Thread Pool on the CLR will not cause issues with the thread pool in IIS. The thing to be concerned with here, though, is that you are not spawning threads from a process; you are spawning new threads off of IIS thread-pool threads. There is a difference, and the distinction is important.
Threads vs Process
Both threads and processes are methods of parallelizing an application. However, processes are independent execution units that contain their own state information, use their own address spaces, and only interact with each other via interprocess communication mechanisms (generally managed by the operating system). Applications are typically divided into processes during the design phase, and a master process explicitly spawns sub-processes when it makes sense to logically separate significant application functionality. Processes, in other words, are an architectural construct.
By contrast, a thread is a coding construct that doesn't affect the architecture of an application. A single process might contain multiple threads; all threads within a process share the same state and same memory space, and can communicate with each other directly, because they share the same variables.
Source
You can use Parallel.For or Parallel.ForEach and define a limit on the number of threads you want to allocate, so that things run smoothly and you prevent pool starvation.
However, to run in the background in an ASP.NET web application, you will need to use the plain TPL style below.
var ts = new CancellationTokenSource();
CancellationToken ct = ts.Token;
ParallelOptions po = new ParallelOptions();
po.CancellationToken = ct;
po.MaxDegreeOfParallelism = 6; // limit the degree of parallelism here
Task.Factory.StartNew(() =>
{
    Parallel.ForEach(collectionList, po, (collectionItem) =>
    {
        // Code here, e.g. PostLog(logEvent);
    });
});
I do not agree with the referenced article (C#feeds.com). It is easy to create a new thread, but dangerous. The optimal number of active threads to run on a single core is actually surprisingly low - less than 10. It is far too easy to make the machine waste time switching threads if threads are created for minor tasks. Threads are a resource that REQUIRE management. The WorkItem abstraction is there to handle this.
There is a trade-off here between reducing the number of threads available for requests and creating so many threads that none of them can process efficiently. This is a very dynamic situation, but I think it is one that should be actively managed (in this case by the thread pool) rather than left to the processor to stay ahead of the creation of threads.
Finally the article makes some pretty sweeping statements about the dangers of using the ThreadPool but it really needs something concrete to back them up.
Whether or not IIS uses the same ThreadPool to handle incoming requests seems hard to get a definitive answer to, and also seems to have changed over versions. So it would seem like a good idea not to use ThreadPool threads excessively, so that IIS has a lot of them available. On the other hand, spawning your own thread for every little task seems like a bad idea. Presumably, you have some sort of locking in your logging, so only one thread could progress at a time, and the rest would just take turns getting scheduled and unscheduled (not to mention the overhead of spawning a new thread). Essentially, you run into the exact problems the ThreadPool was designed to avoid.
It seems that a reasonable compromise would be for your app to allocate a single logging thread that you could pass messages to. You would want to be careful that sending messages is as fast as possible so that you don't slow down your app.