Sandcastle and OutOfMemoryException

Sandcastle is unable to process our class library because it crashes with an OutOfMemoryException during the XSL transformation.
What can we do about that, apart from the obvious: buy more memory? The problem is that these are our developer machines with 3.3 GB of memory on a 32-bit OS, so basically we either have to upgrade to 64-bit and more memory (which won't happen for a while) or set up a virtual server with lots of memory to do this (which would impact production servers).
I seriously doubt we have the biggest class library in the world that requires help files, so what options do we have? Is there a magic "Do not crash with out of memory errors" setting that we have missed?

If you're on 32-bit Windows, your user process will only be able to address 2 GB of memory by default (3 GB if it is run as large address aware). Those 2 GB are used for everything in the process, so the .NET runtime, standard libraries, bookkeeping and so forth all take their chunk. In my experience that leaves about 1.5 GB for the .NET application itself on 32-bit.
You can get access to more memory by using the large address aware switch, but it doesn't come for free on 32-bit Windows. Moving to 64-bit Windows will let a large-address-aware 32-bit application access the entire 32-bit address space and thus give you 4 GB of addressable space.
I've recently written a blog entry with details about memory usage for .NET applications, but since I blog in Danish, you may not be able to read it. However, if you want to have a look, the link is: http://kodehoved.dk/?p=156
You may also find this MSDN blog post relevant: http://blogs.msdn.com/maoni/archive/2007/05/15/64-bit-vs-32-bit.aspx
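If you want to try the large-address-aware route, a minimal sketch (assuming the Visual Studio command-line tools are available; the executable name below is only a placeholder for whichever Sandcastle tool actually performs the transformation) is to check and set the flag with dumpbin and editbin:
dumpbin /headers SomeSandcastleTool.exe | findstr /i "large"
editbin /LARGEADDRESSAWARE SomeSandcastleTool.exe
On 32-bit Windows the flag only helps if the OS is also booted with the extra user address space mentioned above; on 64-bit Windows it lets the 32-bit process use the full 4 GB.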

Related

Out-of-memory error on Minecraft server with 16 GB RAM

Please excuse my inexperience, this is my first time on the site.
I have a Dell PowerEdge R710 with 2 Xeon L5630 CPUs and 16 GB of RAM installed. I'm trying to host a Minecraft 1.7.10 Forge server that runs perfectly fine on my desktop but refuses to run properly on the server.
This machine is running Java 8 and runs perfectly otherwise. When running the application without the mods, it loads up without a hitch. As I add more mods, it gets worse. As far as my (very, very limited) knowledge goes, the order of JVM arguments doesn't matter, and it didn't on my desktop, but in order to get the application to even run I had to change the order in my .bat file. With all mods installed, the OutOfMemoryError occurs together with a chunk-loading error when the spawn area is around 41% loaded.
This is the .bat file that I've made to start the server:
java -jar minecraft_server.jar -Xms512M -Xmx8192M nogui -XX:+HeapDumpOnOutOfMemory
This should load up perfectly fine; everything is compatible and was tested on another machine, but the exact same setup will not run on the R710, reporting Out Of Memory even with more than double the memory allocated on the desktop.
First you should use Task Manager or a similar utility to make sure that the Java process is indeed using more than the amount you allocated with your arguments. Then I would recommend reading through this lovely post written by Cpw and posted on Reddit. If it doesn't help you with your current situation, it should at least give you a bit more information on Minecraft's memory footprint.
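One quick way to verify what the JVM actually received (a sketch, assuming a JDK is installed on the server, since jps ships with the JDK rather than the JRE):
jps -lvm
The -v part of the output lists the JVM options each Java process was started with, and -m lists the arguments passed to the program. Anything placed after the jar name on the command line is handed to the program rather than the JVM, so flags in that position will show up in the -m column.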
In a normal situation where you would be running Minecraft as a local server from your computer I would suggest taking a look at how much memory your GPU is taking up. Since you are running a server this is not relevant, but might still be useful to someone who stumbles upon this post so I will leave it here:
Your graphics card is probably the biggest address hog. Today's graphics adapters often contain a gigabyte or more of RAM, and every one of those bytes needs an address. To be fair, I doubt that many of those multi-gigabyte graphics cards are in 32-bit PCs, but even a 512mb video card will take a sizeable bite out of 4GB.
I am not quite familiar with running dedicated servers but another important thing that is worth mentioning is that in case you are on a 32-bit operating system you will only be able to take advantage of 4GB of your RAM due to architecture constraints.
Every byte of RAM requires its own address, and the processor limits the length of those addresses. A 32-bit processor uses addresses that are 32 bits long. There are only 4,294,967,296, or 4GB, possible 32-bit addresses.
If all else fails you should try to seek help on one of the available Discord channels dedicated to Minecraft modding. This should be a rule in general actually, especially for general purpose problems that are difficult for others to reproduce. Here is a small list of three Discord communities dedicated to Minecraft modding that I have experience with:
Modded Minecraft - The one with most traffic so it can be a bit more difficult for your question to get noticed on busy days, but definitely the best moderated one from this list.
Modding Help - The smallest of the three. I don't have much experience with this one.
Mod Dev Cafe - This one has a decent size and a pretty good response rate, but be prepared for the usual facepalms and other unpleasantness common to younger admins and moderators. However if you are willing to look past that this is a good choice.

My server has available memory but my zope process seems to be "capping" at 3GB. Increasing cache-size is not helping anymore

The screenshot below shows the change in cache-related state of my zope instance over time (3 months so far).
We've increased the cache size several times over the period, from 3,000 all the way up to 6,000,000. With the exception of one recent blip, we have hit a ceiling of 30 million (not sure what parameter that is; see the 'by year' graph). This happened at a cache size of about 1,000,000, after which changes to the cache size seemed to have no effect on the cached objects or the memory usage of zope.
The zope/plone process moved from using about 500 MB of memory to using 3GB (we have 8GB on this server).
What I expected was that sliding the cache size upwards would allow zope to take advantage of more of the available server memory, but it is stuck at 3GB (out of a potential 8GB on the server).
Is there another setting that might be "capping" things at 3GB?
At a guess, your OS is limiting per-process memory size.
In a bash shell, check ulimit -v to see if a virtual memory limit is set. See man bash for the full list of options for that command.
See Limit memory usage for a single Linux process for more information on how to use ulimit.
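For example, a quick check from the shell (ideally the same account or init script that starts the zope process, since limits are inherited per process):
ulimit -v    (current virtual memory limit in KB; "unlimited" means no cap)
ulimit -a    (all resource limits for the current shell)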
I don't know what's going on with the memory of your server, but you are approaching this the wrong way: you simply can't have 6 million objects in memory, that's impossible: on a typical Plone 4.x installation you would need somewhere between 50 GB and 150 GB of RAM for that.
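As a rough back-of-the-envelope check, assuming an average of somewhere around 8-25 KB of RAM per cached object (which is roughly what the range above implies):
6,000,000 objects x 8-25 KB per object ≈ 50-150 GB of RAM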
First, to solve this we need more information: which Plone version are you using? How many objects do you have in your database? How many threads? What's the architecture of your server, 32-bit or 64-bit?
Second, be sure to install the latest version of the munin.zope plugin to collect reliable information about your server (hat tip to Leo Rochael).
Third, read this thread on the core developers list to understand how to calculate a more realistic number for your cache size (hat tip to Hanno Schlichting).
Fourth, move the number up slowly and take time to monitor the results; check the total number of objects in memory and avoid memory swapping at all costs. You can stop increasing the cache size if you see the number of objects stay below your target value. Remember: you're never going to have all the objects in memory; people tend to visit only a subset of your content anyway.
Fifth, if you are on Plone 4.x, test DateTime 3.0.3 (on a staging server before putting it into production); this could further decrease your memory consumption by up to 40% (somebody told me it also works in Plone 3.x, but I haven't checked that myself).
Sixth, share your results on the Plone setup list!
good night and good luck!
A 32-bit platform (I don't know if this is specific to Intel) is limited to 3 GB per process. That is because it can only address 4 GB per process, and the top 1 GB of that address space is reserved for the kernel. PAE does allow the machine to use up to 64 GB of physical RAM, but the per-process limitation remains, and that is what you are running into here. You really cannot run a high-traffic Plone site on a 32-bit platform anymore. Quite often the simplest solution is to upgrade your OS to the 64-bit version, because unless you have seriously ancient hardware, it should already be capable of running x86-64.
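A quick way to check whether the box can go 64-bit, assuming a Linux server (exact output varies by distribution):
uname -m                    (i686 or similar means a 32-bit kernel, x86_64 a 64-bit one)
grep -w lm /proc/cpuinfo    (if the "lm" flag appears, the CPU supports 64-bit long mode)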

How does BitLocker affect performance? [closed]

I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think.
So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad?
My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).
2015 Update: I've been using Visual Studio 2015 on my Surface Pro 3 when I travel, which has BitLocker enabled by default. It feels pretty much like my desktop, which is an i7-2600K @ 4.6 GHz. I think on modern hardware with a good SSD, you won't notice!
2021 Update: I have been enabling bitlocker on all my computers and it flies now. No worries. Get an NVMe SSD and don't look back.
With my T7300 2.0 GHz and Kingston V100 64 GB SSD the results are:
BitLocker off → on
Sequential read: 243 MB/s → 140 MB/s
Sequential write: 74.5 MB/s → 51 MB/s
Random read: 176 MB/s → 100 MB/s
Random write and the 4 KB speeds are almost identical.
Clearly the processor is the bottleneck in this case. In real-life usage, however, boot time is about the same, and a cold launch of Opera 11.5 with 79 tabs still took the same 4 seconds with all tabs loaded from cache.
A small build in VS2010 took 2 seconds in both situations. A larger build took 2 seconds versus 5 before. These are ballpark figures because I was timing with my watch.
I guess it all depends on the combination of processor, RAM, and SSD vs. HDD. In my case the processor has no hardware AES support, so compilation is a worst-case scenario, needing cycles for both the build and the crypto.
A newer system with Sandy Bridge would probably make better use of a BitLocker-enabled SSD in a development environment.
Personally I'm keeping BitLocker enabled despite the performance hit because I travel often. It took less than an hour to toggle BitLocker on/off, so maybe you could just turn it on when you are travelling and disable it afterwards.
Thinkpad X61, Windows 7 SP1
Some practical tests...
Dell Latitude E7440
Intel Core i7-4600U
16.0 GB
Windows 8.1 Professional
LiteOn IT LMT-256M6M MSATA 256GB
This test is using a system partition. Results for a non-system partition are a bit better.
Score decrease:
Read: 5%
Write: 16%
(Benchmark screenshots with and without BitLocker are not reproduced here.)
So even with a very strong configuration and a modern SSD you can see a small performance degradation in benchmarks. I can't say how it affects typical day-to-day work, especially in Visual Studio.
Having used a laptop with BitLocker enabled for almost 2 years now with more or less similar specs (although without the SSD, unfortunately), I can say that it really isn't that bad, or even noticeable. Although I have not used this particular machine without BitLocker enabled, it really does not feel sluggish at all when compared to my desktop machine (dual core, 16 GB, dual Raptor disks, no BitLocker). Building large projects might take a bit longer, but not enough to notice.
To back this up with more non-scientific "proof": many of my co-workers used their machines intensively without BitLocker before I joined the company (it became mandatory around the time I joined, even though I am pretty sure the two events are totally unrelated), and they have not experienced noticeable performance degradation either.
For me personally, having an "always on" solution like BitLocker beats manual steps for encryption, hands-down. Bitlocker-to-go (new on Windows 7) for USB devices on the other hand is simply too annoying to work with, since you cannot easily exchange information with non-W7 machines. Therefore I use TrueCrypt for removable media.
I am talking here from a theoretical point of view; I have not tried BitLocker.
BitLocker uses AES encryption with a 128-bit key. On a Core 2 machine clocked at 2.53 GHz, encryption speed should be about 110 MB/s using one core. The two cores could process about 220 MB/s, assuming perfect data transfer and core synchronization with no overhead, and that nothing else needs the CPU at the same time (that's one hell of an assumption, actually). The X25-M G2 is announced at 250 MB/s read bandwidth (that's what the specs say), so, in "ideal" conditions, BitLocker necessarily involves a bit of a slowdown.
However, read bandwidth is not that important. It matters when you copy huge files, which is not something you do very often. In everyday work, access time is much more important: as a developer, you create, write, read and delete many files, but they are all small (most of them are much smaller than one megabyte). This is what makes SSDs "snappy". Encryption does not impact access time. So my guess is that any performance degradation will be negligible(*).
(*) Here I assume that Microsoft's developers did their job properly.
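For the arithmetic behind that 110 MB/s figure (a rough estimate, assuming software AES-128 costs on the order of 23 cycles per byte on a CPU of that generation, which has no AES-NI):
2.53 × 10^9 cycles/s ÷ ~23 cycles/byte ≈ 110 MB/s per core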
The difference is substantial for many applications. If you are currently constrained by storage throughput, particularly when reading data, BitLocker will slow you down.
It would be useful to compare with other software based whole disk or whole partition encryption like TrueCrypt (which has the advantage if you dual boot with Linux since it works for both Windows and Linux).
A much better option is to use hardware encryption, which is available in many SSDs as well as in Hitachi 7200 RPM HDDs. The performance difference between encrypted and unencrypted is undetectable, and the encryption is invisible to operating systems. If you have a decent laptop, you can use the built-in security functions to generate and store the key, which your password unlocks from the encrypted key storage of the laptop.
I used to use the PGP disk encryption product on a laptop (and ran NTFS compressed on top of that!). It didn't seem to have much effect if the amount of disk to be read was small; and most software sources aren't huge by disk standards.
You have lots of RAM and pretty fast processors. I spent most of my time thinking, typing or debugging. I wouldn't worry very much about it.
My current work machine came with BitLocker and, being an upgrade from the prior model, it only seemed faster to me. What I have found, however, is that BitLocker is more bulletproof than TrueCrypt when it comes to accurately laying down the data. I do a lot of work in SAS, which constantly writes backup copies to disk as it moves along and shoots a variety of output types to disk at the end. SAS works fine writing output from multithreaded processes back to BitLocker and doesn't seem to know it's there.
That has not been the case for me with TrueCrypt. I'm not sure what happens or how, but I found that processes got out of sync when working with source/output data in a TrueCrypt container, which is what I installed on my second work computer since it had no BitLocker. The constant backups were going to an SSD while the TrueCrypt results were on a regular HDD; maybe that speed difference helped trip it up. Whatever the cause, I had to quit using TrueCrypt on that second computer because it made my SAS results out of sync with respect to processing order, and it screwed up some of my processes and data. Scary stuff in my world.
I work with people who have successfully used Truecrypt on the exact same computer, but they weren't using a disk intensive app. like SAS.
Bitlocker to Go, the encryption which bitlocker applies to thumb-drives, does slow things down quite a bit when it comes to read/write times. It's not too hard to use as long as you remember your password on the thumbdrive, and are willing to wait for it to format/initialize the drive, but in my experience it made access to the flash drive about 4 times as slow. Don't know why it would slow down a thumb drive and not a disk but that's how it was for me and my coworker.
Based on my success with bitlocker at work, I bought Windows Pro for my home computer to get bitlocker and plan to encrypt some directories with it for things like financials.

ASP.NET: Finding the cause of OutOfMemoryExceptions

I am trying to track down the cause of an OutOfMemoryException on a website. This site has ~12,000 .aspx pages, and the last time it crashed I captured a memory dump using adplus.
After some investigation I found a lot of heap fragmentation: there are around 100 MB of free blocks that can't be allocated. Digging deeper, one of the Large Object Heaps is fragmented, and the cause seems to be string interning, as described [here][1].
Could this be caused by the number of pages in the site? Since they are all compiled, they sit in memory, and looking at the dump they are interned and PINNED, which I think means they stick around for a while.
I would find this odd as there are many sites with more pages, but dynamic compilation could account for the growth in memory.
What other methods are there for finding the cause of the memory leak? I have tried to capture a dump using adplus in hang mode, but this fails and the IIS worker process gets recycled.
[1]: Large Object Heap Fragmentation
Yep, fragmentation is one of the reasons, because it makes it harder for the runtime to 'find' memory.
Some well-known clues:
On a 32-bit OS, your worker process can't exceed 2 GB of address space even if your system has more. An alternative is the /3GB switch in boot.ini (see the example after these points for the modern equivalent).
The likelihood of experiencing an OutOfMemoryException begins to increase dramatically when “Process\Virtual Bytes” is within 600 MB of the virtual address space limit (generally 2 GB), and secondly, tests have shown that “Process\Virtual Bytes” is often larger than “Process\Private Bytes” by no more than 600 MB
The CLR allocates and manages memory in blocks (64 MB, if I remember correctly). That does not mean it uses all of that memory; it means a block will not be freed as long as a single byte in it is still in use, and that is how the memory becomes fragmented.
When the CLR can't allocate memory (a new block is needed) and it can't garbage collect or page out some of the memory, an OutOfMemoryException happens in the best case, or the server gets overloaded in the worst case.
About dynamic compilation: that's true, but the most important thing is the <compilation debug="false" /> / <deployment retail="true" /> switches. They turn on batch compilation, which means one DLL per directory (this is another point) rather than one DLL per 'page', which causes virtual address space fragmentation.
About one DLL per directory: even with batch compilation, the more directories/sub-directories you have, the more DLLs you will have, and the more fragmented your virtual address space will be.
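As a side note on the /3GB point above: on Windows Vista / Server 2008 and later there is no boot.ini, and the rough equivalent (the value is in MB and takes effect after a reboot) is:
bcdedit /set increaseuserva 3072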
Personally, I use ANTS Memory Profiler to find memory issues.
There is a really good blog here by Tess Ferrandez that talks about memory issues and other types of debugging issues while developing ASP.NET applications.
One post in particular walks you through a tutorial to see if you have a leak. It uses the nasty WinDbg, but she explains all the commands, so you can get a clear picture of your memory usage, and it shows you exactly which objects are filling all the space.
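For reference, the kind of WinDbg/SOS sequence those tutorials walk through looks roughly like this (assuming a dump from .NET 2.0/3.5, hence SOS is loaded from mscorwks; on .NET 4 it would be .loadby sos clr):
.loadby sos mscorwks    (load the SOS debugging extension matching the dump's runtime)
!eeheap -gc             (show the managed heap segments, including the Large Object Heap)
!dumpheap -stat         (summarize objects on the heap by type, count and total size)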
I have used her site many times to diagnose memory leaks and other performance related problems.
I have also used tools like DebugDiag to help capture memory related issues and used its built in diagnostic tools to help find the problems.

What are the pros and cons of running IIS as 32bit vs 64bit on a 64bit OS?

Possibly better suited for "Rack Overflow", but from a developer's point of view, what are the advantages and disadvantages of running IIS (serving both legacy classic ASP and .NET) as a 32bit process instead of a 64bit process on a 64bit windows host?
The main advantage of 32/64 (IIS/OS) over 32/32 seems to be the ability to go up to 4 GB of memory per IIS process.
The advantages I expect of 32/64 over 64/64 appear to be that it's easier to access legacy 32-bit in-process DLLs (of which we still have one from a partner vendor we can't move away from immediately) and perhaps a smaller memory footprint for the same code given smaller memory pointers.
Are there any performance benefits of 64/64 over 32/64 or anything else that would warrant a full switch now? Have I made any false assumptions here?
The only perf advantage to running IIS as 64-bit versus 32-bit is to allow access to a much larger memory address space.
If you are doing normal ASPX page processing, then it's likely you don't need to address more than 4 GB from any single process. Suppose you run in 32-bit mode with a web garden with multiple worker processes on the same machine. In that case each process can address up to 4 GB.
The big advantage can come when you perform caching. A 64-bit process can maintain a huge in-memory cache (assuming you have 32 GB or more of RAM to support it) to allow you to cache complex page content or data on the web server. This allows perf gains when the data is more expensive to generate than it is to retrieve - for example if the data is an elaborated form (let's say the result of a Monte Carlo simulation), or if the data resides off-box and the network I/O time is much more expensive than cache retrieval time.
If you do not use caching, then 64-bit IIS is not going to help you. It will require 64-bit pointers for every lookup, which will make everything a little slower.
64-bit servers are much more effective when used for databases like SQL Server, or other data management servers (let's say, an enterprise email server like Exchange), than for processing servers, such as IIS or the worker processes it manages. With a 64-bit address space, servers that need to manage data can keep much more of that data in memory, along with indexes and other caches. This saves disk IO time and elaboration time when a query comes in. Most Web apps don't need to address more than 4gb from a single process.
Maybe a useful analogy: in transport, a large SUV is like a 64-bit machine, while a regular, compact passenger car is like a 32-bit server. You can carry much more stuff in a large SUV, and it has a larger towing capacity, seating for 8 people, and a GVWR of 8600 lbs. But with all that, you pay. The truck is heavier. It uses more fuel. If you are only carting around 2 people and one duffel bag, you don't need an SUV. You'll be better off with the smaller vehicle. It can be speedier and more efficient.
I don't think you've made any false assumptions. But I'd say, no, there's likely to be no performance difference between any of the scenarios you outlined. 32 on 64 on Windows does not operate at a penalty. 64 on 64 may give some slight performance boost, but that's dubious. There may be some memory savings with a 32-bit process, but this is likely negated by the thunking required to run the process in the first place.
The only benefit is the DLL issue you mentioned. That could be a reason for upgrading as well (if you have something specifically 64-bit that you need to use).
I've had an experience where we moved from a 32-bit Windows Server 2003 machine to a 64-bit one, both running IIS 6, and the performance of the ASP.NET 3.5 website became unacceptable.
The 64bit server would run a clear 2 seconds behind the 32bit one consistently.
After switching IIS 6 to run as a 32bit worker process, the performance was equal and comparable once again.
I haven't verified it, but I think it might only apply to IIS6 win2k3, as testing I've done with IIS7 x64 (Vista) and a 64bit IIS worker process seems to perform just fine.
The process to swap to the 32bit process was quite simple. Here is the KB article with the supporting details:
http://support.microsoft.com/kb/894435/en-us
ASP.NET 2.0, 32-bit version
To run the 32-bit version of ASP.NET 2.0, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command to enable the 32-bit mode:
cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
Type the following command to install the version of ASP.NET 2.0 (32-bit) and to install the script maps at the IIS root and under:
%SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
Make sure that the status of ASP.NET version 2.0.50727 (32-bit) is set to Allowed in the Web service extension list in Internet Information Services Manager.
See the KB article for setting back to 64-bit.
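Those steps are for IIS 6. On IIS 7 and later the same switch is an application pool property; a rough equivalent (using the default pool name as a placeholder) is:
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true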
For memory availability, refer to this msdn blog.
Memory availability: for my application, we got what we needed by switching from a 32-bit process on a 32-bit OS to a 32-bit process on a 64-bit OS, without the trouble of replacing 3rd-party libraries, so we stopped there. The benefits are: 1) 2-3x the effective memory available to each IIS worker process, and 2) on a 32-bit OS where the web site uses a lot of memory, other system processes and web sites compete for limited total memory. For your application, look at how much memory your worker processes use (one quick check is shown below). If each worker process isn't using a lot of memory (well over 1 GB), 64-bit worker processes won't help much.
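A minimal way to check that, assuming the default IIS worker process name:
tasklist /FI "IMAGENAME eq w3wp.exe"    (lists each worker process along with its current memory usage)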
For performance, I think you have to test your own applications in both configurations. Dave's post above indicates that you might see performance degradation with 64-bit. As cheeso notes, some applications may see benefits from caching (2 GB+ of cache is a lot, though). Except for limited and simple applications, I don't think we can make performance generalizations. We might be able to point to specific technologies that perform better or worse.
Besides the obvious memory differences, 32-bit processes on a 64-bit OS have to run in something called "Windows on Windows" (WOW64) mode. It's basically a thunking/emulation layer. There is a performance penalty if you pay close enough attention.
This is actual advice from Microsoft: "We recommend that you configure IIS to use a 32-bit worker processes on 64-bit Windows. Not only its compatibility better than the native 64-bit, performance and memory consumption are also better."
Please refer to this link posted in one of the comments above and published 05/14/2020:
https://learn.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/32-bit-mode-worker-processes
I cannot claim to understand exactly why, but the advice is very clear: with 64-bit workers the virtual address space is bigger, so a 32-bit worker is generally more efficient.
