Windows Web Server 2008 R2 (x64) &
.Net Framework 4.5
It is a classic ASP.NET Web Site (not a web project; the code is in the App_Code directory and is compiled when the site is launched).
And it depends on many reference DLLs in the /Bin directory.
For the DLLs I have source code for, I compile them targeting the "x64" platform.
I also have some other DLLs without source code (mysql.data.dll, etc.), which are compiled as "Any CPU".
I modified them with EditBin.exe to ensure the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in their PE headers.
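The command was something like this (run from a Visual Studio command prompt; the DLL name is just an example):

editbin /LARGEADDRESSAWARE MySql.Data.dll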
According to this table:
http://msdn.microsoft.com/en-us/library/aa366778%28VS.85%29.aspx#memory_limits
an x64 process can't use more than 2GB of memory unless IMAGE_FILE_LARGE_ADDRESS_AWARE is set.
How can I verify whether it works?
Is there any place I can see the memory limitation of a running x64 process?
I don't know if you can actually "see the memory limitation" explicitly stated (assuming you don't trust MS's own documentation that you cited), other than getting your hands on the IIS and/or ASP.NET source code and digging into it.
That being said, you could try stress testing the site, and monitor memory consumption (via Task Manager or Process Monitor), to see if it exceeds 2GB. I would recommend tinyget, part of the IIS 6 Resource Kit, which can still be used with IIS 7.
tinyget -svr:localhost -uri:/<your site> -loop:200 -threads:20
You'll have to play with the loop and thread counts to try to push it over 2GB. I would expect to see a System.OutOfMemoryException as you approach about 1.4GB of combined physical and virtual private bytes. You may want to create a stress test function in the site itself, for testing purposes only of course, which will help you reach this limit by using the exact opposite of good practices. You can read more about what would lead to a System.OutOfMemoryException here, and then do the things they recommend against. For example, add a test method that just concatenates strings in a very large loop.
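A minimal sketch of such a test method, deliberately written as bad practice (the method name is made up; Guid lives in System):

// Test-only code: repeated string concatenation in a large loop.
// Each concatenation allocates a brand new, ever larger string, so memory climbs quickly.
public static string ExhaustMemory(int iterations)
{
    string result = string.Empty;
    for (int i = 0; i < iterations; i++)
    {
        result += Guid.NewGuid().ToString();
    }
    return result;
}

Call it from a throwaway test page with a large iteration count while tinyget is hammering the site.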
Try procexp, from Sysinternals, here. This application can monitor .NET-specific metrics.
Nevertheless, according to your link, you should be able to address at least 8GB.
Please keep in mind that forcing IMAGE_FILE_LARGE_ADDRESS_AWARE onto the DLLs is irrelevant in your case. You could have all your components compiled to target "Any CPU"; the only flag that is checked is the one on the executable file.
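If you want to double-check which binaries actually have the flag set, dumpbin from a Visual Studio command prompt can show it; a sketch (the path is just an example):

dumpbin /headers C:\inetpub\wwwroot\Bin\MySql.Data.dll | findstr /i "large"

A binary with IMAGE_FILE_LARGE_ADDRESS_AWARE set reports "Application can handle large (>2GB) addresses" among its file header characteristics.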
I am a C# winforms programmer, not used to ASP.Net.
As a winforms programmer I build regularly to detect syntax errors.
Recently I opened a Kentico website in Visual Studio and to my surprise found that there were build errors.
Does this matter?
My instinct is to go about correcting the site until it builds. This is a side track from what I set out to do.
If you are attempting to build any kind of quality into your project/software, then yes, it does matter if it builds.
Regarding Kentico and build times: if you're using a website vs. a web project, yes, the build times are typically longer, ranging anywhere from a few minutes to (in my experience) upwards of an hour. The build times depend greatly on the machine building it as well. So if your machine has a Celeron processor, 1GB of RAM and a 5400 RPM drive, it's going to take longer to build than a machine with an i7 processor, 16GB of RAM and a solid-state drive that can read/write 500+ MB/s. Also keep in mind that Kentico out of the box has over 9,000 system files in it, so as a website it will take some time to build.
One of the first things I check when a site doesn't build is whether all the referenced DLLs are in the website/project. If they aren't, that causes several errors and is usually a very simple fix. If you have any kind of errors in code which resides in the /App_Code directory, your site will NOT run at all when you publish it. If you have errors within any other directory, the site will run, BUT wherever those code files are referenced on the website, errors will be displayed. So in your instance, if you have webpart files in the /CMSWebparts/OurCompany folder and those webparts are placed on pages within the website, those pages will error out even though the rest of the site is running.
In my opinion, just fix the errors and be done with them. Then check the code into a version control system to keep track of the changes.
Does this matter?
It depends on what you are trying to achieve with your website. If you want to make it available to the public then building is definitely something you should consider as top priority. If on the other hand you want to have the source code open in Visual Studio on your local machine, just for reading purposes, then building is not necessary.
I have created a memory dump of an ASP.NET process on a server using the following command: .dump /ma mydump.dmp. I am trying to identify a memory leak.
I want to look at the dump file in more detail on my local development PC. I read somewhere that it is advisable to debug on the same machine as you create the dump file. However, I have also read that some developers do analyse the dump file on their local development PCs. What is the best approach?
I notice that when I create a dump file using the command above, the w3wp process memory increases by about 1.5 times. Why is this? I suppose this should be avoided on a live server.
Analyzing on the same machine can save you from SOS loading issues later on. Unless you are familiar with WinDbg and SOS, you will find those issues confusing and frustrating.
If you have to use another machine for analysis, make sure you read this blog post carefully, http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx, as it shows you how to copy the necessary files from the source machine (where the dump is captured) to the target machine (the one where you launch WinDbg).
For your second question: because you attach WinDbg to the process directly and use the .dump command to capture the dump, the target process is unfortunately modified. That's not easy to explain in a few words. The recommended way is to use ADPlus.exe or Debug Diag. Even procdump from Sysinternals is better. Those tools are designed for dump capture and have minimal impact on the target processes.
For memory leaks from unmanaged libraries, you should use the memory leak rule in Debug Diag. For managed memory leaks, you can simply capture hang dumps when memory usage is high.
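If you go the procdump route, a sketch of a capture command (the 1800 MB commit threshold and dump name are just examples):

procdump -ma -m 1800 w3wp mydump.dmp

-ma writes a full dump and -m triggers the capture once the process's commit charge exceeds the given number of MB; use the PID instead of the name if several w3wp.exe processes are running.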
I am no expert on WinDbg, but I once had to analyse a dump file of my ASP.NET site to find a StackOverflowException.
While I got a dump file of my live site (I had no choice, since that was what was failing), I originally tried to analyse that dump file on my local dev PC but ran into problems when trying to load the CLR data from it. The reason was that the exact version of the .NET Framework differed between my dev PC and the server: both were .NET 4, but I imagine my dev PC had some cumulative updates installed that the server did not. The SOS module simply refused to load because of this discrepancy. I actually wrote a blog post about my findings.
So to answer part of your question: it may be that you have no choice but to run WinDbg on your server; at least then you can be sure that the dump file will match your environment.
It is not necessary to debug on the actual machine unless the problem is difficult to reproduce on your development machine.
So long as you have the PDBs with private symbols and the correct version of .NET installed, the symbols should be resolved and the call stacks displayed correctly.
In terms of looking at memory leaks, you should enable the GFlags user-mode stack trace database and take memory dumps at two intervals, so you can compare memory usage before and after the action that provokes the leak (see the commands sketched below). Remember to disable GFlags afterwards!
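A sketch of the GFlags commands, assuming the worker process image is w3wp.exe:

gflags /i w3wp.exe +ust
gflags /i w3wp.exe -ust

The first enables the user-mode stack trace database, the second disables it again; the setting only applies to processes started afterwards, so recycle the application pool in between.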
You could also run DebugDiag on the server, which has automated memory pressure analysis scripts that will work with .NET leaks.
I'm supporting an ASP.NET v2.0 app installed on Windows 2003 SP3 Enterprise, on a quad-core machine with 8GB of RAM, running .NET 2.0 SP1.
before enabling the config, ran "tasklist /m mscorwks.dll"
Image Name PID Modules
w3wp.exe 7888 mscorwks.dll
added <gcServer enabled="true" /> under the <runtime> section in web.config
ran IISRESET, rebooted server too
ran "tasklist /m mscorsvr.dll"
INFO: No tasks are running which match the specified criteria.
ran "tasklist /m mscorwks.dll"
Image Name PID Modules
w3wp.exe 6251 mscorwks.dll
It seems like gcServer is not taking effect. Are there any additional settings/ configurations necessary to get it working?
Update: Sorry, I just saw that the link below, and thus perhaps all of this information, applies to IIS 6.0. I don't know whether that is applicable to your environment.
I don't believe you can configure any GC setting on a per AppDomain basis, which is essentially what would happen when you only set it in a web.config file, thus on a per application basis.
You need to set this in the aspnet.config file. The Aspnet.config file is in the same directory as the Aspnet_isapi.dll file (check this for more information).
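For illustration, the entry would look something like this inside aspnet.config (a sketch; the file's other default settings are left untouched):

<configuration>
    <runtime>
        <gcServer enabled="true" />
    </runtime>
</configuration>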
Edit: To figure out the GC in use, you can use WinDBG/SOS and the eeversion command:
0:010> !eeversion
2.0.50727.3082 retail
Workstation mode
SOS Version: 2.0.50727.3053 retail build
See this MSDN link, where Chapter 5 had the answer. Quote from Chapter 5:
Note: At the time of this writing, the .NET Framework 2.0 (code-named "Whidbey") includes both GCs inside Mscorwks.dll, and Mscorsvr.dll no longer exists.
I guess there is no way to check whether the server GC is working. EDIT: but see Christian's answer.
From code you can use GCSettings.IsServerGC.
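For example, a minimal check you could drop into a test page (the page and class names are hypothetical):

using System;
using System.Runtime;

public partial class GcCheck : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // True when the server GC is active, false when the workstation GC is in use.
        Response.Write("Server GC: " + GCSettings.IsServerGC);
    }
}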
What's the advantage of pre-compiling an ASP.NET project using aspnet_compiler.exe?
Also, is there a possibility that the compiled project will have errors after it is deployed to a remote server? The team has raised a concern that if the machine where the project is compiled and the server where it is deployed have different settings, the project may not run, or may run erroneously. Has anybody encountered this situation?
I see two principal reasons:
Compile-time error checking.
Avoiding the overhead of compilation on the first page hit.
The first of these allows more errors to be detected during development (e.g. a typo in a property name) rather than getting the yellow screen of death. If the code in question is an error path, it can be hard to ensure test coverage, so things can slip through.
This cannot guarantee that there will be no errors in production. Clearly logic errors will not be found at compile time, nor will missing error handling (to name but two huge categories of bugs).
Also it will not prevent missing-reference problems due to a missing assembly (present on the development machine but not deployed to production). Thus good practice is still to have a staging environment (this could also be used for acceptance testing) which is treated by developers and testers as if it were production: the only access is to deploy a corrected version (no direct fixing), so fixes all start in development (and source control).
Another advantage is protecting your source code if you have to deploy to an untrusted environment (e.g. shared hosting): you only deploy binaries, which makes it much harder for a third party to do any reverse engineering than if you had all your .cs files up there.
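For reference, a typical precompile-for-deployment invocation looks something like this (the virtual path and folders are placeholders):

aspnet_compiler -v /MySite -p C:\Source\MySite C:\Precompiled\MySite

-p points at the source folder, -v is the virtual path the site will run under, and the final argument is the output folder you then copy to the server.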
You won't see any cross-platform issues just from performing the compilation. Check out this answer which explains how .NET runs on different architectures.
I have an ASP.NET 3.5 website running under IIS7 on Windows 2008.
When I restart IIS (iisreset), then hit a page, the initial startup is really slow.
I see the following activity in Process Explorer:
w3wp.exe spawns, but shows 0% CPU activity for about 60 seconds
Finally, w3wp.exe goes to 50% CPU for about 5 seconds and then the page loads.
I don't see any other processes using CPU during this time either. It basically just hangs.
What's going on during all that time? How can I track down what is taking all this time?
We had a similar problem and it turned out to be Windows timing out checking for the revocation of signing certificates. Check to see if your server is trying to call out somewhere (e.g. crl.microsoft.com). Perhaps you have a proxy setting incorrect? Or a firewall in the way? We ultimately determined we had enough control over the server and did not want to 'call home', so we simply disabled the check. You can do this with .NET 2.0 SP1 and later by adding the following to the machine.config.
<runtime>
    <generatePublisherEvidence enabled="false"/>
</runtime>
I am not sure if you can just put this in your app.config/web.config.
IL is being converted into native machine code by the just-in-time compiler, and you get to wait while all that magic happens.
When compiling the source code to managed code, the compiler translates the source into Microsoft intermediate language (MSIL). This is a CPU-independent set of instructions that can efficiently be converted to native code. Microsoft intermediate language (MSIL) is a translation used as the output of a number of compilers. It is the input to a just-in-time (JIT) compiler. The Common Language Runtime includes a JIT compiler for the conversion of MSIL to native code.

Before Microsoft intermediate language (MSIL) can be executed, it must be converted by the .NET Framework just-in-time (JIT) compiler to native code. This is CPU-specific code that runs on the same computer architecture as the JIT compiler. Rather than using time and memory to convert all of the MSIL in a portable executable (PE) file to native code, it converts the MSIL as needed while executing, then caches the resulting native code so it is accessible for any subsequent calls.
source
That's the compilation of ASP.NET pages into intermediate language plus JIT compilation; it only happens the first time the page is loaded. (See http://msdn.microsoft.com/en-us/library/ms366723.aspx)
If it really bothers you then you can stop it from happening by pre-compiling your site.
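If the startup cost matters on the server itself, aspnet_compiler can also precompile in place; a sketch, assuming the application is registered under the default web site as /MySite:

aspnet_compiler -v /MySite

With no target directory this performs in-place precompilation into the Temporary ASP.NET Files folder, so the first real request no longer pays the page-compilation cost.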
EDIT: Just re-read the question: 60 seconds is very long, and you would expect to see some processor activity during that time. Check the event log for errors/messages in the System and Application logs. Also try creating a crash dump of the w3wp process during those 60 seconds; there is a chance you might recognise what it's doing by looking at some of the call stacks.
If it takes exactly 60 seconds each time, then it's likely that it's waiting for something to time out; 60 seconds is a nice round number. Make sure that it has proper connections to the domain controllers, etc.
(If there are IIS diagnostic tools that would do a better job, then I'm afraid I'm not aware of them; this question might be more suited to ServerFault. The above is a much more developer-ish approach to troubleshooting :-p)
I found that there was a network delay making an initial connection from the front end web server to the database server.
The issue was peculiar to Windows 2008 and our specific network hardware.
The resolution was to disable the following on the web servers (commands sketched below):
Chimney offload state
Receive window auto-tuning level
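For reference, those two settings correspond to netsh options; a sketch of the commands, run from an elevated prompt:

netsh int tcp set global chimney=disabled
netsh int tcp set global autotuninglevel=disabled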
Greater than 60 seconds sounds fishy. Try running a test.html page to see how long that takes. That will isolate IIS7's role.
Then temporarily rename your web.config, global.asax and application folders and try a test.aspx page (very simple page). That will isolate ASP.NET.
If both of those are fast (i.e. about 10 seconds), then it's your application. But if either is slow, then it's not the application and something is wrong with the server itself.
This has nothing to do with JIT compiling. The normal C# compiler compiles your code-behind files (.aspx.cs) into an intermediate-language assembly at startup if this assembly doesn't exist or the code files have changed. Your web site assembly is located in the "bin" folder of your web site.
In fact the JIT compiling occurs after that, but it is very fast and won't take several minutes. JIT compiling happens on every startup of a .NET application, and it won't take more than a few seconds.
You can avoid the compilation of your web site if you deploy the already-compiled website assembly (YourWebsite.dll) into the bin folder. It is also possible to deploy only the .aspx files and leave the code-behind (.aspx.cs) files out.
I've just been battling a similar issue. For me it turned out to be that I had enabled internal logging for NLog. It added about 3 minutes to the startup time!
Original config
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true"
throwExceptions="false" throwConfigExceptions="false"
internalLogLevel="Debug"
internalLogFile="C:\Temp\NLog.Internal.txt">
Fixed Config
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
autoReload="true"
throwExceptions="false" throwConfigExceptions="false">
For info, I discovered this by using Sysinternals' ProcMon.exe, filtering on the process name "w3wp.exe".