ASP.NET Website DLL: Debug vs. Release version

When uploading my ASP.NET web application's .dll file to my website's /bin/ directory, are there any disadvantages to using the debug version as opposed to recompiling a release build?
For example, while working locally on the website, the build configuration is set to Debug. When things look good, I go ahead and upload the latest .dll for the website/web app. Should I instead, at that point, switch the build configuration to Release, compile, and then upload that version of the .dll to the server?
I'm hoping this question isn't a duplicate. I read a number of other questions with similar words in the subject, but didn't find anything directly related to my question.
Thanks,
Adam

Running with debug assemblies is a little heavier on performance, as they take up more memory, but usually not extremely so. You should only deploy a release build when it's really a "release". If you still anticipate some level of unexpected behavior in the wild, I'd consider using debug assemblies, so you will get more useful information from unhandled exceptions. The real performance "gotcha" is having debug="true" in your web.config.
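For reference, the relevant fragment of web.config looks like this (a minimal sketch; in production this must be off, and on a server you can also set <deployment retail="true" /> under system.web in machine.config to force debugging off machine-wide):

    <!-- web.config: keep this off in production -->
    <system.web>
      <compilation debug="false" />
    </system.web>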

A lot of it depends on what your individual needs are. In general, people frown on putting the debug build into production for performance reasons: the emitted debug code is not optimized and carries debug symbols, which can slow down execution.
On the other hand, I've worked places where the policy was to put debug builds in production because it makes it easy to see line numbers, etc., when the code throws exceptions. I'm not saying I agree with this position, but I've seen people do it.
Scott Hanselman has a good post on doing a hybrid version of Debug and Release that could get you the best of both worlds here.
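The gist of that hybrid approach is to ship optimized code but keep the .pdb symbols so stack traces still carry file and line information. This isn't Hanselman's exact recipe, but the build settings in a .csproj look roughly like this (essentially the default Release configuration with pdb-only symbols):

    <!-- Release build: optimized code, but still produce .pdb files -->
    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <Optimize>true</Optimize>
      <DebugSymbols>true</DebugSymbols>
      <DebugType>pdbonly</DebugType>
    </PropertyGroup>

Deploy the PDBs alongside the assemblies and unhandled exceptions will still report line numbers, while the JIT still gets optimized IL.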

If you have a low volume website, you will never see the performance penalty of a Debug assembly in any measurable way. If you have high volume, look into other means of logging/instrumenting code instead.
On a high-volume website, you DO need to perform extensive stress and load testing to try very hard to break the application before it goes into production. I would do the first pass of that testing with Debug assemblies (since you probably WILL break stuff, and it will make it easier to see where). Then, repeat with the Release assemblies to make sure they behave the same way as the Debug ones.

http://weblogs.asp.net/scottgu/archive/2006/04/11/Don_1920_t-run-production-ASP.NET-Applications-with-debug_3D001D20_true_1D20_-enabled.aspx

Very few applications are going to see a significant difference in performance between release and debug builds. If you're running a small to medium sized application and you think there might be any bugs you haven't caught, use the debug build.


Do you have any suggestions on how to speed up precompiling a Kentico site? AKA: the aspnet_compiler is VERY slow

I have a Kentico 6 site that we have squeezed better performance out of by precompiling. We have a TeamCity continuous integration (CI) server that runs the build automatically on check-in to the Subversion repository.
However, the build takes 40 min, and about 33 min of that is the aspnet_compiler step! This is a pretty long time for a build that SHOULD take less than 10 min. I've gone through some of the various perf improvements, such as moving the ASP.NET temporary files to a fast SSD. During the aspnet_compiler step, the server is using very little CPU, disk, or RAM; CPU usage averages about 1%! In researching aspnet_compiler, I have found many pages complaining about its speed, but a dramatic silence from Microsoft about it.
https://aspnet.uservoice.com/forums/41199-general-asp-net/suggestions/4417181-speed-up-the-aspnet-compiler http://programminglife.wordpress.com/2009/04/16/aspnet_compiler-compilation-speed-part-1/
I've come to the (perhaps erroneous) conclusion that if I can't speed up the aspnet_compiler, then perhaps I can reduce its workload. Since Kentico has lots of controls in the project, are there any savings to be had by removing extraneous controls? For instance, if I run the aspnet_compiler for a project with a single page, it's quick.
I've also thought about maybe making the cmsdesk a separate application that only needs to be compiled after a new hotfix has been applied.
To recap, my two concerns are: #1 - can I speed up the aspnet_compiler somehow? #2 - if I can't, can I reduce its workload?
In relation to #1, maybe I can do incremental compilation, so that I only precompile files that have changed since the last build? I haven't found much info about doing this; there are a few unanswered questions on Stack Overflow about this very topic, e.g. "aspnet_compiler incremental precompile" and "Incremental Build aspnet_compiler".
FYI - for those of you unfamiliar with Kentico CMS, it's a Web Site Project with LOTS of controls - maybe hundreds of them.
Any ideas?
PS - I have a reply on the Kentico forums: http://devnet.kentico.com/questions/do-you-have-any-suggestions-on-how-to-speed-up-precompiling-a-kentico-site-aka-the-aspnet_compiler-is-really-slow
One thing our team did try, to improve the compilation time, was to remove the modules and sets of controls that were not necessary to the project (forum, e-commerce, etc.).
Also, which approach do you use for development, portal or ASPX? We've noticed that compilation proves to be faster with the portal approach than with ASPX.
Everything I've read says that precompiling with aspnet_compiler is super difficult to achieve with Kentico.
However, I managed to find this post on Kentico's forums where someone appears to have figured out a way to make your system work for them (search for EHUGGINS-PINNACLEOFINDIANA on the page and you'll find it).
I have heard that using MSDeploy or MSBuild is a much more efficient way to precompile than directly calling aspnet_compiler (although I'm pretty sure both of those either still call aspnet_compiler or do the same thing it does, only faster).
I've personally never tried using any of these methods (and I'm pretty new to ASP.NET myself), but I figured I could at least give you some leads:
http://odetocode.com/blogs/scott/archive/2006/10/18/what-can-aspnet_compiler-exe-do-for-me.aspx
http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_26.html
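If you do go the MSBuild route, a minimal sketch of driving the precompile from an MSBuild target looks something like this (the paths are illustrative; the AspNetCompiler task is just a wrapper around aspnet_compiler.exe, so it won't be faster by itself, but it makes the step easy to wire into TeamCity):

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Precompile">
      <Target Name="Precompile">
        <!-- precompile the site from its physical path into a separate output folder -->
        <AspNetCompiler
            VirtualPath="/"
            PhysicalPath="C:\Builds\KenticoSite\"
            TargetPath="C:\Builds\KenticoSite.Precompiled\"
            Force="true"
            Debug="false" />
      </Target>
    </Project>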

Advantages of a build server?

I am attempting to convince my colleagues to start using a build server and automated building for our Silverlight application. I have justified it on the grounds that we will catch integration errors more quickly, and will also always have a working dev copy of the system with the latest changes. But some still don't get it.
What are the most significant advantages of using a Build Server for your project?
There are more advantages than just finding compile errors earlier (which is significant):
Produce a full clean build for each check-in (or daily or however it's configured)
Produce consistent builds that are less likely to have just worked due to left-over artifacts from a previous build
Provide a history of which change actually broke a build
Provide a good mechanism for automating other related processes (like deploy to test computers)
Continuous integration reveals any problems in the big picture, as different teams/developers work in different parts of the code/application/system
Unit and integration tests run with each build go even deeper and expose problems that would maybe not be seen on the developer's workstation
Free coffees/candy/beer. When someone breaks the build, he/she makes it up to the other team members...
I think if you can convince your team members that there WILL be errors and integration problems that are not exposed during the development time, that should be enough.
And of course, you can tell them that the team will look ancient in the modern world if you don't run continuous builds :)
See Continuous Integration: Benefits of Continuous Integration :
On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk. My mind still floats back to that early software project I mentioned in my first paragraph. There they were at the end (they hoped) of a long project, yet with no real idea of how long it would be before they were done.
...
As a result projects with Continuous Integration tend to have dramatically less bugs, both in production and in process. However I should stress that the degree of this benefit is directly tied to how good your test suite is. You should find that it's not too difficult to build a test suite that makes a noticeable difference. Usually, however, it takes a while before a team really gets to the low level of bugs that they have the potential to reach. Getting there means constantly working on and improving your tests.
If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.
From my personal experience, setting up a build server and implementing CI process, really changes the way the project is conducted. The act of producing a build becomes an uneventful everyday thing, because you literally do it every day. This allows you to catch things earlier and be more agile.
Also note that setting up a build server is only a part of the CI process, which includes setting up tests and ultimately automating the deployment (very useful).
Another side-effect benefit that often doesn't get mentioned is that a CI tool like CruiseControl.NET becomes the central issuer of all version numbers for all branches, including internal RCs. You can then require your team to always ship a build that came out of the CI tool, even if it's a custom version of the product.
Early warning of broken or incompatible code means that all conflicts are identified asap, thereby avoiding last minute chaos on the release date.
When your boss says "I need a copy of the latest code ASAP" you can get it to them in < 5 minutes.
You can make the build available to internal testers easily, and when they report a bug they can easily tell you "it was the April 01 nightly build" so that you can work with the same version of the source code.
You'll be sure that you have an automated way to build the code that doesn't rely on libraries / environment variables / scripts / etc. that are set up in developers' environments but hard to replicate by others who want to work with the code.
We have found the automatic VCS tagging of the exact code that produced a version very helpful in going back to a specific version to replicate an issue.
Integration is a blind spot
Integration often doesn't get any respect - "we just throw the binaries into an installer thingie". If this doesn't work, it's the installer's fault.
Stable Build Environment
Prevents excuses such as "this error sometimes occurs when built on Joe's machine". Prevents accidentally using old dependent libraries when building on Mike's machine.
True dogfooding
Your in-house testers and users get a true customer experience. Your developers have a clear reference for reproducing errors.
My manager told us we needed to set one up for two major reasons. Neither was really about the final product; both were about making sure that what is checked in or worked on is correct.
First, to clean up DLL hell. When someone builds on their local machine they can be pointing at any reference folder, and lots of projects were getting built with the wrong versions of DLLs because someone hadn't updated their local folder. On the build server it will always be built from the same source; all you have to do is get latest to get the latest references.
The second major thing for us was a way to support projects with little knowledge of them. Any developer can grab the source and do a minor fix if required. They don't have to mess with hours of setup or hunting down references. We have an overseas team that works primarily on one project, but if there is a rush fix we need to do during US hours, we can get latest and be able to build without worrying about broken source or what didn't get checked in. Gated check-ins save everyone else on your team time.

Opinions on MSDeploy

You know, the next "big" and "enterprisey" thing from Microsoft.
Is it just me, or is it really hardly usable by humans? The main highlights are (IMO):
Absolutely cryptic syntax (-skip:objectName=filePath,absolutePath=App_Offline.* just for skipping App_Offline.html)
Manifest as an afterthought
Lack of thorough documentation
Not a word about extensibility (except for several blog posts out there). Moreover, all these extensions, developed with great pains, have to be registered in the GAC and the registry
Waaay too low-level (metadata/metakey; all this IIS jazz)
No integration with MSBuild
Granted, MSDeploy and MSDeployAgent are quite powerful, but do they really need to be that complex for relatively simple tasks?
I too share your frustrations over the lack of documentation and the apparent low-level nature of this tool.
However what MS has done is finally create a free tool with which you can actually script whole server deployments, including parameterising addresses, configurations etc. This is unfortunately a very complicated thing to do - given how many bits of configuration actually go into a web server - and this is probably the best way to do it all.
What we need now is a really good GUI that can help build up these packages, and scripts etc. The GUI that is embedded within IIS is good - but again, short on explanation - so hopefully soon that'll be addressed.
On the functional side, I'm using it at the moment to deploy a site from dev -> staging -> live, with parameters to change bound IP addresses etc. I was deeply frustrated that it took me a few days to get it all working - however, now that I have it, I can remove a lot of the possibility of human error on the IT Support side - who are responsible for our deployments. I now only have the configuration of my master staging server to worry about - and can be sure that all the servers in the web farm will be kept in sync whenever I deploy.
As Sayed mentions as well, there are MSBuild tasks in 2010 (the Website Deployment feature is now implemented using msdeploy) to work with this - which also brings the possibility of a true Continuous Integration environment to VS Team System - having a team build that can actually perform a full web deployment as its last step is very exciting (and scary, granted!).
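For what it's worth, one common way to feed those per-environment parameters is a SetParameters.xml file passed to msdeploy (or to the generated .deploy.cmd) via -setParamFile. A minimal sketch, with illustrative names and values:

    <?xml version="1.0" encoding="utf-8"?>
    <parameters>
      <!-- values differ per environment: dev, staging, live -->
      <setParameter name="IIS Web Application Name" value="Default Web Site/MyApp" />
      <setParameter name="BindingAddress" value="10.0.0.15" />
    </parameters>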
Actually there are MSBuild tasks for MSDeploy. They will be shipped with .NET 4/Visual Studio 2010.
Although a bit rough around the edges, I've come to like MSDeploy quite a bit. Using it to sync web servers in a farm is very useful as it is efficient (only copies changes) and takes care of actual IIS settings in addition to content files. It seems like MSDeploy is a building block for various scenarios and uses. Also, as previously mentioned, there is an MSBuild task for MSDeploy in .NET 4. I've taken advantage of this MSBuild task to make deployment of my web applications from TeamCity trivially easy. I've blogged about it here:
Web Deploy (MS Deploy) from TeamCity - http://www.geekytidbits.com/web-deploy-ms-deploy-from-teamcity/
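The TeamCity step boils down to building the web project with the VS 2010 web publishing pipeline's packaging properties turned on. Roughly (the property names are from that pipeline; the values are illustrative and can live in the .csproj or in a .wpp.targets file next to it):

    <!-- produce a Web Deploy package whenever a Release build runs -->
    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <DeployOnBuild>true</DeployOnBuild>
      <DeployTarget>Package</DeployTarget>
      <PackageLocation>$(OutDir)\MyApp.zip</PackageLocation>
    </PropertyGroup>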
I have recently started implementing a deployment pipeline, and I found the links below quite useful:
MSBuild commands I used for Continuous Integration:
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_24.html
WebDeploy sync commands, I used for deployment packages to production server:
http://sedodream.com/2012/08/20/WebDeployMSDeployHowToSyncAFolder.aspx
Also I used these references:
Video about MSBuild on dnrtv.com
The Microsoft Press book "Inside the Microsoft® Build Engine: Using MSBuild and Team Foundation Build", which you can buy as a PDF from O'Reilly
Finally, the "Continuous Delivery" book gave me good ideas about the deployment pipeline; although it does not focus on MSDeploy, it is really worth reading.
The state of the documentation is typical of a MSFT 1.0 product; unfortunately MSDN no longer has dedicated Developer Technology Engineers to fill the gaps --- instead, there is a blind faith that the web will provide it.
I am actually considering dusting off my writing skills and write a short ebook on it since there is likely a market for it....
Msdeploy definitely has a touch of the PowerShell to it: power over simplicity rather than worse is better.
There is no Windows alternative to it; however, you can hybridize some of its powers to make automated deployments. For example:
Compile your solution with Team City and msbuild
Use msdeploy to transform your site and web.configs on the build server (a sketch of such a transform follows this list)
Manually FTP a ZIP file of your site (it doesn't support FTP)
Alternatively, use its remote deploy capabilities. This requires port 8172 open, lots of security changes and as far as I'm aware no concessions for load balancing
Use msdeploy on the live site to sync changes
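For the web.config transformation step, one common approach is an XDT transform that gets applied when the package is built, before anything is pushed. A minimal sketch (names and values are illustrative):

    <!-- Web.Release.config -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="MainDb"
             connectionString="Server=prod-sql;Database=Site;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
      <system.web>
        <compilation xdt:Transform="RemoveAttributes(debug)" />
      </system.web>
    </configuration>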
As a tool it's clearly aimed at service providers, as it's an enormous Swiss army knife. You can do all kinds of things to IIS with it, which for the most part are overkill for small businesses. I've no experience of large-scale IIS setups, so maybe that's where it shines.

What is the real benefit of release mode for ASP.Net

I know there are several questions asking the same question "Why should I use release mode". The problem I have with the answers is that they simply state, quite strongly, that you should always use release mode when a website is in production.
Why?
I understand that the code is optimised in the assemblies, but to what level? Is it going to optimise well written code? What kind of optimisations does it perform?
Are there any analyses regarding this? Is there anyway I can test the differences between debug and release?
I would really like someone who understands the why of this to at least provide a reference to some definitive reading material, as I have yet to find anything hard enough to satisfy my curiosity on this issue.
Read this first: http://blogs.msdn.com/tess/archive/2006/04/13/575364.aspx - I just found it as part of answering this question, and it's a great article.
See this question: At what level C# compiler or JIT optimize the application code? for some info on general compiler optimizations.
Also, keep in mind that for an ASP.NET web application, changing to release mode will compile the assemblies in release mode, but for the page compilations you may also need to edit the debug attribute of the compilation element in your web.config.
<compilation defaultLanguage="c#" debug="true">
Web applications do strange things when debug=true is set, for example they do not honor request timeouts because it would interfere with debugging.
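For production, the same element should have the attribute switched off, e.g.:

    <compilation defaultLanguage="c#" debug="false">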
Here is a great article from the Gu on the subject: Don’t run production ASP.NET Applications with debug=”true” enabled

Does it help to use NGEN?

Is it better to use NGEN on an ASP.NET application when we know it is not going to change much? Or is the JIT good enough?
The only reason I asked was because this article by Jeffrey Richter in 2002 says:
And, of course, Microsoft is working quite hard at improving the CLR and its JIT compiler so that it runs faster, produces more optimized code, and uses memory more efficiently. These improvements will take time. For developers that can't wait, the .NET Framework redistributable includes a utility called NGen.exe.
NGen will only help startup time - it doesn't make the code execute any faster than it would after JITting. Indeed, I believe there are some optimizations which NGen doesn't do but the JIT does.
So, the main question is: do you have an issue with startup time? I don't know how much of an ASP.NET application's start-up time is JITting vs. other costs, btw... you should probably look at the Performance Monitor counters for the JIT (e.g. "% Time in jit") to tell you how much time it's really costing you.
(In terms of availability, having multiple servers so you can do rolling restarts is going to give you much more benefit than a single server with an NGENed web app.)
NGen isn't the way to go for ASP.NET -- the creation of the .dlls in the bin folder isn't the final step -- they are compiled again, with the web/machine.config settings applied, into your C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files folder. Instead of NGen, to decrease initial load time, use the Publish Website tool or aspnet_compiler.exe.
I'm not sure that NGEN's primary benefit is to start-up time alone - if the application's suffering from a high '% Time in JIT', this is listed as a potential remedy:
http://msdn.microsoft.com/en-us/library/dd264972(VS.100).aspx.
The discussion is closely related to a different question about how JIT'd machine code is cached and re-used.
I was looking into this tonight and came across the following:
The common language runtime cannot load images that you create with NGEN into the shared application domain. Because ASP.NET standard assemblies are shared and are then loaded into a shared application domain, you cannot use Ngen.exe to install them into the native image cache on the local computer.
http://support.microsoft.com/kb/331979
I'm not sure if this just refers to assemblies referenced from an ASP.NET app or to the app itself?
NGen helps startup time. For example, the Entity Framework documentation says this about the benefit of running NGen:
Empirical observations show that native images of the EF runtime assemblies can cut between 1 and 3 seconds of application startup time.
The context is just the Entity Framework assemblies. NGenning other assemblies would provide additional speedup.