ClearCase dynamic view contents not visible to ASP.NET web application - asp.net

I am running an ASP.NET application (VS2008/.NET 3.5), and when running it under VS in debug mode it works fine.
It reads files and directories from a network share happily.
I can run "cleartool startview [dynamic view name]" ok, I can "cleartool mount [vob]" happily.
But if I try to test for the existence of a file within the VOB it can't see it.
So I tried something different: I now run "cleartool ls -short -vob_only [filename]" to test for existence.
For both of those it works fine running under VS2008 but won't work when running under IIS post deployment.
I have the set and the user has valid access to ClearCase.
Any ideas?

If you use the built-in web server (Cassini) during development, the web server runs as you and has access to your networked drives, etc. IIS, on the other hand, uses a different user account and doesn't share your user profile, including your ClearCase views. That may explain your problems.

I found the problem and a resolution to this. The problem is that ClearCase is using the process user for authentication whereas the shared drive access mentioned in the original question is using the thread user.
The reason for this is that accessing ClearCase spawns off a new process, and child processes always use the parent process's security context, not the current thread's.
The way around this was to run the web application in its own application pool running as a user with rights to access ClearCase. That is an inflation of rights beyond what should really be necessary, but it did the trick.
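To see the difference for yourself, here is a minimal diagnostic sketch (my own illustration, not part of the original fix): it logs the identity of the request thread next to the identity a spawned child process reports, which is the identity a cleartool call ends up running as.

using System.Diagnostics;
using System.Security.Principal;

public static class IdentityProbe
{
    public static string Describe()
    {
        // The (possibly impersonated) user handling the current request.
        string threadUser = WindowsIdentity.GetCurrent().Name;

        // A child process inherits the worker *process* token, not the thread token,
        // so "whoami" run this way reports the app pool / process account.
        ProcessStartInfo psi = new ProcessStartInfo("whoami");
        psi.RedirectStandardOutput = true;
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;

        string processUser;
        using (Process p = Process.Start(psi))
        {
            processUser = p.StandardOutput.ReadToEnd().Trim();
            p.WaitForExit();
        }

        return string.Format("thread identity: {0}, child process identity: {1}",
                             threadUser, processUser);
    }
}

If the two identities differ, the app pool account (not the impersonated user) is the one that needs ClearCase access.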

Related

ASP.NET app cannot see files

I have an ASP.NET web app (VS 2017 Framework 4.5) that works fine in development. When I deploy to the web server and try to reference a file like this:
PdfBitmap tiffImage = new PdfBitmap(item.TIFPageLocation);
It returns a "Can't find file error". I even set up an if File.Exists() test and had the same result.
The file is a reference to another server like this: \\myserver\myvolume1\00\12\7A\00127A90.TIF.
When I paste this path into Windows Explorer on the web server, it finds and opens the TIF image in the default viewer. However, the web app cannot see it for some reason.
This is likely some permissions issue but I'm not sure where.
Any ideas?
Thanks
Carl
Yes, this is likely a permissions issue. Your web app will probably be running under an account that has limited rights (almost certainly without the rights to access anything on the domain).
You either need to:
a) elevate that account's rights (be very careful with this)
b) run your site under a new user context created specifically for this site with rights to reach the other server
c) put your images somewhere easier to access.
Check your folder permissions; the account probably does not have read access. Cross-check it - that's what worked for me.
It was the account the app pool identity was associated with. That needs to be set properly.
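A small diagnostic along those lines (a sketch of my own; the path is the one from the question) is to log the identity the app actually runs as together with the result of File.Exists. Note that File.Exists returns false both when the file is missing and when access is denied, which is why it can look like the file "isn't there".

using System.IO;
using System.Security.Principal;

// The account the worker process / app pool runs as,
// e.g. IIS APPPOOL\MySite or NT AUTHORITY\NETWORK SERVICE.
string identity = WindowsIdentity.GetCurrent().Name;

// False when the file is missing *or* when this account is denied access to the share.
bool visible = File.Exists(@"\\myserver\myvolume1\00\12\7A\00127A90.TIF");

System.Diagnostics.Trace.WriteLine(
    string.Format("Running as {0}; file visible: {1}", identity, visible));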

Unable to get temp directory for .NET web site hosted in Azure App Service

We're working on validating our Loupe service to run as an Azure App Service and have run into a showstopper we can't figure out. Anything that attempts to resolve a temp directory fails with the exception:
mscorlib : System.IO.IOException
The directory name is invalid.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.__Error.WinIOError()
at System.IO.Path.InternalGetTempFileName(Boolean checkHost)
The stack trace shows this happening inside the .NET method for generating a temp file name, and that stack trace is common to pretty much all the areas where we get the failure. For a while it seemed that forcing the site to restart and/or forcing the underlying App Service Plan to rescale would make it go away until we next updated the site, but no longer.
Since the only search results we could find said this error happens when impersonation is enabled and the user the site is impersonating doesn't have access to the IIS App Pool user's temp directory, we've dug into that. First, we can confirm from our logging that the thread is not impersonating at the time the failing request is made. Second, just for fun, we added this to the web.config to be doubly sure:
<system.web>
<identity impersonate="false"/>
</system.web>
All to no avail. If this was a generic problem with Azure App Services then I would presume it would break many systems, so I have to conclude we've done something fascinating and wrong to cause it.
This might not be the exact answer you're looking for but it might help point you in the right direction.
I had similar issues a while back using the Azure App Services. I found that accessing the local file system was somewhat problematic. Sometimes it worked fine and other times it didn't.
Eventually, I discovered that when an Azure App Service is instantiated, it doesn't always use the same drive letters for the system behind it. In some cases, this can cause the environment variables to be blatantly incorrect. They "think" they are set properly, but that's not always the case.
Generating a temp filename will use that environment variable for the path, and if it's set to C: but the machine has a D: drive instead, it will fail. The C: drive doesn't exist, and therefore the path to the temp file can't exist either.
To identify if this is the problem, you need to enable RDP so you can log into it directly. https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-role-enable-remote-desktop
It's the only way I was able to eventually figure it out.
If you open up the Kudu instance for your App Service Web App you'll be able to see what the local Temp directory is on the Managed VM underneath. You can access Kudu by going to "Advanced Tools" on the App Service blade in the Azure Portal, or by navigating to the https://{web app name}.scm.azurewebsites.net domain for your Web App.
Once in Kudu, click on Environment in the top navigation. The Temp directory is usually D:\local\Temp and that path is stored in the "TEMP" environment variable made accessible to your Web App.
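A quick way to confirm this from inside the app (a sketch of my own, not from the answers above) is to log what Path.GetTempPath() resolves to and whether that directory exists before asking for a temp file:

using System.IO;

string tempPath = Path.GetTempPath();               // follows the TEMP/TMP environment variables
bool tempExists = Directory.Exists(tempPath);

System.Diagnostics.Trace.WriteLine(
    string.Format("TEMP resolves to {0} (exists: {1})", tempPath, tempExists));

if (tempExists)
{
    // Only now is Path.GetTempFileName() safe to call.
    string tempFile = Path.GetTempFileName();
    System.Diagnostics.Trace.WriteLine("Created temp file " + tempFile);
}
else
{
    // Fall back to a directory you control; check the real location on the Kudu
    // Environment page rather than hard-coding a drive letter.
}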

ASP.NET File Monitoring errors in Event Log

We are getting frequent errors in the Event Viewer, Application section. The source is ASP.NET 4.0.30319.0, category is File Monitoring. The Event ID is 1185. Text says "Failed to start monitoring changes to "file-path-here" because the network BIOS command limit has been reached." Then there is a reference to Microsoft knowledge base article 810886.
The question is: what process or service is doing this file monitoring, and why? We are not aware of how this is running or how it started. The monitoring seems to look at various folders on our web site, some are .NET folders, some are not.
We are looking for explanation of what is causing this monitoring; then we will try to address the errors.
When ASP.NET starts running a site, it monitors one particular file in the root of the web site: app_offline.htm. If it finds that file, it stops the application and serves only that file.
If it finds that other files have changed, it recompiles them if necessary, but it keeps serving app_offline.htm (and does not run the site) for as long as that file exists.
Once you remove app_offline.htm the pages start running again, but ASP.NET keeps monitoring for that file, whether it exists or not.
So this is the ASP.NET monitoring you are looking for, and it is the default behaviour of ASP.NET. If you have installed other software, or something else on the machine has added its own monitoring, that is a different matter. Do you have a very large number of ASP.NET web sites on the same server, say 500 or more? If not, then start looking for other software that is monitoring your files.
Analysis
To find out for yourself what is happening, download Handle from Sysinternals and run it, redirecting the output to a text file (handle.exe >> result.txt), then look through the results.
http://technet.microsoft.com/en-us/sysinternals/bb896655
Look for any suspicious program that has a huge number of files open, and note which program it is. Monitored files and directories are shown like this:
runningprogram.exe pid: 1352 ServerName\User
AC: File (RW-) D:\Monitor1
E8: File (RW-) D:\Monitor2
F8: File (RW-) D:\Monitor3
408: File (RWD) D:\InetPub\MySite
More
I checked on my servers and found that a blog creation program had added a monitor on every blog directory. I do not know why, but that is the way they built it: it monitors every blog for some reason. Maybe you have something similar that creates a lot of file/directory monitoring.
The monitoring is being done by IIS (or the aspnet process with IIS6). It's watching for changes to files so that the site can be recompiled when needed.
You didn't mention your environment, but I used to run into this problem frequently when trying to run websites from Windows XP when the sites were located on a remote file share. I think the error comes up due to a limitation in CIFS (the network stack for file shares). Windows Server didn't seem to have the same limitations.
So, a few possible fixes:
Switch to Windows Server (or possibly Win 7)
Switch to a Web Application (doesn't allow recompiles)
Move your files from a remote share to a local drive
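For illustration only, not ASP.NET's actual implementation: the monitoring described above is roughly the same kind of watch you could set up yourself on the site root, looking for app_offline.htm and for content changes that force a recompile (the path is the sample one from the handle output earlier).

using System;
using System.IO;

FileSystemWatcher watcher = new FileSystemWatcher(@"D:\InetPub\MySite");
watcher.IncludeSubdirectories = true;
watcher.EnableRaisingEvents = true;

watcher.Created += delegate(object sender, FileSystemEventArgs e)
{
    if (string.Equals(e.Name, "app_offline.htm", StringComparison.OrdinalIgnoreCase))
    {
        // ASP.NET would stop serving the application and return this file instead.
    }
};

watcher.Changed += delegate(object sender, FileSystemEventArgs e)
{
    // Changes to web.config, bin\ or .aspx files trigger recompilation and/or
    // an app domain recycle in the real runtime.
};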

System.Security.SecurityException when writing to Event Log

I’m working on trying to port an ASP.NET app from Server 2003 (and IIS6) to Server 2008 (IIS7).
When I try and visit the page on the browser I get this:
Server Error in ‘/’ Application.
Security Exception
Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application’s trust level in the configuration file.
Exception Details: System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and the location of the exception can be identified using the exception stack trace below.
Stack Trace:
[SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.]
System.Diagnostics.EventLog.FindSourceRegistration(String source, String machineName, Boolean readOnly) +562
System.Diagnostics.EventLog.SourceExists(String source, String machineName) +251
[snip]
These are the things I’ve done to try and solve it:
Give “Everyone” full access permission to the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Security. This worked. But naturally I can’t do this in production. So I deleted the “Everyone” permission after running the app for a few minutes and the error re-appeared.
I created the source in the Application log and the Security log (and I verified it exists via regedit) during installation with elevated permissions but the error remained.
I gave the app a full trust level in the web.config file (and using appcmd.exe) but to no avail.
Does anyone have an insight as to what could be done here?
PS: This is a follow up to this question. I followed the given answers but to no avail (see #2 above).
To give Network Service read permission on the EventLog/Security key (as suggested by Firenzi and royrules22) follow instructions from http://geekswithblogs.net/timh/archive/2005/10/05/56029.aspx
Open the Registry Editor:
1. Select Start, then Run, and enter regedt32 or regedit
2. Navigate/expand to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Security
3. Right-click on this entry and select Permissions
4. Add the Network Service user
5. Give it Read permission
UPDATE: The steps above are fine on developer machines, where you do not use a deployment process to install the application.
However, if you deploy your application to other machine(s), consider registering the event log sources during installation, as suggested in SailAvid's and Nicole Calinoiu's answers.
I am using a PowerShell function (called from Octopus's Deploy.ps1):
function Create-EventSources() {
    $eventSources = @("MySource1", "MySource2")
    foreach ($source in $eventSources) {
        if ([System.Diagnostics.EventLog]::SourceExists($source) -eq $false) {
            [System.Diagnostics.EventLog]::CreateEventSource($source, "Application")
        }
    }
}
See also Microsoft KB 2028427 Fail to write to the Windows event log from an ASP.NET or ASP application
The problem is that EventLog.SourceExists tries to access the EventLog\Security key, and that access is only permitted to an administrator.
A common example of a C# program logging to the EventLog is:
string sSource = "dotNET Sample App";
string sLog = "Application";
string sEvent = "Sample Event";

if (!EventLog.SourceExists(sSource))
    EventLog.CreateEventSource(sSource, sLog);

EventLog.WriteEntry(sSource, sEvent);
EventLog.WriteEntry(sSource, sEvent, EventLogEntryType.Warning, 234);
However, the following lines fail if the program doesn't have administrator permissions and the key is not found under EventLog\Application, because EventLog.SourceExists will then try to access EventLog\Security.
if (!EventLog.SourceExists(sSource))
    EventLog.CreateEventSource(sSource, sLog);
Therefore the recommended way is to create an install script, which creates the corresponding key, namely:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\dotNET Sample App
One can then remove those two lines.
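If you still need a runtime check but want to avoid touching the Security log, one alternative (a sketch of my own, not from this answer) is to read only the Application branch, which non-administrator accounts can normally read:

using Microsoft.Win32;

// Looks for the source only under the Application log, so EventLog\Security is never probed.
// The source name matches the sample above.
string source = "dotNET Sample App";
bool sourceRegistered;
using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
    @"SYSTEM\CurrentControlSet\Services\EventLog\Application\" + source))
{
    sourceRegistered = (key != null);
}
// If it is not registered, let an elevated install step create it rather than
// calling CreateEventSource from the (unprivileged) application itself.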
You can also create a .reg file to create the registry key. Simply save the following text into a file create.reg:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\dotNET Sample App]
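Another way to do the install-time registration (a sketch, assuming you have a setup project or an installutil step to host it) is an installer class built on System.Diagnostics.EventLogInstaller:

using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;

[RunInstaller(true)]
public class SampleEventLogInstaller : Installer
{
    public SampleEventLogInstaller()
    {
        // Registers the source under the Application log at install time,
        // while the installer is already running elevated.
        EventLogInstaller eventLogInstaller = new EventLogInstaller();
        eventLogInstaller.Source = "dotNET Sample App";
        eventLogInstaller.Log = "Application";
        Installers.Add(eventLogInstaller);
    }
}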
The solution was to give the "Network Service" account read permission on the EventLog/Security key.
For me, only granting 'Read' permission for 'NetworkService' on the whole 'EventLog' branch worked.
I had a very similar problem with a console program I develop under VS2010 (upgraded from VS2008 under XP).
My program uses EntLib to do some logging.
The error was fired because EntLib did not have permission to register a new event source.
So I ran my compiled program once as an Administrator: it registered the event source.
Then I went back to developing and debugging from inside VS without a problem.
(You may also refer to http://www.blackwasp.co.uk/EventLog_3.aspx; it helped me.)
This exception was occurring for me from a .NET console app running as a scheduled task, and I was trying to do basically the same thing - create a new Event Source and write to the event log.
In the end, setting full permissions for the user under which the task was running on the following keys did the trick for me:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Security
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog
I tried almost everything in here to solve this problem... I'll share the answer that helped me:
Another way to resolve the issue:
In the IIS console, go to the application pool managing your site and note the identity it runs as (usually Network Service).
Make sure this identity can read HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog (right-click, Permissions).
Now change the identity of this application pool to Local System, apply, and then switch it back to Network Service.
The credentials will be reloaded and the EventLog will be reachable.
(From http://geekswithblogs.net/timh/archive/2005/10/05/56029.aspx; thanks Michael Freidgeim.)
A new key with the source name you use needs to be created under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application in RegEdit when you call System.Diagnostics.EventLog.WriteEntry("SourceName", "ErrorMessage", EventLogEntryType.Error);
So basically your user does not have permission to create that key. You can do the following, depending on which user you have set as the Identity value in the Application Pool's Advanced Settings:
1. Run RegEdit and go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog
2. Right-click the EventLog key and select the Permissions... option
3. Add your user with Full Control access.
- If you are using "NetworkService", add the NETWORK SERVICE user.
- If you are using "ApplicationPoolIdentity", add IIS AppPool\{name of your app pool} (use the local machine location when searching for the user).
- If you are using "LocalSystem", make sure that the user has Administrator permissions. This is not recommended because of the vulnerabilities it opens up.
Repeat steps 1 to 3 for HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Security
For debugging with Visual Studio I use "NetworkService" (the ASP.NET user) and when the site is published I use "ApplicationPoolIdentity".
I ran into the same issue, but I had to go up one level and give everyone full access to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\ key, instead of going down to Security. That cleared up the issue for me.
Same issue on Windows 7 64-bit.
Running as administrator solved the problem.
There does appear to be a glaringly obvious solution to this, for which I've yet to see a huge downside, at least where it's not practical to obtain administrative rights in order to create your own event source: use one that's already there.
The two which I've started to make use of are ".Net Runtime" and "Application Error", both of which seem like they will be present on most machines.
The main disadvantages are that you can't group by your own source, and that you probably don't have an associated Event ID, which means that if you omit one the log entry may very well be prefixed with something to the effect of "The description for Event ID 0 from source .Net Runtime cannot be found....". Still, the entry goes in, and the output looks broadly sensible.
The resultant code ends up looking like:
EventLog.WriteEntry(
".Net Runtime",
"Some message text here, maybe an exception you want to log",
EventLogEntryType.Error
);
Of course, since there's always a chance you're on a machine that doesn't have those event sources for whatever reason, you probably want to wrap it in a try {} catch {} in case it fails and makes things worse; but with that in place, events are now saveable.
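The wrapped call would look something like this (just a sketch of the guard described above, reusing the illustrative source and message; requires using System and System.Diagnostics):

try
{
    EventLog.WriteEntry(
        ".Net Runtime",
        "Some message text here, maybe an exception you want to log",
        EventLogEntryType.Error
    );
}
catch (Exception)
{
    // The borrowed source may not exist on this machine, or the account may lack
    // rights; swallow (or fall back to another sink) so logging never takes the app down.
}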
FYI... my problem was that I accidentally selected "Local Service" as the Account in the properties of the ProcessInstaller instead of "Local System". Just mentioning it for anyone else who followed the MSDN tutorial, as the Local Service selection shows first and I wasn't paying close attention...
I'm not working on IIS, but I do have an application that throws the same error on a 2K8 box. It works just fine on a 2K3 box, go figure.
My resolution was to "Run as administrator" to give the application elevated rights and everything works happily. I hope this helps lead you in the right direction.
Windows 2008's rights/permissions/elevation model is really different from Windows 2003's, gar.
Hi, I ran into the same problem when I was developing an application and wanted to install it on a remote PC. I fixed it by doing the following:
1) Go to your registry and locate: HKLM\System\CurrentControlSet\Services\EventLog\Application\(???YOUR_SERVICE_OR_APP_NAME???)
Note that "(???YOUR_SERVICE_OR_APP_NAME???)" is your application's service name as you defined it when you created your .NET deployment. For example, if you named your new application "My new App" then the key would be: HKLM\System\CurrentControlSet\Services\EventLog\Application\My New app
Note 2: Depending on which event log you are writing into, on your DEV box you may find the key under \Application\ (as noted above), or under \System or \Security, depending on what events your application writes. In most cases \Application should be fine.
2) With the key above selected, from the menu choose "File" -> "Export" and save the file. (This captures the registry settings your application needs in order to write to the Event Viewer.) The new file will be a .REG file; for argument's sake, call it "My New App.REG".
3) When deploying to PRODuction, consult the server's system administrator (SA), hand over the "My New App.REG" file along with the application, and ask the SA to install the REG file. Once done (as admin), this creates the key for your application.
4) Run your application; it should not need to access anything other than this key.
The problem should be resolved by now.
Cause:
An application that writes anything to the EventLog needs a key for it under the EventLog registry branch. If this key isn't found, the application tries to create it, which then fails because it has no permission to do so. The process above is similar to deploying the application manually: we create the key ourselves, and there's no headache, since you are not tweaking the registry by adding permissions for EVERYONE, which is a security risk on production servers.
I hope this helps resolve it.
Though the installer answer is a good answer, it is not always practical when dealing with software you did not write. A simple answer is to create the log and the event source using the PowerShell command New-EventLog (http://technet.microsoft.com/en-us/library/hh849768.aspx)
Run PowerShell as an Administrator and run the following command changing out the log name and source that you need.
New-EventLog -LogName Application -Source TFSAggregator
I used it to solve the "Event Log Exception when Aggregator runs" issue from CodePlex.
Had a similar issue with all of our 2008 servers. The Security log stopped working altogether because of a GPO that took the Authenticated Users group's read permission away from the key HKLM\System\CurrentControlSet\Services\EventLog\Security.
Putting this back, per Microsoft's recommendation, corrected the issue. I suspect giving Authenticated Users read access at a higher level will also correct your problem.
I hit a similar issue; in my case the Source contained < and > characters. 64-bit machines use the new event log format (XML-based, I would say), and those characters (set from a string) create invalid XML, which causes the exception. Arguably this should be considered a Microsoft issue: not handling the Source name/string correctly.
My app gets installed on client web servers. Rather than fiddling with Network Service permissions and the registry, I opted to check SourceExists and run CreateEventSource in my installer.
I also added a try/catch around log.Source = "xx" in the app to set it to a known source if my event source wasn't created (this would only come up if I hot-swapped a .dll instead of re-installing).
The solution is very simple: run the Visual Studio application in Admin mode!
I had a console application where I also had done a "Publish" to create an Install disk.
I was getting the same error as the OP.
The solution was to right-click setup.exe and click Run as Administrator.
This gave the install process the necessary privileges.
I had this issue when running an app within VS. All I had to do was run the program as Administrator once, then I could run from within VS.
To run as Administrator, just navigate to your debug folder in windows explorer. Right-click on the program and choose Run as administrator.
Try the below in web.config:
<system.web>
<trust level="Full"/>
</system.web>
Rebuilding the solution worked for me

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested of the workflow happening between the time your Continuous Integration Build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom-built .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (this task is performed only the first time an application is deployed; see the sketch after this list). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
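For reference, here is a minimal sketch of that first-deployment step. The answer above does it against the IIS 6 metabase with System.DirectoryServices.DirectoryEntry; this sketch shows the same idea with the newer Microsoft.Web.Administration API (IIS 7+), and all names and paths are illustrative:

using Microsoft.Web.Administration;

using (ServerManager manager = new ServerManager())
{
    // Create the application pool and give it a specific identity.
    ApplicationPool pool = manager.ApplicationPools.Add("MyAppPool");
    pool.ProcessModel.IdentityType = ProcessModelIdentityType.NetworkService;

    // Create the application under an existing site and bind it to the pool.
    Site site = manager.Sites["Default Web Site"];
    Application app = site.Applications.Add("/MyApp", @"C:\inetpub\wwwroot\MyApp");
    app.ApplicationPoolName = "MyAppPool";

    manager.CommitChanges();
}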
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.config and other files), and that's it.
Of course, for the first run you need to connect to the server and set up the IIS website and database, but after that publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect to it directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, SFTP to the server, extract into the right location. For the first deployment, manual setup of IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a capistrano-like deployment solution I wrote for .net applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .net applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application onto the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; any deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MsBuild. Setting this up is an adventure, because you need to install the correct MsBuild and then the ASP.NET and MVC dll's on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, msbuild was randomly crashing, so I had to disable it)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment; dev, uat, prod. (I tried using the web transformation stuff with little success). So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send an HTTP request to the UAT server and make sure I get back a 200 (see the sketch after this list)
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
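The HTTP check in the list above can be as simple as this sketch (the URL and timeout are illustrative):

using System.Net;

public static bool SmokeTest(string url)
{
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Timeout = 30000; // 30 seconds

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        // Connection failures and non-2xx responses end up here; fail the deployment.
        return false;
    }
}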
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show which revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to always ensure that UAT is always up to date with Prod. Still, it's a pain.
There's quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there's a few support batch files needed as part of the process. (These are also checked in).
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rSync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you could find the last backed-up version of every file in that folder.
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files, but instead creating a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new app domain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
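A sketch of the repointing step (my own illustration, assuming Microsoft.Web.Administration is available; the site, application and folder names are made up):

using Microsoft.Web.Administration;

using (ServerManager manager = new ServerManager())
{
    // Point the existing application at the freshly deployed, versioned folder.
    Application app = manager.Sites["Default Web Site"].Applications["/MyApp"];
    app.VirtualDirectories["/"].PhysicalPath = @"D:\Sites\MyApp\v2.3.0";
    manager.CommitChanges();   // IIS starts a new app domain against the new path
}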
