Tridion 2011 SP1: Issue while deploying content

We are using Tridion 2011 SP1 without any hotfixes, and a .NET web application (httpupload.aspx) to deploy content to the file system.
We have been monitoring and found two issues:
1) Sometimes pages that are published successfully in the publishing queue are not uploaded/updated in the file system.
2) The transport package is not created for pages that fail, with the error:
Deploying FailedPhase: Deployment Processing Phase failed, Could not initialize class com.tridion.storage.StorageManagerFactory, Could not initialize class com.tridion.storage.StorageManagerFactory
Also, in the deployer log file and transporter log file there is no reference to the failed item's transaction ID.
Can anyone help me out with this?

You must have some more detail on the failure in your logs than just this.
"Could not initialize class com.tridion.storage.StorageManagerFactory" will typically point to a misconfigured cd_storage_conf.xml or a missing jar.
If you only get this occasionally, then there must be something that fails occasionally (like your database connection or file system).
Please scan through your deployer and/or core logs for additional information.
[UPDATE]
I think you may have a second deployer "listening" to the same incoming directory, and that 2nd deployer is broken.
Hints of that:
You say no transport package is created. I assume you mean you can't find the transport package - it must have been created in the CM, otherwise it couldn't fail. This means "someone" picked it up.
"Sometimes they're published, sometimes not" == Sometimes they're picked up by the right deployer, sometimes they're picked up by the wrong one.
No references to the transaction in the logs
Search your server for all cd_deployer_conf.xml files, and compare all of their "incoming" folder settings. You can only have one deployer per incoming folder.
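A quick way to do that search from PowerShell (drive letters are placeholders; this just lists every cd_deployer_conf.xml it finds together with the lines that mention a Location, so the incoming paths can be compared side by side):
Get-ChildItem -Path C:\, D:\ -Recurse -Filter cd_deployer_conf.xml -ErrorAction SilentlyContinue |
    Select-String -Pattern 'Location' |
    Select-Object Path, LineNumber, Line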

Try the following:
1) In the Windows event logs, identify the path of the Deployer that is getting loaded. Generally it is defined by the TRIDION_HOME variable, but there is also roll-up logic in place, and the deployer path may be picked up from your application config with higher priority if you have placed the deployer config and bin folders within your application's bin folder for processing by the Tridion Content Delivery API.
2) Check that the updated SQL JDBC jar file is present in the deployer bin folder.
3) Verify that you do not have a JRE version between 1.6.0_26 and 1.6.0_30 installed on the CMA and/or CDA server - check both the 32-bit and the 64-bit versions (a quick check is sketched below).
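For point 3, a rough PowerShell sketch to see which JRE is actually picked up and what is registered (the JavaSoft registry keys are the usual Sun/Oracle JRE locations and may not exist on every box):
& java -version 2>&1
Get-ItemProperty 'HKLM:\SOFTWARE\JavaSoft\Java Runtime Environment' -ErrorAction SilentlyContinue | Select-Object CurrentVersion
Get-ItemProperty 'HKLM:\SOFTWARE\Wow6432Node\JavaSoft\Java Runtime Environment' -ErrorAction SilentlyContinue | Select-Object CurrentVersion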

Related

Alfresco Community v5.1 does not start and loads index.html forever

For some unexplained reason I can't use Alfresco since yesterday.
Let me tell you what happens.
First of all, I didn't change any conf file or anything like that.
I started the Tomcat and PostgreSQL services and after that I tried to load "localhost:8080/share", but it kept loading forever.
I tried checking the log files, but to no avail; there are no error messages, nothing unusual.
After that I deleted the alfresco and share folders inside "webapps", just in case, but that failed too.
Finally, I can't stop these services from the service manager, because I am at work and I have no access privileges.
My main concern is that I don't even know the cause of this issue, so I don't even know how to ask for help.
You say you don't have permission to delete the folders (share + alfresco) or to stop the services. Without stopping the services, you can't completely delete the files from the alfresco and share folders.
You need to find out whether the problem is in Alfresco Share, the Alfresco repository, the database, or Tomcat.
Check Tomcat
Open http://localhost:8080 and check whether Tomcat is running.
Check Database
Check the database service from the Service Manager, or connect via the pgAdmin tool, to verify that it is running.
Check Repo
http://localhost:8080/alfresco - It should display some basic information about the Alfresco repository; otherwise it is clear that the Alfresco repository itself has failed.
Check Share
http://localhost:8080/share - It should display the login page, if everything works fine.
Logs
Check and share the alfresco.log, share.log, solr.log, catalina, tomcat stdout and tomcat stderr log files. Some error information will almost certainly have been recorded in one of these files.
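A minimal PowerShell sketch that runs the three URL checks above in one pass (localhost and port 8080 as in the question; adjust if your Tomcat listens elsewhere):
foreach ($url in 'http://localhost:8080', 'http://localhost:8080/alfresco', 'http://localhost:8080/share') {
    try { $r = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 30; "$url -> HTTP $($r.StatusCode)" }
    catch { "$url -> $($_.Exception.Message)" }
}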

Deploying not happening in publishing process

I am trying to publish to the local file system; however, publishing is not happening properly and it fails to deploy in my 2011 GA VM environment.
I am getting "Polling for notification for destination: YTnMgU6u5Vh09cOGUG7ouA== has exceeded polling attempts for transaction: tcm:0-121257-66560" error in "Preparing Deployment" stage.
I have used the "Local File System" protocol in my publication target and I have provided a path like d:\tridion\publish.
I have provided the same path in cd_storage_conf.xml under the <storage type="filesystem"> element. All other storage types are commented out.
And in cd_deployer_conf.xml, the Queue location path is c:\tridion\incoming.
When I publish any page to my publication target, the zipped package is placed in d:\tridion\publish and it's not deployed.
Do I need to do any other thing to deploy the zipped package?
The path provided in cd_deployer_conf.xml (the one you specify in Queue/Location!) needs to be the same one you provide in your publication target. In your case the publication target points to a path on the D: drive, while the deployer config points to one on the C: drive. You also need to make sure that your deployer is initialized. You can easily determine whether the deployer is initialized by checking whether the meta.xml is regenerated in the deployer's incoming folder.
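A quick way to check that second point (whether the deployer is initialized) is to look at the timestamp on meta.xml in the folder configured under Queue/Location - c:\tridion\incoming in your config. For example, from PowerShell:
Get-Item 'c:\tridion\incoming\meta.xml' -ErrorAction SilentlyContinue | Select-Object FullName, LastWriteTime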
Not sure if this is relevant, but you might also be interested in how to install the deployer: as a .NET website, a Java website, or a Windows service.
Hope this helps.
You say your working sites use HTTP sender/deployer. In that scenario your deployer is triggered by the HTTP servlet which receives the transport package.
When you use local file system - you MUST configure your deployer to work in a different way. It has to run as some form of background service. Typically on a windows box this means installing the deployer as a windows service. Keep in mind that this will then probably have additional config files for the deployer and broker/storage.

Publishing failed

We are using Tridion 2011 SP1. Some pages/components fail while publishing with the error mentioned below.
Phase: Deployer Prepare Phase failed, Unable to unzip,
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process),
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process), Unable to unzip,
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process),
D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip (The process
cannot access the file because it is being used by another process)
Components/pages are failing in the Preparing Deployment stage; how should we fix this?
Do you have multiple Deployers using the same incoming location?
It looks like you’re running the Deployer as a WebApp – is the Deployer service also running on the system?
If you search for all files named “cd_deployer_conf.xml”, do they have the same incoming folder (D:\Inetpub\TridionPublisherFS4SP\incoming) defined?
Otherwise, you might use ProcMon to watch the folder and see what else is accessing the file.
If you still have this issue, you may try
1. deleting all files under incoming,
2. making sure there is no encryption enabled for the incoming folder (Some companies apply encrypt script immediately to the files that are added to the drive) or
3. making sure your antivirus is not screening that folder (As Nuno mentioned).
Do a restart of the deployer app and verify in the logs?
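To confirm the lock from the server itself, a small PowerShell sketch that tries to open the package exclusively (the path is the one from the error message); if another process still holds the file, the catch branch fires, and ProcMon or Sysinternals handle.exe will tell you which process it is:
$p = 'D:\Inetpub\TridionPublisherFS4SP\incoming\tcm_0-286137-66560.Content.zip'
try { $fs = [System.IO.File]::Open($p, 'Open', 'Read', 'None'); $fs.Close(); "$p is not locked" }
catch { "$p is locked: $($_.Exception.Message)" }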

ORA-12154 could not resolve the connect identifier specified

I have switched over to 64-bit Windows 7 and created a simple web app to test the connection to the database. I am using VS 2010 - a plain ASP.NET web project - and I am running the application from within VS.
I am getting this error:
"ORA-12154 could not resolve the connect identifier specified"
I also have a sample console application that tests the connection to the database, and it works fine.
After googling it some, I found that a lot of posts online referred to permissions, so I set my C:\Oracle permissions to read/write/execute for my ASP.NET account, NETWORK SERVICE and COMPUTER NAME. That still didn't solve the issue. I checked that my web app runs under my domain\username account and that this account has the rights to read/write/execute on the C:\Oracle folder.
I even re-installed my VS to make sure that it is in C:\Program Files rather than C:\Program Files(x86)
Any ideas to why my web app doesn't see the connection string? (while the console app does)
Not sure what else I can do.
I am going to assume you are using the tnsnames.ora file to specify your available database services. If so connection errors usually come down to two things.
The application cannot find the TNS entry you specified in the connection string.
The TNS entry was found, but the IP or host is not correct in the tnsnames.ora file.
To expand on number 1 (which I think is your problem): when you tell Oracle to connect using something like:
sqlplus user/pass@service
The service is defined in the tnsnames.ora file. If I attempt to connect with a service that is not defined in my tnsnames.ora, I get the error you get:
[sodonnel@home ~]$ sqlplus sodonnel/sodonnel@nowhere
SQL*Plus: Release 11.2.0.1.0 Production on Mon Oct 31 21:42:15 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
So you need to check a few things:
Is there a tnsnames.ora file - I think yes because your console can connect
Is there an entry in the file for the service - I think also yes as the console connects
Can the application find the tnsnames.ora?
Your problem may well be number 3 - does the application run as a different user than when you run the console?
Oracle looks for the tnsnames.ora file in the directory defined in the TNS_ADMIN environment variable - If you are running as different users, then maybe the TNS_ADMIN environment variable is not set, and therefore it cannot find the file?
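For reference, a typical tnsnames.ora entry looks like the following (host, port and service name are placeholders); the identifier on the left of the '=' is the name the connection string has to use:
MYSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MYSERVICE))
  )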
Had a similar issue, only my web app was fine and it was SQLPlus that was giving me issues connecting, and the ORA-12154 could not resolve the connect identifier specified error. I had 11g and 12 Oracle clients installed. My environment variables were all set to point at my 12 instance:
ORACLE_HOME = C:\oracle\product\12
PATH = C:\oracle\product\12\bin;....
TNS_ADMIN = C:\oracle\product\12\network\admin
There is also a registry entry required at HKLM\Software\Oracle\KEY_OraClient12Home1, a string entry of TNS_ADMIN with the same path as the environment variable.
I have a tnsnames.ora at both C:\oracle\product\11\network\admin and C:\oracle\product\12\network\admin. As far as I know, both my web app and the 12 SQLPlus client I was using should have been using all 12 version variables.
My troubleshooting steps:
Change all environmental variables above from 12 to 11.
Connect with 11g's SQLPlus (worked!)
Change all environmental variables above back from 11 to 12.
Connect with 12's SQLPlus again (worked!)
So I don't really know what caused 12's SQLPlus to stop connecting, but this kind of reset may work for someone, so thought I'd document it here.
I have an Entity Framework web application that works on my local machine, but this error appears when pushed to another environment. There are other non-Entity Framework applications that work, and I'm able to connect with sqlplus.
Using Sysinternals Process Monitor shows that the tnsnames file is not being loaded correctly.
Following the documentation I tried to add a section giving the location of the tnsnames file like so:
<configuration>
<configSections>
<section name="oracle.manageddataaccess.client"
type="OracleInternal.Common.ODPMSectionHandler, Oracle.ManagedDataAccess, Version=4.122.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
</configSections>
<oracle.manageddataaccess.client>
<version number="*">
<settings>
<setting name="TNS_ADMIN" value="C:\Oracle\product\12.1.0\client_1\Network\Admin"/>
</settings>
</version>
</oracle.manageddataaccess.client>
</configuration>
However, this resulted in an immediate 500 server error.
Further investigation showed that the DLL I was packaging with the web application was version 4.122.1.0, while the Oracle client environment installed on the machine was 4.121.2.0. As explained in the Oracle EntityFramework package documentation:
Note: If your application is a web application and the above entry was added to a web.config and the same config section handler for "oracle.manageddataaccess.client" also exists in machine.config but the "Version" attribute values are different, an error message of "There is a duplicate 'oracle.manageddataaccess.client' section defined." may be observed at runtime. If so, the config section handler entry in the machine.config for "oracle.manageddataaccess.client" has to be removed from the machine.config for the web application to not encounter this error. But given that there may be other applications on the machine that depended on this entry in the machine.config, this config section handler entry may need to be moved to all of the application's .NET config file on that machine that depend on it.
I attempted to add a separate version section in the .NET machine.config without success (there existed a section for version 4.121.2.0 and I added a section for version 4.122.1.0). After I removed the "oracle.manageddataaccess.client" section from the machine.config, the above addition to the web.config resolved ORA-12154.
Solution #1 summary:
Remove "oracle.manageddataaccess.client" from .NET machine.config
Give TNS_ADMIN configuration setting in web.config as above
Solution 2
While researching this problem I found that the TNS_ADMIN environmental variable was not set. I created a new environmental variable called TNS_ADMIN and set the value to "C:\Oracle\product\12.1.0\client_1\Network\Admin". I removed the web.config changes, and removed the "oracle.manageddataaccess.client" section from .NET machine.config, but still received ORA-12154. Only after I restarted the machine did this resolve the issue.
Solution #2 summary:
Create a new environmental variable called TNS_ADMIN and set the value to "C:\Oracle\product\12.1.0\client_1\Network\Admin"
Restart machine
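If you prefer to script Solution #2, a one-line sketch from an elevated PowerShell prompt sets the same machine-level variable (path as in this answer); note that services such as IIS only see the new variable after they restart, which is why the reboot mattered:
[Environment]::SetEnvironmentVariable('TNS_ADMIN', 'C:\Oracle\product\12.1.0\client_1\Network\Admin', 'Machine')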
Solution 3
I added an entry for the correct version in the registry and this resolved the issue:
HKLM\Software\Wow6432Node\Oracle\ODP.NET.Managed\4.121.2.0
The name of the key is TNS_ADMIN and this points to the folder containing the tnsnames file:
C:\Oracle\product\12.1.0\client_1\network
Not the C:\Oracle\product\12.1.0\client_1\network\admin folder.
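The same registry value can be created from an elevated PowerShell prompt (key path and folder as in this answer; the key itself must already exist):
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Oracle\ODP.NET.Managed\4.121.2.0' -Name 'TNS_ADMIN' -Value 'C:\Oracle\product\12.1.0\client_1\network' -PropertyType String -Force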
There can be many causes, but if you are using Oracle 10g, uninstall it, remove the value from the registry, and install Oracle 11g. If you are using Oracle 11g, first check in the registry that it is pointing to the right home. Sometimes there is more than one home because SQL Developer was installed repeatedly. In that case you can either remove the unnecessary home registry values or add the tnsnames.ora and sqlnet.ora files to each of those homes; that should resolve the issue. That is how I resolved mine.
I had the same issue. In my case I was using a web service which was built using the AnyCPU setting. Since the WCF service was using 32-bit Oracle Data Access Components, it raised the same error when I tried to call it from a console client. When I compiled the WCF service with the x86 platform setting, the client was able to successfully get data from the web service.
If you compile as "Any CPU" and run on an x64 platform, then you won't be able to load 32-bit DLLs (which in our case were the Oracle Data Access Components), because our app wasn't started in WOW64 (Windows 32 on Windows 64). So in order to allow the 32-bit dependency on the Oracle Data Access Components, I compiled the web service with a platform target of x86 and that solved it for me.
As an alternative, installing the 64-bit ODAC drivers on the machine also makes the problem go away.
If you are using LDAP, make sure that the environment variable "TNS_ADMIN" exists and points to the folder containing the file "ldap.ora".
If this variable does not exist, create it and restart Visual Studio.
Another obnoxious error type I've encountered in Oracle 11: entries in tnsnames.ora that don't begin at the first column of the line (' XXX=(...)' instead of 'XXX=(...)') and are parsed together with the preceding entry, making it malformed.
Like replaced or misplaced tnsnames.ora files, this type of problem is easy to diagnose with the tnsping command-line utility: you pass it the name of a tnsnames.ora entry and it gives the complete text of its definition.
Please let me repeat what Stephen said, since I missed it the first time myself. The TNS_ADMIN environment variable and ORACLE_HOME are set to C:\instantclient_11_2 and the tnsnames.ora file is in there. Found the answer on this link.
This error (and also ORA-6413: Connection not open) can also be caused by parentheses in the application executable path and a bug in the 10.2.0.1 or lower oracle client libraries.
You should either upgrade your oracle client library or change the executable path.
Further details see:
http://blogs.msdn.com/b/debarchan/archive/2009/02/04/good-old-connectivity-issue.aspx
http://social.technet.microsoft.com/Forums/de-DE/ab662d63-6385-4f73-b27f-d526048f601f/connecting-to-oracle-on-64bit-x64-machine
Use Process Monitor and search for a NAME NOT FOUND entry for the tnsnames.ora file.
Check your environment variables. If they are not valid, uninstall all Oracle clients and reinstall.
I had this error in Visual Studio 2013, with an SSIS project. I set Project, Properties, Debugging, Run64BitRuntime = false and then the SSIS package ran. However, when I deployed the package to the server I had to set the value to true (Server is 64 bit Windows 2012 / Sql 2014 ).
I think the reasoning behind this is that Visual Studio is a 32 bit application.
This is an old question but Oracle's latest installers are no improvement, so I recently found myself back in this swamp, thrashing around for several days ...
My scenario was SQL Server 2016 RTM. The 32-bit Oracle 12c client + ODAC was eventually working fine for Visual Studio Report Designer and the Integration Services designer, and also for SSIS packages run through SQL Server Agent (with the 32-bit option). 64-bit was working fine for the Report Portal when defining and testing a Data Source, but running the reports always gave the dreaded "ORA-12154" error.
My final solution was to switch to an EZCONNECT connection string - this avoids the TNSNAMES mess altogether. Here's a link to a detailed description, but it's basically just: host:port/sid
http://www.oracledistilled.com/oracle-database/oracle-net/using-easy-connect-ezconnect-naming-method-to-connect-to-oracle-databases/
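For example (host, port and service name are placeholders), the same connection expressed as EZCONNECT, first for SQL*Plus and then as an ODP.NET-style connection string:
sqlplus user/password@//dbhost.example.com:1521/ORCL
Data Source=dbhost.example.com:1521/ORCL;User Id=user;Password=password;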
In case it helps anyone in the future (or I get stuck on this again), here are my Oracle install steps (the full horror):
Install Oracle drivers: Oracle Client 12c (32-bit) plus ODAC.
a. Download and unzip the following files from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-win64-download-2297732.html and http://www.oracle.com/technetwork/database/windows/downloads/utilsoft-087491.html:
i. winnt_12102_client32.zip
ii. ODAC112040Xcopy_32bit.zip
b. Run winnt_12102_client32\client32\setup.exe. For the Installation Type, choose Admin. For the installation location enter C:\Oracle\Oracle12. Accept other defaults.
c. Start a Command Prompt “As Administrator” and change directory (cd) to your ODAC112040Xcopy_32bit folder.
d. Enter the command: install.bat all C:\Oracle\Oracle12 odac
e. Copy the tnsnames.ora file from another machine to these folders: *
i. C:\Oracle\Oracle12\network\admin *
ii. C:\Oracle\Oracle12\product\12.1.0\client_1\network\admin *
Install Oracle Client 12c (x64) plus ODAC
a. Download and unzip the following files from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/database12c-win64-download-2297732.html and http://www.oracle.com/technetwork/database/windows/downloads/index-090165.html:
i. winx64_12102_client.zip
ii. ODAC121024Xcopy_x64.zip
b. Run winx64_12102_client\client\setup.exe. For the Installation Type, choose Admin. For the installation location enter C:\Oracle\Oracle12_x64. Accept other defaults.
c. Start a Command Prompt “As Administrator” and change directory (cd) to the C:\Software\Oracle Client\ODAC121024Xcopy_x64 folder.
d. Enter the command: install.bat all C:\Oracle\Oracle12_x64 odac
e. Copy the tnsnames.ora file from another machine to these folders: *
i. C:\Oracle\Oracle12_x64\network\admin *
ii. C:\Oracle\Oracle12_x64\product\12.1.0\client_1\network\admin *
* If you are going with the EZCONNECT method, then these steps are not required.
The ODAC installs are tricky and obscure - thanks to Dan English who gave me the method (detailed above) for that.
Use this link on Microsoft Support.
I gave permission to the IUSR_MachineName user on the Oracle home folder and I was able to resolve the problem.
If your password contains # then you have to change it; most Oracle products don't accept the # character in passwords.
In my case, the reason for this error was that I was missing the tnsnames.ora file under client_32\NETWORK\ADMIN
Go to Control Panel -> search for 'Environment Variables' and click on 'Edit the system environment variables'.
Make sure it is not 'Edit environment variables for your account'.
Click on the 'Environment Variables' button, then under the 'System variables' list click 'New'.
Fill in the boxes as follows:
Name: TNS_ADMIN
Value: [path of the tnsnames file on your PC], e.g. D:\app\[userName]\product\11.2.0\client_1\Network\Admin
Restart your PC [important to apply the changes]
In my case, I changed the connection string from user/pass@19.168.x.x:portNum/SID to user/pass@alias, where alias is the alias provided in the tnsnames.ora file, and it worked.

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested of the workflow happening between the time your Continuous Integration Build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action, and in the OnInstall event we run a custom-built .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (this task is performed only the first time an application is deployed; a minimal sketch follows this list). We support multiple Web Sites in IIS, authentication for Virtual Directories, and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
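For reference, a rough (untested) PowerShell sketch of the kind of ADSI call our custom install action makes - the real code uses System.DirectoryServices.DirectoryEntry from .NET, the IIS 6 ADSI/metabase compatibility components must be installed, and the pool name is a placeholder:
$appPools = [ADSI]'IIS://localhost/W3SVC/AppPools'
$pool = $appPools.Create('IIsApplicationPool', 'MyAppPool')
$pool.SetInfo()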
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
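For illustration, the two robocopy steps above might look roughly like this (server and share names are placeholders): the /L pass only lists what would change, which is what drives the "goof" alert; the second pass mirrors the content onto a node once it is out of the cluster:
robocopy \\buildserver\drop\site \\web01\d$\inetpub\site /MIR /L /NJH /NJS
robocopy \\buildserver\drop\site \\web01\d$\inetpub\site /MIR /R:2 /W:5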
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish website to a local folder, zip it, then upload it over FTP. Deployer on server then extracts zip, replaces config values (in Web.Config and other files), and that's it.
Of course for first run you need to connect to the server and setup IIS WebSite, database, but after that publishing updates is piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If server is behind bunch of routers and you can't directly connect (which is requirement of SQL Compare), use https://secure.logmein.com/products/hamachi2/ to create VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, sftp to the server, extract into the right location. For the first deployment, manual set up of IIS
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a capistrano-like deployment solution I wrote for .net applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .net applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application onto the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
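Not Unfold itself, but a minimal sketch of the kind of WebAdministration calls involved in that step (pool name, site name, port and path are placeholders):
Import-Module WebAdministration
New-WebAppPool -Name 'MyAppPool'
New-Website -Name 'MySite' -Port 80 -PhysicalPath 'D:\sites\MySite\current' -ApplicationPool 'MyAppPool'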
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; any deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MSBuild (a typical invocation is sketched after this list). Setting this up is an adventure, because you need to install the correct MSBuild and then the ASP.NET and MVC DLLs on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, MSBuild was randomly crashing, so I had to disable it.)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment; dev, uat, prod. (I tried using the web transformation stuff with little success). So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
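As referenced in the compile step above, a typical MSBuild invocation for such a build (solution name and configuration are placeholders):
msbuild .\MySolution.sln /t:Rebuild /p:Configuration=Release /m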
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send a HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show what revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to always ensure that UAT is always up to date with Prod. Still, it's a pain.
There's quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there's a few support batch files needed as part of the process. (These are also checked in).
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rSync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you could find the last backed-up version of every file in that folder.
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
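For illustration, an rsync invocation along those lines (paths, host and exclude patterns are placeholders): --exclude keeps source files such as .cs and project files off the server, and --backup/--backup-dir provides the "replaced files" safety net described above:
rsync -avz --delete --exclude='*.cs' --exclude='*.csproj' --exclude='*.sln' --backup --backup-dir=../backup ./site/ user@liveserver:/var/www/site/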
I'd recommend NOT just overwriting existing application files but instead create a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
