How to deploy dotnetcore/react -au Individual to Azure - SQLite

If you install the dotnetcore3 SDK and create the dotnetcore/react project, it compiles and runs fine. Modifications to use external identity providers are straightforward and work as documented. You will need to add packages for the providers you wish to support, such as Microsoft.AspNetCore.Authentication.MicrosoftAccount.
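For example, wiring up the Microsoft provider in Startup.ConfigureServices looks something like this (a sketch; the configuration key names follow the convention in the official docs, and the actual values come from your own Azure app registration):
services.AddAuthentication()
    .AddMicrosoftAccount(microsoftOptions =>
    {
        // Values come from the app registration in the Azure portal;
        // store them via user secrets in development (see Secrets below).
        microsoftOptions.ClientId = Configuration["Authentication:Microsoft:ClientId"];
        microsoftOptions.ClientSecret = Configuration["Authentication:Microsoft:ClientSecret"];
    });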
At this point you might try dotnet publish, but the resulting package produces the following (truncated) stack trace:
info: IdentityServer4.Startup[0]
      Starting IdentityServer4 version 3.0.0.0
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
      Application startup exception
System.InvalidOperationException: Key type not specified.
   at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.LoadKey()
   at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.Configure(ApiAuthorizationOptions options)
Service Worker
The template is set up with a service worker. This is a jolly nuisance while debugging our configuration, so turn it off by commenting out registerServiceWorker(); in ClientApp/src/index.js. If you've already run the app, you will need to flush your browser cache to dislodge the old worker.
Certificate
A certificate is required. The project template uses OIDC implemented with IdentityServer4, which requires a PFX. On Windows you can create one using CertReq. It would be poor security practice to commit the certificate to the project, so I placed the PFX file as a sibling of the project folder. The registration in appSettings.json looks like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "../cert-name.pfx",
"Password": "cert-password"
}
},
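Under the hood, the File key type amounts to loading the PFX with the framework's certificate class, roughly like this (a sketch, assuming the same path and password as above):
using System.Security.Cryptography.X509Certificates;

// Roughly what the "File" key type does with the settings above:
// load the PFX so IdentityServer can use it as a signing credential.
var signingCert = new X509Certificate2(
    "../cert-name.pfx",    // FilePath
    "cert-password");      // Password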
Secrets
dotnet user-secrets is strictly a development-mode mechanism. We are expected to manually transcribe all the secrets to Azure environment variables (app settings) and modify the program to include them in its configuration loading process.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddEnvironmentVariables();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
The names in dotnet user-secrets are full of colons. In environment variable names you'll need to escape these colons as double underscores for cross-platform compatibility.
Since dotnet user-secrets isn't exactly the most convenient tool ever, it occurs to me that it might be less bother to just use environment variables all the way through.
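To make the mapping concrete, here is the certificate password from the appSettings.json fragment above, first as a development secret and then as an Azure app setting (a sketch; both surface under the same configuration key):
// Development:  dotnet user-secrets set "IdentityServer:Key:Password" "cert-password"
// Azure:        add an app setting named IdentityServer__Key__Password
// Both arrive in IConfiguration under the same colon-separated key:
var certPassword = Configuration["IdentityServer:Key:Password"];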
Core version madness
Silly me trying to use the LTS version (3.1).
Creating a Classic CI pipeline from the Azure portal, it is impossible to select dotnet core 3.1 because it's not in the list. The list does contain LTS and Latest, but both of these selections produce validation errors when you try to finalise the deployment. Choosing 3.0 allows finalisation, and the deployment runs and manages to publish to the Web App on Azure; but the Web App is set to dotnet core 3.0, and since the project specifies 3.1 it won't start.
You can manually change this in the Web App Configuration blade in the Azure portal, but it just gets mangled on every deployment. Changing the project to use 3.0 and compatible packages seems to work.
Am I using the tools incorrectly, or is the Azure CICD set up really crap?
npm
And now it starts, but can't find 'npm'. Installing npm using SSH looks like this (the session is already root, so sudo is not involved):
curl -sL https://deb.nodesource.com/setup_13.x | bash
apt-get install -y nodejs
and this seems to work, but it doesn't survive a restart of the Web App (presumably because it installs outside /home, the only part of the filesystem that App Service persists across restarts).
Everything works without auth
If I deploy a project created with dotnet new react without the -au Individual qualifier, it works perfectly. The site loads, the web APIs are called, the data returns etc.
What's the difference? There are a couple.
IdentityServer4
SQLite
Generation of the SQLite database
Rummaging in the .csproj I find this
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.0.0">
and this is the first thing used in ConfigureServices
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));
but this doesn't trigger the exception. That occurs later when IdentityServer is created. Further up the stack trace we find this:
Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore
.MigrationsEndPointMiddleware.Invoke(HttpContext context)
from which I conclude that EF uses Node to do migration, at least for SQLite.
I don't think it would help to add npm to package.json because that would just bundle it for delivery to the browser. It appears that npm is required at the server for the migration process.
But Node and npm are simply not part of the dotnet core Web App stack.
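An aside: if the migrations endpoint is the sticking point, one way to sidestep it is to apply any pending migrations yourself when the app boots; a sketch, assuming the template's ApplicationDbContext:
// Sketch: run pending EF Core migrations at startup instead of relying on the
// development-time migrations endpoint middleware.
// In Startup.Configure (requires Microsoft.Extensions.DependencyInjection):
using (var scope = app.ApplicationServices.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
    db.Database.Migrate(); // creates/updates the SQLite file as needed
}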
One suggestion (via Reddit) is to use a Node stack Web App and deploy a self-contained build of the dotnet core server code. This is my next port of call. In the spirit of solving one problem at a time I shall first learn to do self-contained build deployment with a minimal Core/React project (no auth).
This almost works. Using SSH I was able to run the app, and it started without throwing any errors, but it listened on port 5000 rather than 8080, which is where it needs to be if you want it surfaced on port 80 on the public interface.
On the Node stack, the startup script is unsurprisingly configured to launch a Node app, and it barfs before it gets to the startup command you supply. Because it's a Node startup script, it also doesn't set up ASPNETCORE_URLS=http://*:$PORT, which is required to make the Core project serve on port 8080.

Taking a step back, npm is a development thing. Why would anyone deliberately introduce it as a production dependency? That would be crazy; it would create mayhem.
The key word in that question is "deliberately". What if it weren't deliberate? How could you do it accidentally? Well, you could write a script to gather all your environment variables and plonk them into Azure, and this might capture ASPNETCORE_ENVIRONMENT=Development.
Lo and behold, there it was. Deleting it restarts the app and HURRAH! No more demands for npm. Looks like the stack isn't broken after all, which makes me a happy camper since I didn't want to give up CICD.
This could also be defined in appsettings.json.
The important takeaway is that if you see demands for npm after deployment to Azure, your app is trying to run in development mode in a production environment.
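That diagnosis matches the template's Startup.Configure, which (approximately, from the 3.0 template) gates both the database error page and the npm-based React development server on the environment:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseDatabaseErrorPage(); // wires up the EF migrations endpoint seen in the stack trace
    }
    // ...

    app.UseSpa(spa =>
    {
        spa.Options.SourcePath = "ClientApp";
        if (env.IsDevelopment())
        {
            // This is what demands npm at runtime in Development mode.
            spa.UseReactDevelopmentServer(npmScript: "start");
        }
    });
}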

Related

Run integration test from IDE and cmd line fails with JHipster jwt secret empty

I'm running a microservice architecture and have a problem running some of the integration tests.
Running JHipster 5.0.2 on Mac against MySQL db.
LogsResourceIntTest is one example (generated by JHipster with no modifications).
The following code aborts with an NPE:
this.secretKey = encoder.encodeToString(jHipsterProperties.getSecurity().getAuthentication().getJwt()
    .getSecret().getBytes(StandardCharsets.UTF_8));
I have debugged it and the properties for the timeouts are set, but the token (secret) is empty.
Token is set in my /src/test/resource/application-test.yml file.
Running the test from the command line also aborts with an NPE. I am running the tests as follows:
./mvnw clean install -Dprofile=test
Any pointers on how to solve this problem?
There's no such "test" profile in JHipster, so it can't be "generated by JHipster with no modifications". Using a profile that does not exist, you get the properties from the default profile.
Properties for tests are read from src/test/resources/config/application.yml because of test classpath.
Found the problem in the yml file.
I had updated the file with some of my application's properties and made a format error.
I have used a "test" profile on other Spring Boot apps to help identify which files must be included for configuration. I reverted everything back to the default and fixed the error in the yml.
I will look for another way of identifying specific config files. The unit tests need some of the Spring beans, but the integration tests need all (or most) of them.
I used Maven Surefire and Failsafe to split them before. I'm integrating with Google Pub/Sub and don't need all of that configured when doing unit tests.
Thank you for the help.

Why do we need to deploy a meteor app instead of just starting it?

As we all know, we can run a meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP etc.?
I can configure it to use port 80 or use nginx to route to port 80 for the app. So the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single file, using meteor build bundle
the javascript is one file, and the css another; both are minified and obfuscated, so they're smaller and faster to load, and less easy to decipher on the client.
some packages are also meant to be removed when running in production. For example Meteor Toys, the utility toolset to look up collections and much more, is not bundled into the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes / updates etc... if you're not careful)
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: One other reason to do this is that you don't need all the Meteor build tools on your production server; that can add up to a lot of stuff, especially if you keep caches going for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least it used to on the free meteor site), which is more scalable and fault tolerant than running on the same instance as the web server, as well as provisioning storage etc... if needed.
basically, to deploy a Meteor app yourself manually, you need to:
on your dev box:
run meteor build to bundle your app into a tar file (using the architecture flag corresponding to the OS you will use)
on the server:
install node v0.10 (or whatever is the current version of node required by Meteor)
you might have to install fibers@1.0.5 (but I believe this is now part of the meteor install already)
untar the bundle, get into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder.
The purpose of deploying an application is to situate your project on hardware outside of your local machine. For example, if you deploy an application on Heroku, you create a repository on Heroku's systems and that codebase is used to serve your application off of their servers.
If you just start an application on your personal system, you will suffer a lack of network and resource availability, as well as under-use of computer time at non-peak hours, since your system will need to remain attentive for additional users without having alternative tasks. Hosting providers provide resources as needed, and their diverse client base allows their systems to work around the clock on a global scale.

Separate dev and prod Firebase environment

I am considering using Firebase as MBaaS, however I couldn't find any reliable solution to the following problem:
I would like to set up two separate Firebase environments, one for development and one for production, but I don't want to do a manual copy of features (e.g. remote configuration setup, notification rules, etc.) between the development and production environment.
Is there any tool or method I can rely on? Setting up remote configuration or notification rules from scratch can be a daunting task and too risky.
Any suggestions? Is there a better approach than having two separate environments?
Before you post another answer to the question which explains how to set up separate Firebase accounts: it is not the question, read it again. The question is: how to TRANSFER changes between separate dev and prod accounts or any better solution than manually copy between them.
If you are using firebase-tools there is a command, firebase use, which lets you set which project you are using for firebase deploy.
firebase use --add will bring up a list of your projects, select one and it will ask you for an alias. From there you can firebase use alias and firebase deploy will push to that project.
In my personal use, I have my-app and my-app-dev as projects in the Firebase console.
As everyone has pointed out - you need more than one project/database.
But to answer your question regarding the need to copy settings/data etc. from development to production: I had the exact same need. A few months into development and testing, I didn't want to manually copy the data.
My solution was to back up the data to a storage bucket, and then restore it from there into the other database. It's a pretty crude way to do it (I did a whole database backup/restore), but you might be able to look in that direction for a more controlled way. I haven't used it (it's very new), but this might be a solution: the NPM module firestore-export-import
Edit: Firestore backup/export/import info here Cloud Firestore Exporting and Importing Data
If you're using Firebase RTDB, and not Firestore - this documentation might help:
Firebase Automated Backups
You will need to set the permissions correctly to allow your production database access to the same storage bucket as your development.
Good luck.
I'm not currently using Firebase, but considering it like yourself. Looks like the way to go is to create a completely separate project on the console. There was a blogpost up recommending this on the old Firebase site, looks to be removed now though. https://web.archive.org/web/20160310115701/https://www.firebase.com/blog/2015-10-29-managing-development-environments.html
Also this discussion recommending same:
https://groups.google.com/forum/#!msg/firebase-talk/L7ajIJoHPcA/7dsNUTDlyRYJ
The way I did it:
I had 2 projects on Firebase: one for DEV, the other for PROD
Locally my app also had 2 branches: one named DEV, the other named PROD
In my DEV branch I always have the JSON file of the DEV Firebase project, and likewise for PROD
This way I am not required to maintain my JSONs.
You will need to manage different build types
Follow this
First, create a new project in the Firebase console and name it YOURAPPNAME-DEV
Click the "Add Android app" button and create a new app. Name it com.yourapp.debug, for example. A new google-services.json file will be downloaded automatically
Under your project's src directory create a new directory named "debug" and copy the new google-services.json file there
In your module level build.gradle add this
debug {
    applicationIdSuffix ".debug"
}
Now when you make a debug build, the google-services.json from the "debug" folder will be used, and when you build in release mode, the google-services.json in the module root directory will be used.
I'm updating this answer based on information I just found.
Step 1
In firebase.google.com, create your multiple environments (i.e.; dev, staging, prod)
mysite-dev
mysite-staging
mysite-prod
Step 2
a. Move to the directory you want to be your default (i.e. dev)
b. Run firebase deploy
c. Once deployed, run firebase use --add
d. An option will come up to select from the different projects you currently have.
Scroll to the project you want to add: mysite-staging, and select it.
e. You'll then be asked for an alias for that project. Enter staging.
Run items a-e again for prod and dev, so that each environment will have an alias
Know which environment you're in
Run firebase use
default (mysite-dev)
* dev (mysite-dev)
staging (mysite-staging)
prod (mysite-prod)
(one of the environments will have an asterisk to the left of it. That's the one you're currently in. It will also be highlighted in blue)
Switch between environments
Run firebase use staging or firebase use prod to move between them.
Once you're in the environment you want, run firebase deploy and your project will deploy there.
Here's a couple helpful links...
CLI Reference
Deploying to multiple environments
Hope this helps.
We chose to fire up instances of the new Firebase emulator on a local dev server for Test and UAT, leaving GCP out of the picture altogether. It's designed exactly for this use-case.
https://firebase.google.com/docs/emulator-suite
This blogpost describes a very simple approach with a debug and release build type.
In a nutshell:
Create a new App on Firebase for each build type using different application id suffix.
Configure your Android project with the latest JSON file.
Using applicationIdSuffix, change the Application Id to match the different Apps on Firebase depending on the build type.
=> see the blogpost for a detailed description.
If you want to use different build flavors, read this extensive blogpost from the official firebase blog. It contains a lot of valuable information.
Hope that helps!
To solve this for my situation I created three Firebase projects, each with the same Android project (i.e. same applicationId without using the applicationIdSuffix suggested by others). This resulted in three google-services.json files which I stored in my Continuous Integration (CI) server as custom environment variables. For each stage of the build (dev/staging/prod), I used the corresponding google-services.json file.
For the Firebase project associated with dev, in its Android project, I added the debug SHA certificate fingerprint. But for staging and prod I just have CI sign the APK.
Here is a stripped-down .gitlab-ci.yml that worked for this setup:
# This is a Gitlab Continuous Integration (CI) Pipeline definition
# Environment variables:
# - variables prefixed CI_ are Gitlab predefined environment variables (https://docs.gitlab.com/ee/ci/variables/predefined_variables.html)
# - variables prefixed GNDR_CI are Gitlab custom environment variables (https://docs.gitlab.com/ee/ci/variables/#creating-a-custom-environment-variable)
#
# We have three Firebase projects (dev, staging, prod) where the same package name is used across all of them but the
# debug signing certificate is only provided for the dev one (later if there are other developers, they can have their
# own Firebase project that's equivalent to the dev one). The staging and prod Firebase projects use real certificate
# signing so we don't need to enter a Debug signing certificate for them. We don't check the google-services.json into
# the repository. Instead it's provided at build time either on the developer's machine or by the Gitlab CI server
# which injects it via custom environment variables. That way the google-services.json can reside in the default
# location, the project's app directory. The .gitlab-ci.yml is configured to copy the dev, staging, and prod equivalents
# of the google-services.json file into that default location.
#
# References:
# https://firebase.googleblog.com/2016/08/organizing-your-firebase-enabled-android-app-builds.html
# https://stackoverflow.com/questions/57129588/how-to-setup-firebase-for-multi-stage-release

stages:
  - stg_build_dev
  - stg_build_staging
  - stg_build_prod

jb_build_dev:
  stage: stg_build_dev
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_DEV_FILE} app/google-services.json
    - ./gradlew :app:assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/

jb_build_staging:
  stage: stg_build_staging
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  dependencies: []
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_STAGING_FILE} app/google-services.json
    - ./gradlew :app:assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/

jb_build_prod:
  stage: stg_build_prod
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  dependencies: []
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_PROD_FILE} app/google-services.json
    # GNDR_CI_KEYSTORE_FILE_BASE64_ENCODED created on Mac via:
    # base64 --input ~/Desktop/gendr.keystore --output ~/Desktop/keystore_base64_encoded.txt
    # Then the contents of keystore_base64_encoded.txt were copied and pasted as a Gitlab custom environment variable
    # For more info see http://android.jlelse.eu/android-gitlab-ci-cd-sign-deploy-3ad66a8f24bf
    - cat ${GNDR_CI_KEYSTORE_FILE_BASE64_ENCODED} | base64 --decode > gendr.keystore
    - ./gradlew :app:assembleRelease
      -Pandroid.injected.signing.store.file=$(pwd)/gendr.keystore
      -Pandroid.injected.signing.store.password=${GNDR_CI_KEYSTORE_PASSWORD}
      -Pandroid.injected.signing.key.alias=${GNDR_CI_KEY_ALIAS}
      -Pandroid.injected.signing.key.password=${GNDR_CI_KEY_PASSWORD}
  artifacts:
    paths:
      - app/build/outputs/apk/
I'm happy with this solution because it doesn't rely on build.gradle tricks which I believe are too opaque and thus hard to maintain. For example, when I tried the approaches using applicationIdSuffix and different buildTypes I found that I couldn't get instrumented tests to run or even compile when I tried to switch build types using testBuildType. Android seemed to give special properties to the debug buildType which I couldn't inspect to understand.
By contrast, CI scripts are quite transparent and easy to maintain, in my experience. Indeed, the approach I've described worked: when I ran each of the APKs generated by CI on an emulator, the Firebase console's "Run your app to verify installation" step went from
Checking if the app has communicated with our servers. You may need to uninstall and reinstall your app.
to:
Congratulations, you've successfully added Firebase to your app!
for all three apps as I started them one by one in an emulator.
Firebase has a page on this which goes through how to set it up for dev and prod
https://firebase.google.com/docs/functions/config-env
Set environment configuration for your project
To store environment data, you can use the firebase functions:config:set command in the Firebase CLI. Each key can be namespaced using periods to group related configuration together. Keep in mind that only lowercase characters are accepted in keys; uppercase characters are not allowed. For instance, to store the Client ID and API key for "Some Service", you might run:
firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID"
Retrieve current environment configuration
To inspect what's currently stored in environment config for your project, you can use firebase functions:config:get. It will output JSON something like this:
{
  "someservice": {
    "key": "THE API KEY",
    "id": "THE CLIENT ID"
  }
}
Create the two projects, with Dev and Production environments, in Firebase.
Download the json file from there,
and set up the SDK as per https://firebase.google.com/docs/android/setup or, for Crashlytics, https://firebase.google.com/docs/crashlytics/get-started?platform=android
First, place the respective google_services.json for each buildType in the following locations:
app/src/debug/google_services.json
app/src/test/google_services.json
app/google_services.json
Note: the root app/google_services.json must be present; according to the build variant, the appropriate json content is copied into that root file.
Now, let's whip up some gradle tasks in your app's build.gradle to automate moving the appropriate google_services.json to app/google_services.json.
Copy this into the app's build.gradle file:
task switchToDebug(type: Copy) {
    description = 'Switches to DEBUG google-services.json'
    from "src/debug"
    include "google-services.json"
    into "."
}

task switchToRelease(type: Copy) {
    description = 'Switches to RELEASE google-services.json'
    from "src/release"
    include "google-services.json"
    into "."
}
Great, but having to manually run these tasks before you build your app is cumbersome. We want the appropriate copy task above to run sometime before :assembleDebug or :assembleRelease is run. Let's see what happens when :assembleRelease is run:
Zaks-MBP:my_awesome_application zak$ ./gradlew assembleRelease
Parallel execution is an incubating feature.
.... (other tasks)
:app:processReleaseGoogleServices
....
:app:assembleRelease
Notice the :app:processReleaseGoogleServices task. This task is responsible for processing the root google_services.json file. We want the correct google_services.json to be processed, so we must run our copy task immediately beforehand.
Add this to your app's build.gradle. Note the afterEvaluate enclosing:
afterEvaluate {
    processDebugGoogleServices.dependsOn switchToDebug
    processReleaseGoogleServices.dependsOn switchToRelease
}
Now, anytime :app:processReleaseGoogleServices is called, our newly defined :app:switchToRelease will be called beforehand. The same logic applies to the debug buildType. You can run :app:assembleRelease and the release version of google_services.json will be automatically copied to your app module's root folder.
The way we are doing it is by creating different json key files for different environments. We have used the service account feature as recommended by Google, and have one development file and another for production.

"meteor" vs "meteor bundle" for production

For production, why should I "bundle" the meteor application rather than just copy the sources onto the server and use the "meteor" command?
Basically, what is the difference between:
running "meteor bundle app.tar.gz", installing the right versions of fibers and nodejs, extracting the archive, and starting the app with "node main.js",
and copying the project sources onto the server and just typing "meteor" to start the app?
This won't be an exhaustive list, but here are some things that the meteor command does:
creates a local database
watches every dependent file in your app or in your packages
sends every file separately and unminified to the client (this is super inefficient unless you are developing locally)
In contrast, bundling an app:
does not create a local database
does not spend CPU watching your files for changes
creates two minified files (js and css) which is perfect for putting on a CDN or hosting from a reverse proxy. These are also efficient for clients to download and are highly cacheable.
In general, deploying shouldn't be a huge pain if you use a good set of scripts.
When using a bundle:
It will not spawn meteor-mongo (the MongoDB inside Meteor)
No hot reloads
Meteor will not watch your files.
You can leave/quit the server without killing your app.
You can manage node processes smoothly by using pm2 or other similar npm packages.
You can decide where to put your mongoDB and decide what port to use.
You can connect to your mongodb remotely by not having to run your meteor app.
While using a copy of the sources and running the meteor command in the project directory:
You can't leave/quit the server while keeping the project running without using a screen multiplexer (e.g. tmux)
You can only use meteor's assigned MongoDB, which is spawned at localhost:3001 if meteor is using port 3000.
You are letting meteor watch your files for changes, which uses CPU.
When your app dies, your db dies. :)

How do you deploy your ASP.NET applications to live servers?

I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested in the workflow happening between the time your Continuous Integration build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom-built .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (this task is performed only the first time an application is deployed; a sketch follows this list). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
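For reference, creating the App Pool and Virtual Directory via System.DirectoryServices is along these lines (a sketch against the classic IIS 6 metabase API; "MyAppPool", "MyApp", and the path are illustrative names, not the answerer's actual code):
using System.DirectoryServices;

// Create an application pool.
using (var appPools = new DirectoryEntry("IIS://localhost/W3SVC/AppPools"))
{
    DirectoryEntry pool = appPools.Children.Add("MyAppPool", "IIsApplicationPool");
    pool.CommitChanges();
}

// Create a virtual directory under the first web site and bind it to the pool.
using (var root = new DirectoryEntry("IIS://localhost/W3SVC/1/Root"))
{
    DirectoryEntry vdir = root.Children.Add("MyApp", "IIsWebVirtualDir");
    vdir.Properties["Path"].Value = @"C:\inetpub\MyApp";
    vdir.Invoke("AppCreate", true);                    // mark it as an application
    vdir.Properties["AppPoolId"].Value = "MyAppPool";  // assign the pool
    vdir.CommitChanges();
}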
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.Config and other files), and that's it.
Of course for the first run you need to connect to the server and set up the IIS website and database, but after that publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET: zip it up, SFTP it to the server, extract into the right location. For the first deployment, manual set-up of IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a capistrano-like deployment solution I wrote for .net applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .net applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another scm) is the default way to get the application onto the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; any deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MsBuild. Setting this up is an adventure, because you need to install the correct MsBuild and then the ASP.NET and MVC dll's on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, msbuild was randomly crashing, so I had to disable it)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment; dev, uat, prod. (I tried using the web transformation stuff with little success). So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send a HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show which revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to always ensure that UAT is always up to date with Prod. Still, it's a pain.
There's quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there's a few support batch files needed as part of the process. (These are also checked in).
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rSync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you could find the last backed-up version of every file in that folder.
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files, but instead creating a directory per version and repointing the IIS application to the new path (a sketch follows below).
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
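A sketch of the repointing step using the Microsoft.Web.Administration API (the site name and version path are illustrative):
using Microsoft.Web.Administration;

// Repoint the site's root application at the freshly deployed version folder.
using (var manager = new ServerManager())
{
    Application app = manager.Sites["MySite"].Applications["/"];
    app.VirtualDirectories["/"].PhysicalPath = @"C:\deployments\MySite\v2.3.1";
    manager.CommitChanges(); // IIS swaps to the new appdomain on the next request
}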
