Handling TFS branches and multiple folders for a large ASP.NET website solution

We're running TFS 2015 and VS.NET 2015 for a large solution with an ASP.NET web app as the main project and several class library projects.
I'd like our team to start using branches, but the fact that branches live in separate folders is causing all sorts of configuration issues.
Once a branch has been created, the entire folder structure, web.config values, project references, reference paths, etc. are all different, because the solution is being opened from a different folder than the main branch.
We use IIS virtual directories, and those don't work either because of the branch's new folder.
If I go ahead and make all of these manual changes so our solution works from the new branch folder, then every time we do a forward integration from main to the branch, all of this configuration of course gets overwritten, and every developer on the team would need to redo it.
Surely there's a better method to handle branches for larger solutions that have a high level of configuration and customization. Is there a way to keep a single physical folder and just specify which branch you want to work on?

Don't use long-term branches. After moving from long-term branches to a single main branch for all our teams, I would never go back. The merges were always terrible, even for seemingly simple changes.
We now use Release Readiness analysis to allow multiple developers to work in parallel on different features. Check it out -
https://dotnetcatch.com/2016/02/16/are-you-release-ready/

Related

How to keep different versions of my WSDLs in different git branches of my workflow

I have a project (web), that interacts heavily with a service layer. The client has different staging servers for the deployment of the project and the service layer, like this:
Servers from A0..A9 for development,
Servers from B0..B9 for data migration tests,
Servers from C0..C9 for integration test,
Servers from D0..D9 for QA,
Servers from E0..E9 for production
The WSDLs I'm consuming on the website to interact with the service layer change from one group of servers to the other.
How can I keep different versions of the WSDLs in the different branches using a git workflow with three branches (master, dev, qa)?
As you explained in your comment, the branches will be merged, and the WSDL files will conflict. When that happens, you have to resolve the conflict by keeping the right version of the WSDL file.
For example, if you are on qa, merging from dev, and there is a conflict on a WSDL file, you can resolve it with:
git checkout HEAD file.wsdl
This will restore the WSDL file as it was before the merge, and you can commit the merge.
However, if there are changes in the WSDL file but there are no conflicts, then git merge will automatically merge them. If that's not what you want, and you really want to preserve the file without merging, then you could merge like this:
git merge dev --no-commit --no-ff
git checkout HEAD file.wsdl
git commit
UPDATE
To make this easier, see the first answer to this other question:
Git: ignore some files during a merge (keep some files restricted to one branch)
It offers three different solutions; make sure to consider all of them.
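One approach along those lines (not necessarily one of the three in the linked answer) is a custom "ours" merge driver that always keeps the current branch's copy of the file. A minimal PowerShell sketch, assuming the file is called file.wsdl in the repository root (this is local configuration, so each clone needs it once):

# Tell git to use the "ours" merge driver for the WSDL.
Add-Content -Path .gitattributes -Value "file.wsdl merge=ours"

# Define the driver: "true" simply succeeds, keeping the current branch's version.
git config merge.ours.name "Keep our version during merges"
git config merge.ours.driver "true"

After that, merging dev into qa leaves qa's copy of file.wsdl untouched even when both branches changed it, without the --no-commit steps shown above.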
I think branching is the wrong tool to use here. Branches are highly useful for (more or less) independent development that needs to take place in parallel and in isolation. Your use case of multiple deployment environments doesn't appear to fit that model. It also doesn't scale very well if you need multiple branches for, say, releases, and always have to create and maintain the deployment-specific child branches.
You would probably be better served by a single branch that defines multiple configurations (either for deployment or building, depending on what consumes the WSDL file).
This assumes that the WSDL file is the only difference between the branches, which is the impression I get from the question.
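As an illustration of that single-branch idea, a rough sketch of a selection script in PowerShell (the folder layout, environment names, and WSDL file name here are assumptions, not taken from the question):

param(
    # Hypothetical environment names matching the A/B/C/D/E server groups.
    [ValidateSet("dev", "migration", "integration", "qa", "prod")]
    [string]$Environment = "dev"
)

# Copy the environment-specific WSDL over the copy the project actually consumes.
Copy-Item -Path "config\$Environment\ServiceLayer.wsdl" `
          -Destination "WebApp\ServiceLayer.wsdl" -Force

The build or deployment step then runs this once per target environment instead of maintaining a branch per environment.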

Updating a single .NET assembly in a production environment

Does anyone know whether it is 100% safe to replace (copy + paste) an assembly with an updated version of itself, where all the version information (AssemblyInfo.vb) is exactly the same and the only difference is a minor code change in one of the aspx.vb files?
It is safe if you are sure you didn't break your existing code (e.g., a method that no longer exists, ...).
If you manually load assemblies, make sure to update versions and tokens.
If you are not sure, you can duplicate your website into another folder, create a test IIS instance, and test the deployment of single files there.
Keep in mind that a clean deployment is safer than single-file deployments, which can introduce breaking changes if you are not extremely careful. This may not be a problem on test instances, but it should never be done on Live.
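A rough PowerShell sketch of that test-instance suggestion (site names, paths, and the port are hypothetical; requires IIS with the WebAdministration module and an elevated prompt):

Import-Module WebAdministration

# Duplicate the live site into a separate folder.
Copy-Item "C:\inetpub\wwwroot\MyApp" "C:\inetpub\test\MyApp" -Recurse -Force

# Stand up a throwaway IIS site pointing at the copy.
New-Website -Name "MyApp-Test" -PhysicalPath "C:\inetpub\test\MyApp" -Port 8081

# Drop the single updated assembly into the test copy and exercise the site there.
Copy-Item "C:\drops\MyApp.Web.dll" "C:\inetpub\test\MyApp\bin\" -Force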
It is definitely possible and easily done, but it can become a habit with a negative impact once the production system grows very large, possibly unstable, and has a large number of users.
There are a couple of approaches I would suggest trying or being mindful of:
Create an installer, or multiple installers depending on the projects in your solution. These installers produce the executable(s) to install on the production server; this is done manually. Back up the previous installers for rollback.
You can create a ClickOnce application. This can be run manually and can also be scheduled using a stable scheduling app.
One can also make use of Visual Studio's publishing function. After compiling the code in Release mode, it gets published via file/network/FTP to the production space. This replaces all the markup files and assemblies.
To automate the process you can use a TFS build server and schedule daily or weekly builds. These installations can be made manually, or one can use SMS (Microsoft Systems Management Server) to schedule timely installations.
One can use Windows PowerShell as well; a rough sketch follows at the end of this answer.
TFS Build Server and SMS carry cost implications, of course, but that is a small price to pay if problems in a production environment can bring a company down.
There are a couple of ways to do this. See what might work, and get into good habits when it comes to a production environment.
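To give an idea of the Windows PowerShell option mentioned above, a minimal clean-deployment sketch (all paths are hypothetical, and a real script would also stop the site, handle errors and log what it did):

$source = "C:\Builds\MyApp\Release"       # published output from the build
$site   = "C:\inetpub\wwwroot\MyApp"      # live site folder
$backup = "C:\Backups\MyApp\" + (Get-Date -Format "yyyyMMdd-HHmmss")

# Keep a copy of the current site so the deployment can be rolled back.
Copy-Item $site $backup -Recurse

# Mirror the new build over the live folder - a clean deployment rather than single files.
# This assumes the correct web.config ships as part of the build output.
robocopy $source $site /MIR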

Best way to deploy multiple instances of the same ASP.NET web application

Any creative minds out there want to help me think outside the box?
We have an ASP.NET web application which we need to deploy to say 100 machines. The web application is the same except for configuration files (web.config) and a few other files (scripts, mostly).
Our goals are:
To be able to upgrade the web application efficiently by sending out incremental changes which could also easily be reverted (on a per project or per file basis) in case of a problem.
Efficient support for binary files (as this web application contains compiled *.dlls)
Not require any compilation on the target machines.
Nice to haves would be:
Easy to remotely initiate an update
Opportunity to execute certain scripts (ex: database migrations) automatically post-update
We currently have scripts which commit a compiled version of each separate instance of the application to source control and copy the appropriate configuration files into each folder. Each instance checks out (and later updates) a specific folder from source control.
This process is good enough for 100 instances, but would not scale to 1000 instances as it is storing clones of all the binary files with every release (one per instance). I'm looking for better ways of doing this.
The most obvious answer appears to be keeping the same process but having only one folder containing the files (one folder for all instances), with scripts that override the appropriate files after an update on the target machines.
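To make that concrete, the "one shared folder plus per-instance overrides" idea might look roughly like this in PowerShell (the machine list, share names, and folder layout are all hypothetical):

$release   = "\\buildserver\drops\WebApp\1.2.3"   # single shared copy of the binaries
$instances = Get-Content ".\instances.txt"        # one target machine name per line

foreach ($machine in $instances) {
    $target = "\\$machine\c$\inetpub\wwwroot\WebApp"

    # Push the shared release to the instance; unchanged files are skipped.
    robocopy $release $target /E

    # Overlay the instance-specific web.config and scripts kept per machine.
    Copy-Item ".\overrides\$machine\*" $target -Recurse -Force
}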
What's a better way of accomplishing this?

Step-By-Step ASP.NET Automated Build/Deploy

Seems like there are so many different ways of automating one's build/deployment that it becomes difficult to parse through all the different scenarios that people support in tutorials on the web. So I wanted to present the question to the stackoverflow crowd ... what would be the best way to set up an automated build and deployment system using the following configuration:
Visual Studio 2008
Web Application Project
CruiseControl.NET
One of the first things I tried was to have CCnet automatically zip the output and copy it to the server, but then that requires manual work to unzip at the destination. However, if we try to copy all the files individually, then it could potentially take a long time if it's a large application (build server lives outside of the datacenter in our office ... I know).
Also of particular interest is how we would support multiple environments as we have dev, qa, uat, and then of course prod.
MSDeploy seems really interesting, but unless I'm interpreting the literature incorrectly, doesn't help in the scenario of deploying from the output of a build server. If anything, it seems like it'll be useful in deploying one build across a build farm ... but even for deploying from one environment to another, one would have to manually change config settings and web service URLs, etc.
I recently spent a few days working on automating deployments at my company.
We use a combination of CruiseControl, NAnt, MSBuild to generate a release version of the app. Then a separate script uses MSDeploy and XCopy to backup the live site and transfer the new files over.
Our solution is briefly described in an answer to this question Automate Deployment for Web Applications?
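For reference, the backup-and-sync step might look roughly like this (a sketch only: the msdeploy path, server name, and site paths are assumptions, not the exact script described above):

$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
$stamp    = Get-Date -Format "yyyyMMdd-HHmmss"

# Back up the current live content before deploying.
robocopy "\\webserver\c$\inetpub\wwwroot\MySite" "\\fileshare\backups\MySite\$stamp" /E

# Sync the freshly built output onto the live server (requires the Web Deploy agent there).
& $msdeploy "-verb:sync" `
    "-source:contentPath=C:\Builds\MySite\Release" `
    "-dest:contentPath=C:\inetpub\wwwroot\MySite,computerName=webserver"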
You might be interested in MSDeploy. Here's a Scott Hanselman post on this. It's only available as a technical preview at the moment (September 2008) but is worth evaluating against your requirements.
There is another new build tool (a very intelligent wrapper) called NUBuild. It's lightweight, open source, and extremely easy to set up, and it provides almost no-touch maintenance. I really like this new tool and we have made it the standard tool for the continuous build and integration process of our projects (we have about 400 projects across 75 developers). Try it out.
http://nubuild.codeplex.com/
Easy to use command line interface
Ability to target all .NET framework versions, i.e. 1.1, 2.0, 3.0 and 3.5
Supports XML based configuration
Supports both project and file references
Automatically generates the “complete ordered build list” for a given project – no-touch maintenance
Ability to detect and display circular dependencies
Performs parallel builds – automatically decides which of the projects in the generated build list can be built independently
Ability to handle proxy assemblies
Provides visual clues to the build process, e.g. showing “% completed”, “current status”, etc.
Generates a detailed execution log in both XML and text format
Easily integrated with the CruiseControl.NET continuous integration system
Can use a custom logger like XMLLogger when targeting .NET 2.0 and above
Ability to parse error logs
Ability to deploy built assemblies to a user-specified location
Ability to synchronize source code with the source-control system
Version management capability
Do you have the ability to run commands remotely? The PsExec utility from Sysinternals would let you run a command-line unzip program on the remote machine. If you have a script that copies the build as a .zip file to the remote site, you would just need one more line for the PsExec call to unzip the files.
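That might look something like this (the server name, paths, and the unzip tool on the remote machine are all assumptions):

$server = "web01"

# Make sure the drop folder exists on the remote machine, then copy the zip there.
New-Item -ItemType Directory -Path "\\$server\c$\drops" -Force | Out-Null
Copy-Item "C:\ccnet\artifacts\site.zip" "\\$server\c$\drops\site.zip" -Force

# Unzip it in place; assumes an Info-ZIP style unzip.exe on the remote machine's PATH.
psexec "\\$server" -accepteula unzip.exe -o "C:\drops\site.zip" -d "C:\inetpub\wwwroot\MySite"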
I had a related question about getting a deployable set of files from an automated build. I found Web Deployment Projects (links and all in the old question) did what I needed - they're a VS and MSBuild add-on.
This is a common problem (and I wish I had read it sooner) for all development, not just ASP.NET. Being one of its developers, my team naturally uses BuildMaster internally for the entire release process, and for most scenarios it's free. Within the tool, we are able to perform all the standard CI builds to create artifacts and then set up an automation process to deploy these artifacts to any one of the 40+ servers we have internally or externally hosted, depending on the specific application or environment.
Since you specifically mentioned deployment to different testing environments, this is a fundamental aspect of the tool. The idea is to model the environment workflow (e.g. Integration -> QA -> Production) you already have in place and essentially promote a build all the way from source control to production. Most times, it's as simple as adding a deployment action that deploys an artifact to the environment, other times it can be much more complex.
You also casually mentioned configuration file changes are part of deployment, which is another built-in component to BuildMaster. The idea we had was to use the tool itself as the central hub for all configuration files and deployments, thus ensuring the latest changes are applied automatically with a simple "deploy configuration files" action in your deployment plan.
One thing you didn't mention with regard to this process is the database deployment aspect. Most ASP.NET applications require an associated database, otherwise they could just be static HTML files. It is crucial that the database schema gets updated to the appropriate database version with every deployment. There is, not surprisingly, a module within BuildMaster that handles this for you as well. The idea is to store DDL-DML scripts within the tool itself, and by executing scripts only once per environment, it ensures that all of your databases across each environment are up-to-date as your builds are deployed through them. Other scripts (e.g. stored procedures, views, triggers, etc.) are essentially code files and therefore belong in source control. These DROP-CREATE-CONFIGURE type scripts can be run each and every time in most cases with a simple deployment action.
Another piece of the deployment puzzle that most developers do not think about is process automation. Many developers need to perform sign-offs or fill out change request forms in order to manually perform these processes. Again, this is all available as part of the automated workflow setup within BuildMaster. You can setup blockers that do not allow promotion to say the QA environment unless all unit tests have passed, or block promotion to the Staging environment unless someone from the QA team approves the build and all issues in your issue tracking tool are resolved/closed for that particular release.
While I realize I left CC.NET out of the answer, our applications are all built and deployed through BuildMaster, so we no longer need it, though we could just as easily pick up the artifacts from a drop location and deploy them in later environments.
I see that many people use CC.NET for their .NET projects, but why not use Jenkins and SonarQube? They have everything you need. I set all of this up in 3 days: a Windows Server 2008 R2 box with MSSQL, Jenkins, VisualSVN and SonarQube.
It all works great and you get all the metrics on your project. SonarQube uses Gallio, Gendarme, FxCop, StyleCop, NDepths and PartCover to get your metrics, and all of this is pretty straightforward since SonarQube does it automatically without much configuration.
I posted some pictures so you can get a feel for it. Here is Jenkins, which builds and gathers Sonar metrics, plus another job for deploying automatically to IIS.
And SonarQube, showing all the metrics for my project. This is a simple MVC4 app, but it works great!
If you want more information I can be more specific, but I think you should at least consider Jenkins. If CC.NET suits you better, at least you will have looked at a good alternative before you chose.
This whole setup uses MSBuild to build and deploy the apps.

Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in local directory, including necessary DLLs needed for project.
Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports associated with the project (and put them into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm only going to copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this in the earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note; I presume it would remove some of the steps necessary in this build (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First, get your code compiling in one step using an automated build tool (e.g., NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
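For the MSBuild route, that first one-step build can be as small as this PowerShell snippet (the solution name and configuration are hypothetical, and msbuild is assumed to be on the PATH):

# Build the whole solution in one step.
msbuild .\MyApp.sln /t:Rebuild /p:Configuration=Release /m

# Surface failures to whatever triggered the script (CI server or Scheduled Task).
if ($LASTEXITCODE -ne 0) { throw "Build failed with exit code $LASTEXITCODE" }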
Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, after which you might have to pay for it depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, and so on.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple, it is mostly handled by a call to msbuild, and also it creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - this is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as part of packaging).
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
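The sanity check can be as simple as comparing the machine name to the one the package was built for (the name here is hypothetical):

# Expected target machine, stamped into the package at packaging time.
$expectedMachine = "PRODWEB01"

if ($env:COMPUTERNAME -ne $expectedMachine) {
    throw "Package was built for $expectedMachine but this is $($env:COMPUTERNAME); aborting."
}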
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can create a new class in .NET deriving from Task that overrides the Execute function, and then reference this from your build XML file.
There is a pretty good introduction here:
introduction
I've only worked on a couple of .NET projects (I've done mostly Java) but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down to individual steps because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches so consider that also with whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets the latest of the "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or, when they do, only checking it in on a random basis. If you make the build process "get" the script from source control, it will force you to keep the latest and greatest build script in there.
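The bootstrap script can be tiny. A sketch assuming a TFS workspace (with VSS or SVN the "get latest" command would differ; paths are hypothetical):

# Run from a folder mapped in the workspace: pull the latest build scripts, then hand off.
tf get '$/MyProject/BuildScripts' /recursive /noprompt
& "C:\src\MyProject\BuildScripts\build.ps1"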
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
