I am currently trying to set up our complete development process (from dev to production).
We will be using Microsoft Team Foundation Server, and I was wondering if there is a way to control which versions of which programs go into a build.
Let's say we are 20 programmers working on the same project and we only want to deploy changes done by one or two programmers. Is there a way to do that?
I was thinking about using continuous integration for our dev/QA server and then deploying whatever is ready and fully tested to our production servers.
Thanks for your help!
we only want to deploy changes done by one or two programmers. Is there a way to do that?
This leads to a much larger discussion of your branching and merging strategy. Basic answer: suggest you have those developers develop in their own dev branch and publish from there when ready. Keep other devs out of that branch, either by convention or by setting up security measures.
Their branch can be merged with the other developers at some point. That could be straightforward, or a fun time for someone managing/resolving merge conflicts.
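If you are using TFS version control, a minimal sketch of that setup might look like the following (the branch paths and group name here are hypothetical):

    tf branch $/MyProject/Main $/MyProject/Dev-TeamA
    tf checkin /comment:"Create isolated dev branch for Team A"
    tf permission /deny:PendChange,Checkin /group:"[MyProject]\Contributors" $/MyProject/Dev-TeamA

The tf permission line is the "security measures" option: it denies pending changes and check-ins on that branch for everyone in the named group.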
Re: the comment
Are there any other ways of managing the development cycle of an ASP.NET application? It is very important that I can deploy what I want, when I want.
Yes, you can absolutely pick and choose which features you want in a release/branch/build. Suggest looking into creating branches for the different 'streams' of development and merging from those 'dev' branches into a 'main' branch. You can have many concurrent branches merging into one branch, culminating in the mix of features you want.
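As a rough illustration (the server paths and changeset numbers are made up), you can even merge just a range of changesets from a dev branch into main:

    tf merge /version:C1234~C1240 $/MyProject/Dev-TeamA $/MyProject/Main /recursive
    tf checkin /comment:"Merge Team A's finished feature into Main"

Merging whole branches is generally less painful than cherry-picking changesets like this, but both are supported.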
Related
My team uses salt-stack to manage our fleet of servers. Currently we have a single master, and two GitHub repositories for our SLS logic and Pillar data. The issue is when a team member needs to do work they have to "checkout" the salt master so that they can work uninterrupted on their branch. My question is, what are some better ways to solve this problem so that we can have multiple team members working on multiple branches without having to checkout the master?
One thing I've thought of is setting up multi-master so that we may have multiple "development" salt masters and then a true salt master that we execute automation from.
Any thoughts or feedback on the setup are welcome!
One thing to do is separate the dev, test, and prod environments completely, as you are already considering; that at least insulates prod from any testing accidents. Next, implement localized development systems, such as kitchen-salt or local VMs running mini versions of your dev environment. This speeds up development, since people stop relying on the centralized dev system, which should be reserved for more mature code and used for integration testing rather than day-to-day development. If possible, get a CI/CD system that can spin up tests against PRs to the Git repos, so that you are testing the changes before you test the merge. Basically: local dev -> PR -> dev -> test -> prod, with testing at each step.
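To sketch the local-development piece: with kitchen-salt (built on Test Kitchen), each developer can iterate against a disposable local minion before opening a PR, roughly like this (assuming a .kitchen.yml already describes the instance):

    kitchen converge   # apply your states to a throwaway instance
    kitchen verify     # run the test suite against it
    kitchen destroy    # tear the instance down when done

That keeps in-progress branches off the shared master entirely; only reviewed, merged code ever reaches it.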
We're a little aghast at how time-consuming it is to develop syntactically correct ARM templates from scratch.
The Portal helps, but it pushes out templates that are not development-ready (it's pretty hard to find a bug when all the templates use 'name' for the resource name, versus something more verbose like 'microsoftStorageAccountResourceName', 'microsoftStorageAccountResourceLocation', 'microsoftStorageAccountResourceTags', etc.).
We understand that there are many ways to deploy, but if at all possible we'd like some assurance that ARM is the current preferred way and will continue to be the preferred primary means of scripting deployments via VSTS. Or is it sliding towards a different, maybe more programmatic, approach (e.g. PowerShell, CLI, other)?
We're asking because it looks like we will have to invest significant effort to create a resource library for this organisation (to reduce the need for every project to become proficient at ARM deployment), and we would prefer to do it using the approach that developers will favour over the coming years, for maintainability reasons.
Thanks for any insight on which approach to recommend as the best investment.
Templates are going to be around for the foreseeable future... it really depends on whether you want to orchestrate the deployment yourself (imperative deployments using CLI, PS, or an SDK) or you want ARM to orchestrate the deployment (via templates). Happy to chat offline if you want to discuss more - email bmoore at microsoft.
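To make that distinction concrete, a rough sketch of the two styles using the Azure CLI (the resource group, account name, and template file here are made up):

    # declarative: ARM orchestrates the whole deployment from a template
    az group deployment create --resource-group my-rg --template-file azuredeploy.json

    # imperative: you orchestrate each resource yourself, call by call
    az storage account create --name mystorageacct --resource-group my-rg --sku Standard_LRS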
Writing this now, one year after the original post: the answer to 'Are ARM Templates still the preferred deployment mechanism?' probably depends on who you ask. "Preferred" by Microsoft according to their product strategy may mean something different from preferred by actual users who, well, feel the pain of vendor strategy decisions. I had started with an Azure automation book that used PS scripting only; I was then led (maybe misled?) to the ARM Template deployment model, mainly by the Microsoft web documentation, but found that those templates need so much rework that writing a PS script, or even writing an ARM Template from scratch, seems to be a more efficient way to go. In fact, I am confused at the moment about what the 'Best Practice' is, i.e. what method other developers actually use. Is there a community-established opinion on this matter, now in August 2019? Or is it all VSTS / 3rd-party IDEs nowadays?
What is the best branching and merging strategy for a small development team making a lot of concurrent development on the same project?
We need to be able to easily apply a hotfix to a production release while other developers are working.
I was looking at the TFS Branching Guidance from CodePlex and can't seem to figure out which option is best for us.
Thanks
Without knowing your organization or how your team develops, it is hard (maybe impossible) to make a recommendation.
In our organization, the majority of our development is organized around releases, so we did a "branch on release" approach. That works great for us. We also do bug fixes, so we've implemented a "branch on feature" approach off of the production line for bug-fixes.
If you have different people all working on different features that might make it to production at different times, a "branch on feature" approach might work.
If you are all working on the same development line, a single "development" branch might work for you.
It took us months to finalize our branching strategy (for 14+ team projects, around 80 developers, and multiple applications). I don't expect that it will take quite as long for a smaller organization, but definitely spend some quality time thinking about this, and consider bringing in some outside expertise to give you guidance.
In addition to Robaticus's answer, you need to figure out why you want to do parallel development.
In my opinion, it is all about what you want to isolate.
If you do parallel development because multiple features are being developed at once, it depends on whether the features need to be released in the same package (version / release / give it a name) and whether you want to keep the different features from integrating with each other during development. The latter is important when the features interfere with each other and you only want to spend time on integration when you merge them.
If you need isolation for any of these reasons, you have multiple branches within one release, which you merge before you do the integration test.
If you do parallel development because you want to support multiple releases, then you need to know how many releases you have to support (how many releases rolled out in production, how many in pre-production, etc.). In that case, the advice is to have a branch per release that you need to support.
It sounds like you need such a branching strategy for your organization.
It also depends on the type of files you are branching. If you have SSIS or SSRS files (or any other XML-based files), binary files, or any files that are not plain text (in contrast to C#), then merging those files from two different branches is not easy. You will need some manual involvement to actually merge them!
So as Robaticus already said, we need more information on your specific scenario to give more detailed guidance.
95% of my time I program ASP.NET (MVC) web sites.
Should I care about MSBuild?
We use MSBuild with CruiseControl.NET to manage the builds of most of our big ASP.NET projects. A build is launched for every commit by a member of the team. It helps us quickly detect incompatibilities before moving a feature to "staging" or "production".
I think it is really useful when working with a team on the same ASP.NET project, or if you are working alone on a big project.
That depends on your development environment.
If you have other folks that do deployment of your systems, and they take care of the build and deployment environment, then MSBuild probably won't be necessary for your work.
On the other hand, if you need to configure the build script to understand special situations that your code comes up with, then you will definitely need to understand MSBuild scripts.
Even for a one-man shop, it's a useful tool to know, especially if you are configuring a continuous integration server like Hudson.
No. Until you have to.
It's not absolutely necessary to know MSBuild, but it is useful to know.
It might not be needed for every kind of project, but it is extremely useful when you are working on a huge code base with an automated custom build solution, nightly builds, developer builds, and so on.
It's unlikely, unless you choose to use it, or you start to make use of Team Foundation Server's Team Build.
Your development processes need to reach a certain complexity before automated builds deliver their true value, and/or before you find a need for automatic deployment (including database changes, if applicable).
The coming Visual Studio 2010 is going to make it far easier to use, but for now it retains a fairly steep learning curve which you can avoid by using alternatives, or commercial products (e.g. Visual Build Pro, Final Builder etc).
The nice thing is that it is part of the .Net framework, so it's already available as long as you have the framework installed (which it probably is).
So, in short, not really. It's very useful and powerful, though, and setting up deployments using MSBuild can be well worth it.
What should a developer know about MSBuild?
Every developer should know it exists and know its basic capabilities. If you know it exists, you won't duplicate its features, and you'll know what it can do for you when you need it.
Minimum:
As an exercise, build your project through the command line: msbuild myproj.sln
Know the role of continuous integration
A little more than minimum:
Hack your csproj (or vbproj) with a Message task so that it outputs something during clean; a sketch follows below.
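A minimal version of that exercise, relying on the AfterClean hook that the common targets leave empty for exactly this purpose (the message text is arbitrary), might be:

    <Target Name="AfterClean">
      <!-- Prints a line at the end of every Clean -->
      <Message Text="Clean finished for $(MSBuildProjectName)" Importance="high" />
    </Target>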
All done. When you need to know more, you'll figure it out.
As recently as several years ago, the developers actually made the builds that went to clients. This was obviously a disaster for reasons too numerous to list.
Then when we started to learn the errors of our ways, we looked for a way to auto-build the entire application on a dedicated build machine. The culture at that time was very averse to bringing in outside tools, so we built our own autobuild system by writing a VB app.
This worked fine for a while, until the project's structure started to change, new projects were added, and we needed to build the application in different ways. Then the weaknesses of our hand-rolled autobuilder became apparent and, over time, increasingly onerous. This disease has progressed to the point where QA (who owns our build process) can't even maintain the autobuilder, because it requires more and more programming skill. Every time we add a project or change something in an existing project, it consumes more developer time just to make it work. There have been days when we were unable to produce a build because the system was broken.
I'm now in a position where I can change this process, and I'm looking to scrap the entire system and put something else in its place. My goals are:
Have an autobuild system that can run with zero human interaction at a specific time every day. It should be able to gather all the source code, compile all the apps, create the setups, put the finished products on a network share, and possibly trigger the automated testing system to kick in (we use QTP).
The autobuild system should be flexible enough to easily adapt to changes in the project without requiring a major overhaul.
It should be simple enough so that QA can own the system and not require developer resources to make changes to how builds are made.
What are your experiences? Can you recommend an autobuild system? Should I have different goals?
I'm currently using CruiseControl integrated with Ant to control project builds. This allows flexibility of build schedules and means you can automate the entire build process fairly easily using Ant scripts. Also, during defect fixing periods you can have CruiseControl set up to watch for source control submissions instead of time periods and build when these occur. This allows developers very quick feedback on defect fixes.
I use FinalBuilder and FinalBuilder Server for nightly builds. It's a bit buggy at times, but if you think it through it's quite easy to create extensible projects that can build a given project type, build its database from change scripts, and deploy it to a testing server.
It can also handle all kinds of weird and wonderful things, like zipping a nightly build and uploading it to an FTP server or creating ISO images automatically.
Definitely look into MSBuild if you're on the Microsoft stack.
Joel is always going on and on about how great FinalBuilder is, so that might be worth a look as well.
We just migrated from a hand-rolled set of Perl scripts to a Buildbot setup. I found it because that's what Google's using for Chrome.
You can do nightlies, or it can integrate with source control to do an isolated test build whenever anybody does a checkin, or a variety of other things. It's also parallel; you can have more than one machine in the build farm, either for specialized duties or just to handle more load.
The entire system is written in Python, so it's platform-agnostic, which is important if you need to do builds on more than one platform. It can do anything you can do from the command line; we have it calling MSBuild for user-mode components, a DDK build for kernel-mode pieces, and running products for unit test builds.
Out of the box it supports most OSS source control tools, but if you're using TFS or something else you may need to modify the package that you install on the slave machines.
I think you are on the right track here.
Whoever looks after your automated build process needs to have a fundamental understanding of how your solution fits together. This doesn't necessarily mean knowing how to write code or architect solutions, but they will require a solid understanding of how the solution compiles, packages itself etc.
You might need to share responsibility for builds between people or teams to accomplish this. I'd say that a daily build is a "team responsibility".
I'd look at establishing a baseline build configuration which can be extended for "special use" builds (beyond just building a release version), e.g. internationalized releases, FxCop/quality-tools configurations, build + run unit tests, continuous integration builds, a build config to run on developer workstations, etc.
Whichever tooling you land on, I'd aim to achieve the following:
Automatic versioning, signing, etc.
Ability to produce verbose output (logging) to help debug build breaks (see the sketch after this list)
On that point: it should handle errors properly, capture as much information as possible, and log it properly
Consistency - It should work the same way each time to produce repeatable outcomes
Run in a clean, limited access environment
Well commented/documented so that it can be understood by new staff, etc.
Option to generate release notes, compile metrics, produce reports (if this option is available)
Ability to deploy to multiple environments
Support different ways to obtain source code from source control, e.g. by changeset, label, date, etc
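On the logging point, a sketch of what this can look like when builds run through MSBuild (the project and log file names are made up): the file logger writes diagnostic-level output so a broken build can be investigated after the fact.

    msbuild Build.proj /t:Rebuild /p:Configuration=Release /flp:LogFile=build.log;Verbosity=diagnostic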
As for tool recommendations, I've used FinalBuilder, Visual Build Pro, MSBuild/Team Build, NAnt, CruiseControl, and CIFactory, plus good old-fashioned batch files.
Each has its pros and cons. I'm not going to make a recommendation, except to say that the products with decent UI support were a little easier to work with, but at times far less powerful. If you're working with Visual Studio, MSBuild is very powerful, but it has a somewhat steep learning curve.
As for tools delivered with MS Visual Studio, you might want to use MSBuild. The additional Community toolsets for MSBuild will even give you the ability to check out code from Subversion and zip output; see the sketch below.
We're using it successfully in our company. The project consists of several solutions with 100+ subprojects. Works like a charm.
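For instance, with the MSBuild Community Tasks extension installed, a project file can check out from Subversion, build, and zip the output along these lines (the repository URL, paths, and target name are placeholders):

    <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
    <Target Name="PackageBuild">
      <!-- SvnCheckout and Zip come from the Community Tasks library -->
      <SvnCheckout RepositoryPath="https://example.com/svn/trunk" LocalPath="src" />
      <MSBuild Projects="src\MySolution.sln" Properties="Configuration=Release" />
      <ItemGroup>
        <OutputFiles Include="src\bin\Release\**\*" />
      </ItemGroup>
      <Zip Files="@(OutputFiles)" ZipFileName="drop\build.zip" />
    </Target>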
Visual Build Pro is nice, if your build machines are Windows. I think this would fill the requirement you have about QA owning the system. But don't get me wrong, it's pretty powerful.
We use CruiseControl.NET and UppercuT (which uses NAnt) to do this. UppercuT uses conventions for building so it makes it really easy for someone to get started by answering three questions (What is the solution named? What is the path to source control? What is your company's name?) and you are building.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
We use Hudson as the build bot for building our big Java web app from Ant build scripts. Hudson is pretty sweet for our purposes. It has a master/slave setup, so builds can be done concurrently (on a timer or on demand). Slave nodes can be any OS/hardware combo, provided the needed build tools are already on them and they're on the network (and won't crash every 10 min).
There's a full web-based interface, including live console output, change logs, and build artifacts available across the network, including from previous builds (if successful). Awesomesauce!