Dev and deploy management with SVN of an ASP.NET web site

We have a .NET solution for a website, consisting of 5 projects, and there are a few (fewer than 10) developers working on it. We deploy almost daily.
The question is how to set up the SVN repository to support this scenario (the daily deploy), keeping in mind that not every committed file should go to production; there is a QA check before deploying.

Try out TeamCity
(a CI tool), as it's free for smaller amounts of CI. This may be better for you than CruiseControl.NET, as CCNET is very configuration-heavy, with everything done via XML. TeamCity uses wizards to create the scripts to manage releases.
If you need any other help on CI then let me know, as it's something I am evangelistic about.

What you want to do is commonly referred to as Continuous integration (CI).
While you can do that using Subversion, it is probably not the right tool for the job.
There is special CI software, which will allow you to easily automate the necessary tasks (checkout from version control, compiling / building, running automatic tests, deployment etc). An example would be CruiseControl.NET.
As to "not every commited file should go to production", the common solution is to have a special "release" branch, which gets deployed. Only tested code is merged there (or have the trunk always be stable, otherwise same model). Of course, you can also (better: additionally) have tests before your automatic deployment, and only deploy if all tests pass.
Working with a release branch
In practice, this means that people check in their code as they produce it. Sometimes this code will work, sometimes not. When the release time draws nearer, a "release branch" is created in Subversion. This release branch is then effectively a frozen snapshot of the source as it was at the time of branching. Now this branch can be used to compile & deploy the application, which can then be tested.
No new code is checked into the branch (but checkins can continue elsewhere). Only if a bug is detected in the branch, will there be a checkin into the branch to fix it. This continues until the branch passes all tests. Then the branch can be released as a new version of the software; afterwards the branch will only be used if the released version needs to be patched.
Of course, any bugfixes checked into the branch need to also be put into the trunk (either by merging branch -> trunk, for which Subversion provides special support, or by reimplementing the fix in the trunk, as appropriate).
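For illustration, here is roughly what that workflow looks like with the Subversion command line (the repository URL, branch name and revision number are made-up placeholders):

    # create the release branch as a frozen snapshot of trunk
    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/branches/release-1.4 \
             -m "Create release branch 1.4"

    # a bug fix is committed on the release branch; merge that single revision back to trunk
    svn checkout http://svn.example.com/repo/trunk trunk-wc
    cd trunk-wc
    svn merge -c 1234 http://svn.example.com/repo/branches/release-1.4 .
    svn commit -m "Merge bugfix r1234 from release-1.4 back into trunk"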

Related

Version maintenance of artifacts in artifactory, when we do bug fixing in various environments

We use artifactory as a repo manager. We store packages for releases in repo libs-release-local and snapshot packages in libs-snapshot-local.
For example: if we send the war (web app) to test, it should come from libs-snapshot-local, and if it goes to stage & prod it will come from libs-release-local.
Here is a scenario where I needed help:
Once the war is certified as good on the test server, we send the same code to stage.
We saw a bug after deploying to stage, so we changed the code and built it again; obviously it cannot go into the same release version (in Artifactory we cannot overwrite releases).
So,
What will happen if we find 10 bug fixes, one at a time, after each deployment to stage?
What if we realize there are bug fixes needed after going to prod?
Artifactory will end up with a bulk of folders with so many version names. Is that good practice? Or is anything wrong in our procedure?
Thanks!
I suggest reading the Binary repository management Refcard first.
You need to define your strategy for your folders and wars (web applications); it is already good practice to use different repos for different purposes (snapshot/release).
The process for maintenance is simple:
fix the bug and increase the version, then send it to libs-snapshot-local for testing
after testing, i.e. once QA has passed, the packages are promoted to the release/stage repo libs-release-local for public use again.
In this case, a bug fix follows the same procedure as normal development.
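As a rough sketch of that flow with Maven (the version numbers are just an example of the idea, not a prescribed setup; which repository "deploy" targets is controlled by the distributionManagement section of your pom):

    # bug fix goes out as a new snapshot version for testing
    mvn versions:set -DnewVersion=1.2.1-SNAPSHOT
    mvn clean deploy        # war ends up in libs-snapshot-local

    # once QA has passed, release the same change under a new, higher release version
    mvn versions:set -DnewVersion=1.2.1
    mvn clean deploy        # war ends up in libs-release-local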
Or you can refine your questions to make them clearer.

What do Production Phase, Development Phase and deployment mean in Visual Studio 2010

I am not looking for definitions of all these terms, but am interested in knowing how developers work when they are developing software in ASP.NET.
My senior has divided the project into 2 folders: the first one is a Development (D) folder, the second a Subversion (S) folder. Both folders contain the same files.
While programming (development), in which folder should I work? (I mean, from which folder should I open the files to start programming: the D folder or the S folder?)
We are using Subversion and Visual Studio 2010.
If anyone could explain the "deployment scenario" to me, I would be very thankful.
Thanks.
We use SVN at our site (TortoiseSVN as client).
Each project has its own repository with three areas: Trunk, Releases, and Branches. Trunk is the work in progress, Releases holds snapshots of the source for major releases, and a new branch gets created for each particular task. Upon completion a branch gets merged back into Trunk, and eventually a release gets created from Trunk.
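In command-line terms (TortoiseSVN drives the same operations through its dialogs; the URLs and names below are placeholders), the flow looks roughly like this:

    # start a task branch from Trunk, work on it, then merge it back
    svn copy http://svn.example.com/proj/trunk \
             http://svn.example.com/proj/branches/task-123 -m "Branch for task 123"
    # ... commits happen on branches/task-123 ...
    svn checkout http://svn.example.com/proj/trunk trunk-wc
    cd trunk-wc
    svn merge --reintegrate http://svn.example.com/proj/branches/task-123 .
    svn commit -m "Reintegrate task-123 into trunk"

    # eventually cut a release snapshot from Trunk
    svn copy http://svn.example.com/proj/trunk \
             http://svn.example.com/proj/releases/2.0 -m "Release 2.0"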
Development phase and Production phase may seem confusing, but I'm guessing development means code in the process of being developed, and production means code that has been released into the production environment. You generally don't want to do development in the production environment; you should start in the development phase and, once the functionality is thoroughly tested, promote it to production.

Flyway-1.7: workaround for dealing with branching? (Flyway issue 138)

Flyway is a very nice tool to automate database updates (also called migrations). However, as of version 1.7 it relies on a completely linear sequence of migrations. This assumption is immediately void if you have a production system for which you have to deliver fixes while you are already developing new stuff. The FAQ argues correctly that this is a non-issue for the production system itself, but if you have development and/or QA systems that are already on the development branch, you need to run the migrations from the fixes for the production version out of band.
A solution that would allow this is pending with Issue 138, but is not done yet. Since this is pretty much a deadly problem: are there any clever workarounds if I want to use it right now?
The approach I recommend (and which becomes almost essential in a Continuous Delivery/Deployment environment) is to use Feature Toggles and release from HEAD, instead of using feature or release branches. This is then combined with backward-compatible migrations to completely alleviate this problem.
If for some reason that isn't an option for you, you don't have to wait very much longer.
Flyway 1.8 (which will include the fix for 138) will be out soon.
The problem is obsolete since Flyway version 2.0: if you set the outOfOrder flag, Flyway will also execute migrations with earlier version numbers that have not been applied yet. You do, however, need to make sure that such out-of-band migrations can be applied in any order relative to the later migrations, or you will run into serious trouble.
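For example, with the Flyway 2.x command-line tool (the same setting exists for the Maven plugin and the Java API):

    # in the command-line tool's properties file
    flyway.outOfOrder=true

    # or per invocation
    flyway -outOfOrder=true migrate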
With Flyway-1.7 you could make the following workaround. If you have a development and a production branch, you could have separate instances of flyway including separate metadata tables (say, SCHEMA_HISTORY and SCHEMA_HISTORY_DEV) for the production and the development branch. On the production server there is only the SCHEMA_HISTORY and you work as usual; for the development server you have both, and each time you run flyway you first run it on the production branch sqls with the SCHEMA_HISTORY and then on the development branch sqls with the SCHEMA_HISTORY_DEV.
When you switch branches you have to merge the SCHEMA_HISTORY_DEV into SCHEMA_HISTORY. (You need to exclude the initial revision and reset the CURRENT_VERSION on SCHEMA_HISTORY.) And when flyway-1.8 comes out, you do this merge and throw SCHEMA_HISTORY_DEV away.
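A sketch of what the two runs on the development server could look like. The flags are those of the newer Flyway command line, shown only to illustrate the idea; with 1.7 you would set the equivalent table/location properties of the Maven plugin or the Java API, so treat the exact option names as assumptions:

    # 1) apply the production-branch migrations, tracked in SCHEMA_HISTORY
    flyway -table=SCHEMA_HISTORY -locations=filesystem:sql/prod migrate

    # 2) then apply the development-branch migrations, tracked separately in SCHEMA_HISTORY_DEV
    flyway -table=SCHEMA_HISTORY_DEV -locations=filesystem:sql/dev migrate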

MsTest noob - how to set up testing infrastructure the right way

We are a MSFT shop with a far-reaching MSDN license.
After many years of doing things wrong, we finally have to start doing automated testing.
My group is the guinea pig for this. We need to create what was not there before. We looked at the multitude of options out there. Some people get by just fine with open-source alternatives such as CC.Net, Bamboo, MbUnit, etc. We want to give MsTest, CodedUI and Team Build a good try... might as well, because of the MSDN licensing and the MSFT focus.
The plus and minus of doing things the MSFT way is that MSFT makes monolithic things. You have to install various tools that play nicely with each other, but not necessarily with outsiders. The plus is that when things are done correctly, it should all function rather smoothly. There is the option of gated check-ins, of using TFS to store the reports, etc.
Frankly, I am confused by all of the options. Our traditional build system was hacked together with a bunch of perl, batch scripts and executables, but now the build team has switched to Team Build, which ought to be cleaner, but for the most part it is just a wrapper around the same old perl crap.
I am inclined to hack things together for testing too, because I can at least see what the pieces are. So, I envision the poor man's version as:
* A dedicated fast computer to run tests
* Some script to copy build files (test code as well as product code) over to that computer.
* A batch/perl script which would run mstest.exe from the command line and execute a few test batches with a by-category filter on some test dlls (the product is so huge that we do want to organize tests by various categories).
* Some script which will invoke the latter script remotely from the build server using psexec.exe (http://technet.microsoft.com/en-us/sysinternals/bb897553), as well as grabbing the XML output from a shared drive and then sending out an email with results to those who are interested. A rough sketch of this follows below.
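Roughly what those two scripts might boil down to (machine names, paths, categories and credentials are all made up):

    rem run-tests.cmd, executed on the dedicated test machine
    "%ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" ^
        /testcontainer:C:\tests\Product.Tests.dll ^
        /category:"Smoke" ^
        /resultsfile:\\fileserver\results\smoke.trx

    rem kicked off remotely from the build server
    psexec \\testmachine01 -u DOMAIN\builduser -p secret C:\tests\run-tests.cmd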
This can probably work, but then I have to worry about how well error handling can work with so many potential points of failure. It would be nice to configure things the "right way", taking advantage of whatever MSFT has cooked up. I am just not sure where to turn for a good guide. Have you done something like this?
Eventually we will want to have a farm of test computers, in case we run out of the allotted time. Something else of concern: for Coded UI tests to succeed, I think a user has to be logged in, so I am not sure psexec will be of much help here.
Can you share your positive/negative experience, point me to a good guide perhaps? Thanks!
Here are some tips off the top of my head if you want to get started with testing using the MS tools:
If you have an MSDN subscription, install a test rig by installing the Test Controller on your network and the Test Agent service on each of the machines that will be collecting diagnostic data. See the following link for reference: http://msdn.microsoft.com/en-us/library/dd293551.aspx.
Add a Test Project to your solution. See the first part of the following blog post: http://blogs.microsoft.co.il/blogs/eranruso/archive/2010/03/27/visual-studio-2010-coded-ui-test-user-guide-create-a-simple-coded-ui-test.aspx.
Automated test options can be configured through the .testsettings file(s) that are added automatically when you add a Test project (you can also manually add these files to your solution).
Install Team Foundation Server (2010 recommended) in order to take advantage of automating your tests with a daily build. You will also need TFS 2010 if you want to use the VS2010 Test Manager tool to define test environments and plan manual tests (these can be fully automated with CodedUI). Customize your new automated build to set up / deploy your application after the build, and set the build to run tests. Deployment will likely not be necessary for unit tests, but it will be for Web Performance and CodedUI test types.
If you have VS Ultimate or Test Professional licenses, you can also go further and set up virtual test labs using "Lab Management" features.

What artifacts to save for a nightly build?

Assume that I set up an automatic nightly build. What artifacts of the build should I save?
For example:
Input source code
output binaries
Also, how long should I save them, and where?
Do your answers change if I do Continuous Integration?
You shouldn't save anything for the sake of saving it. You should save it because you need it (e.g., QA uses nightly builds to test). At which point, "how long to save it" becomes however long QA wants them.
I wouldn't "save" source code so much as tag/label it. I don't know what source control you're using, but tagging is trivial (in performance & disk space) for any quality source control system. Once your build is tagged, unless you need the binaries, there really isn't any benefit to just having them around, because you can simply re-compile from source when necessary.
Most CI tools let you tag on each successful build. This can become problematic for some systems as you can easily have 100+ tags a day. For such cases I recommend still running a nightly build and only tagging that.
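With Subversion, for example, tagging the nightly is just a cheap server-side copy (the URL and tag name are placeholders):

    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/tags/nightly-2011-06-15 \
             -m "Tag nightly build 2011-06-15"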
Here are some artifacts and pieces of information that I usually keep for each build:
The tag name of the snapshot you are building (tag and do a clean checkout before you build)
The build scripts themselves, or their version number (if you treat them as a separate project with its own version control)
The output of the build script: logs and final product
A snapshot of your environment:
compiler version
build tool version
libraries and dll/libs versions
database version (client & server)
ide version
script interpreter version
OS version
source control version (client and server)
versions of other tools used in the process, and everything else that might influence the content of your build products. I usually do this with a script that queries all this information and logs it to a text file that should be stored with the other build artifacts; a rough sketch of such a script is below.
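A bare-bones version of such a snapshot script could look like the following; the tools queried here are just placeholders, the real list is whatever your build actually uses:

    #!/bin/sh
    # dump the version of everything that can influence the build output
    OUT=build-environment.txt
    {
      date
      uname -a                   # OS version
      svn --version | head -1    # source control client
      gcc --version | head -1    # compiler (or csc, javac, ...)
      make --version | head -1   # build tool (or msbuild, ant, ...)
      mysql --version            # database client, if relevant
    } > "$OUT" 2>&1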
Ask yourself this question: "if something entirely destroys my build/development environment, what information would I need to create a new one so I can redo my build #6547 and end up with exactly the same result I got the first time?"
Your answer is what you should keep at each build and it will be a subset or superset of the things I already mentioned.
You can store everything in your SCM (I'd recommend a separate repository), but in this case your question of how long you should keep the items loses its sense. Alternatively, you can store it in zipped folders or burn a CD/DVD with the build result and artifacts. Whatever you choose, have a backup copy.
You should store them as long as you might need them. How long, will depend on your development team pace and your release cycle.
And no, I don't think it changes if you do continuous integration.
This isn't a direct answer to your question, but don't forget to version control the nightly build setup itself. When the project structure changes, you may have to change the build process, which will break older builds from that point on.
In addition to the binaries, as everyone else has mentioned, I would recommend setting up a symbol server and a source server and making sure you get the correct information out and into those. It will aid debugging tremendously.
We save the binaries, stripped and unstripped (so we have exactly the same binary, once with and once without debug symbols). Furthermore, we build everything twice, once with debug output enabled and once without (again, stripped and unstripped, so every build results in 4 binaries). The build is stored in a directory named after the SVN revision number. That way we can always retrieve the source from the SVN repository by simply checking out that very revision (so the source is archived as well).
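Roughly, that archiving step amounts to something like this (paths and file names are placeholders):

    # archive the four binaries of this build under the SVN revision number
    REV=$(svnversion .)
    mkdir -p /builds/archive/$REV
    cp out/app-debug out/app-debug.stripped out/app-release out/app-release.stripped \
       /builds/archive/$REV/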
A surprising one I learned about recently: If you're in an environment that might be audited you'll want to save all the output of your build, the script output, the compiler output, etc.
That's the only way you can verify your compiler settings, build steps, etc.
Also, how long to save them for, and where to save them?
Save them until you know that build won't be going to production, in other words as long as you have the compiled bits around.
One logical place to save them is your SCM system. Another option is to use a tool that will automatically save them for you, like AnthillPro and its ilk.
We're doing something close to "embedded" development here, and I can tell you what we save:
the SVN revision number and timestamp, as well as the machine it was built on and by whom (also burned into the build binaries)
a full build log, showing whether it was a full/incremental build, any interesting (STDERR) output the data baking tools produced, a list of files compiled and any compiler warnings (this compresses very well, being text)
the actual binaries (for anywhere from 1-8 build configurations)
files produced as a side effect of linking: a linker command file, address map, and a sort of "manifest" file indicating what was burned into the final binaries (CRC and size for each), as well as the debugging database (.pdb equivalent)
We also mail out the result of running some tools over the "side-effect" files to interested users. We don't actually archive these since we can reproduce them later, but these reports include:
total and delta of filesystem size, broken down by file type and/or directory
total and delta of code section sizes (.text, .data, .rodata, .bss, .sinit, etc)
When we have unit tests or functional tests (e.g. smoke tests) running, those results show up in the build log.
We've not thrown out anything yet -- granted, our target builds usually end up at ~16 or 32 MiB per configuration, and they're fairly compressible.
We do keep uncompressed copies of the binaries around for 1 week for ease of access; after that we keep only the lightly compressed version. About once a month we have a script that extracts each .zip that the build process produces and 7-zips a whole month of build outputs together (which takes advantage of only having small differences per build).
An average day might have a dozen or two builds per project... The buildserver wakes up about every 5 minutes to check for relevant differences and builds. A full .7z on a large very active project for one month might be 7-10GiB, but it's certainly affordable.
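The monthly repack is essentially this (archive and directory names are made up):

    # unpack every per-build zip for the month, then solid-compress the whole lot
    mkdir 2011-06 && cd 2011-06
    for z in ../builds/2011-06-*.zip; do 7z x -o"$(basename "$z" .zip)" "$z"; done
    cd ..
    7z a -mx=9 -ms=on builds-2011-06.7z 2011-06/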
For the most part, we've been able to diagnose everything this way. Occasionally there's a hiccup on the build system and a file isn't actually at the revision it's supposed to be at when a build happens, but there's usually enough evidence of this in the logs. Sometimes we have to dig out a tool that understands the debugging database format and feed it a few addresses to diagnose a crash (we have automatic stack dumps built into the product). But usually all the information needed is there.
We haven't had to crack open the .7z archives yet, I should mention. But we have the info there, and I have some interesting ideas on how to mine bits of useful data from them.
Save what can't be reproduced easily. I work on FPGAs, where only the FPGA team has the tools and some cores (libraries) of the design are licensed to compile on only one machine. So we save the output bitstreams. But we try to check them in over one another rather than keeping one per date/time/version stamp.
Save as in check in to source code control, or just keep on disk? Save nothing to source code control. All derived files should be visible in the file system and available to developers. Don't check in binaries, code generated from XML files, message digests, etc. A separate packaging step will make these end products available. As you have the change number, you can always reproduce the build if necessary, assuming of course that everything you need to do a build is completely in the tree and is available to all builds by syncing.
I would save your built binaries for exactly as long as they have a chance to go into production or be used by some other team (like a QA group). Once something has left production, what you do with it can vary a lot. For a lot of teams, they'll keep just their most recent prior build around (for rollback) and otherwise discard their builds.
Others have regulatory requirements to keep anything that went into production around for as long as seven years (banks). If you are a product company, I'd keep around any binary a customer might have installed in case a tech support guy wants to install the same version.
