How can we improve our deployment and build systems? (ASP.NET)

We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When developers finish a feature, they individually upload their changes to Staging. If the site is stable (determined by really loose testing), we push the changes to Dev, then User Acceptance, and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.

If you're willing, you could become the Continuous Integration champion of your company. Do some research on a good CI process for TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input, and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.

Whose management? And how far removed are they from you?
That is: if you are just a pleb developer and your managers are the senior developers, find another job. If you are a senior developer and your managers are the CIO types, i.e. actually running the business, then it is your job to change it.

Tell them that if you were using a key feature of the very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out and when. That way, in the event of a subtle bug getting past user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
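For example, with the TFS command-line client, comparing two labeled versions could look something like this (a sketch; the project path and label names are hypothetical):
rem Show every change between the sources labeled release-1.0 and release-1.1
tf difference $/OurProject /version:Lrelease-1.0~Lrelease-1.1 /recursive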

One of the most important reasons for using tags is that you can roll back to a specific point in time. Think of it as an image backup: if something bad gets deployed, you can confidently roll back to a previous working version.
Also, developers can quickly grab a tag (dev, prod or whatever) and deploy it to their development PC... a feature I use all the time to debug production problems.
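Grabbing a tagged version in TFS is a one-liner (a sketch, assuming a label named prod-2.3 exists on a project named OurProject):
rem Pull the exact sources that produced the prod-2.3 deployment into the local workspace
tf get $/OurProject /version:Lprod-2.3 /recursive /force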

So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't that someone be you?
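With TFS, labeling could be a single command tacked onto the end of the build script (a sketch; the label name and project path are made up):
rem Label the sources that went into build 1.0.42
tf label Build-1.0.42 $/OurProject /recursive /comment:"Automated label applied by the build script"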
You also need to tell management that you believe the current level of testing is not sufficient. This is not a problem unique to your organisation, and they'll probably say they already know; no harm in mentioning it, though, rather than waiting for a major problem to arrive.
As for individuals doing builds versus an automated build process: whether you need automation depends on how many developers there are and how often you do builds.

What is the problem? As you said, you can't tell whether management sees one. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The pitch has to be of the form: "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of reduced costs, increased profits, reduced time, or reduced use of resources. "Because it's a widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything, or assumes management can't possibly have failed to see it. But your world is a different world than theirs.

I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you ever had a change that went to production but never got into source control, and was then accidentally wiped out on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It sounds like you are moving changes between environments rather than builds. The key distinction is that if two changes work great in UAT because of how they interact, promoting only one of them to production could break it there. Promoting a consistent body of code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
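A minimal sketch of the "build once, promote the artifact" idea (the solution name, paths, and archive tool are all hypothetical):
rem Build once, producing a single deployable output
msbuild WebApp.sln /p:Configuration=Release
rem Zip the exact output that will be tested (assumes 7-Zip is on the PATH)
7z a webapp-build-42.zip WebApp\bin WebApp\*.aspx WebApp\Web.config
rem Promote the identical archive through UAT and then production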
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would get from using tags, without actually having to go around tagging things. The system just records the timestamp, and every push of the code through the testing environments is tied to a known snapshot of code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
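TFS supports this directly through date-based versionspecs, so the same retrieval could look like this (a sketch; the timestamp and project path are examples):
rem Get the sources exactly as they stood when "Deliver to Stage" was pressed
tf get $/OurProject /version:D2009-06-01T14:30 /recursive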

If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto deploy the good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing, and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.

Related

Drupal development workflow for teams

In my last Drupal project we were 5 people coding and installing new modules, while at the same time our client was putting up content. Since we chose to have only one server for simplicity, there were times when many people needed to write to the same files, like style.css or page.tpl.php, or when someone's broken code would prevent others from working.
Are there any best practices for a team that works with Drupal? How can we leverage code repositories or sandboxes?
A single server may appear to give you "simplicity", but what it actually gives you, as you've experienced, is utter chaos -- and you were lucky if it didn't result in unpleasant, hard-to-reproduce, harder-to-fix crashes. Don't settle for anything less than a "production" server (where your client can be working -- on content only -- if they like minor risks;-) and a "staging" one (where anything from the development team gets tested and tried for a while before promotion to production, which happens at a quiet and ideally prearranged time).
Second, use a version control system of some kind. Which one matters less than using one at all: svn is popular and simple, the latest fashion (for excellent reasons) is the distributed ones such as hg and git, Microsoft and others have commercial offerings in the field, etc.
The point is, whenever somebody's updating a file, they're doing so on their own client of the VCS. When a coherent set of changes is right, it's pushed to the VCS, and the VCS diagnoses and points out any "conflicts" (places where two developers may have made contradictory changes), so the developer who's currently pushing is responsible for editing the files and fixing the conflicts before their push is allowed to go through. Only then are "current versions" allowed to even go on the staging system for more thorough (and ideally automated!-) testing (or, better yet, a "continuous build" system).
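In svn terms, that round trip looks roughly like this (a sketch; the theme path is hypothetical, echoing the style.css example from the question):
# Bring in everyone else's changes; svn flags any conflicting files
svn update
# ...edit the conflicted files by hand, then mark each conflict as fixed...
svn resolved sites/all/themes/mytheme/style.css
# Only now push the coherent set of changes to the central repository
svn commit -m "Rework theme styling without clobbering others' changes"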
Basically, there should be two layers of defense against such conflicts as you observed, and you seem to have deployed neither. They're both essential, though, if forced under duress to pick just one, I guess I'd reluctantly pick the distinction between production and staging servers -- development will still be chaotic (intolerably so compared to the simple solidity of any VCS!) but at least it won't directly hurt the actual serving system;-).
Here's a great write-up about development workflow in Drupal. It sums up everything responded here so far and adds "Features", "Strongarm" and a few more tricks to the equation. http://www.lullabot.com/articles/site-development-workflow-keep-it-code

Advantages of a build server?

I am attempting to convince my colleagues to start using a build server and automated building for our Silverlight application. I have justified it on the grounds that we will catch integration errors more quickly, and will also always have a working dev copy of the system with the latest changes. But some still don't get it.
What are the most significant advantages of using a Build Server for your project?
There are more advantages than just finding compile errors earlier (which is significant):
Produce a full clean build for each check-in (or daily or however it's configured)
Produce consistent builds that are less likely to have just worked due to left-over artifacts from a previous build
Provide a history of which change actually broke a build
Provide a good mechanism for automating other related processes (like deploy to test computers)
Continuous integration reveals any problems in the big picture, as different teams/developers work in different parts of the code/application/system
Unit and integration tests run with each build go even deeper and expose problems that might never be seen on the developer's workstation
Free coffees/candy/beer. When someone breaks the build, he/she makes it up to the other team members...
I think if you can convince your team members that there WILL be errors and integration problems that are not exposed during the development time, that should be enough.
And of course, you can tell them that the team will look ancient in the modern world if you don't run continuous builds :)
See Continuous Integration: Benefits of Continuous Integration :
On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk. My mind still floats back to that early software project I mentioned in my first paragraph. There they were at the end (they hoped) of a long project, yet with no real idea of how long it would be before they were done.
...
As a result projects with Continuous Integration tend to have dramatically less bugs, both in production and in process. However I should stress that the degree of this benefit is directly tied to how good your test suite is. You should find that it's not too difficult to build a test suite that makes a noticeable difference. Usually, however, it takes a while before a team really gets to the low level of bugs that they have the potential to reach. Getting there means constantly working on and improving your tests.
If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.
From my personal experience, setting up a build server and implementing CI process, really changes the way the project is conducted. The act of producing a build becomes an uneventful everyday thing, because you literally do it every day. This allows you to catch things earlier and be more agile.
Also note that setting build server is only a part of the CI process, which includes setting up tests and ultimately automating the deployment (very useful).
Another side-effect benefit that often doesn't get mentioned is that a CI tool like CruiseControl.NET becomes the central issuer of all version numbers for all branches, including internal RCs. You can then require your team to always ship a build that came out of the CI tool, even if it's a custom version of the product.
Early warning of broken or incompatible code means that all conflicts are identified asap, thereby avoiding last minute chaos on the release date.
When your boss says "I need a copy of the latest code ASAP" you can get it to them in < 5 minutes.
You can make the build available to internal testers easily, and when they report a bug they can easily tell you "it was the April 01 nightly build" so that you can work with the same version of the source code.
You'll be sure that you have an automated way to build the code that doesn't rely on libraries / environment variables / scripts / etc. that are set up in developers' environments but hard to replicate by others who want to work with the code.
We have found the automatic VCS tagging of the exact code that produced a version very helpful for going back to a specific version to reproduce an issue.
Integration is a blind spot
Integration often doesn't get any respect - "we just throw the binaries into an installer thingie". If this doesn't work, it's the installer's fault.
Stable Build Environment
Prevents excuses such as "this error only occurs when built on Joe's machine". Prevents accidentally building against old dependency libraries on Mike's machine.
True dogfooding
Your in-house testers and users get a true customer experience. Your developers have a clear reference build for reproducing errors.
My manager told us we needed to set one up for two major reasons. Neither was really about the final product; both were about making sure that what gets checked in and worked on is correct.
First, to clean up DLL hell. When someone builds on their local machine, they can be pointing at any reference folder, and lots of projects were getting built against the wrong versions of DLLs because someone hadn't updated their local folder. On the build server, everything is always built from the same source; all you have to do is get latest to pick up the latest references.
The second major thing for us was a way to support projects we have little knowledge of. Any developer can grab the source and make a minor fix if required, without messing with hours of setup or hunting for references. We have an overseas team that works primarily on one project, but if there is a rush fix we need to do during US hours, we can get latest and build without worrying about broken source or what didn't get checked in. Gated check-ins save everyone else on your team time.

Forbidding developers to commit code during the weekly build

Our development team (about 40 developers) does a formal build every two weeks. We have a process whereby, on "build day", all developers are forbidden to commit code into SVN. I don't think this is a good idea, because:
The build (and BVT) can take days, even weeks when things go badly.
People can't commit code when they want to, so they stop working.
People then commit their code in one huge batch, which makes the commit comments hard to write.
I want to know if your team has the same policy, and if not, how you handle this situation.
Thanks
Pick a revision.
Check out the code from that revision.
Build.
???
Profit.
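In svn terms, the joke is barely a joke (a sketch; the revision number, URL, and build script are made up):
# Check out the tree exactly as it stood at revision 4231
svn checkout -r 4231 http://svn.example.com/repo/trunk build-4231
# Build from that fixed snapshot; commits made afterwards can't affect it
cd build-4231 && ./build.sh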
Normally, a build is made from labeled code.
If the label is defined (and does not move), every developer can commit as much as he/she wants: the build will proceed from a fixed and well-defined set of code.
If fixes need to be made to the set of code being built, a branch can be created from that label; minor fixes are made there to achieve a correct build, then merged back to the current development branch.
A "development effort" (like a build with its tweaks) should never block another development effort (the daily commits).
Step 1: svn copy /trunk/your/project/goes/here /temp/build
Step 2: Massage your sources in /temp/build
Step 3: Perform the build in /temp/build. If you encounter errors, fix them in /temp/build and build again
Step 4: If successful, svn move /temp/build /builds/product/buildnumber
This way, developers can check in whenever they want and are not disturbed by the daily/weekly/monthly/yearly build
Sounds frustrating. Is there a reason you guys are not doing Continuous Integration?
If that sounds too extreme for you, then definitely invest some time in learning how branching works in SVN. I think you could convince the team to either develop on branches and merge into trunk, or else commit the "formal build" to a particular tag/branch.
We create a branch for every ticket or new feature, even if the ticket is small (eg takes only 2 hours to fix).
At the end of the coding part of each iteration, we decide which tickets to include in the next release. We then merge those tickets into trunk and release the software.
There are other steps within that process where testing is performed by another developer on each ticket branch before the ticket is merged to trunk.
Developers can always code by creating their own branch from trunk at any time. Note we are a small team with only 12 developers.
Both Kevin and VonC have rightly pointed out that the build should be made from a specific revision of the code and should never block developers from committing new code. If this is somehow a problem, you should consider using a version management system that has both centralized AND local repositories. For example, in Mercurial, there is a central repository just like in svn, but developers also each have a local repository. This means that when a developer makes a commit, he only commits to his local repository and the changes are not seen by other developers. Once he is ready to share the code, the developer just pushes the changes from his local repository to the centralized repository.
The advantage of this kind of approach is that developers can commit smaller pieces of code, even if they would break a build, because the changes are only applied to the local repository. Once the changes are stable enough, they can be pushed to the centralized repository. This way a developer keeps the advantages of source control even when the centralized repository is down.
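The day-to-day commands are short (a sketch):
# Commits land only in your local repository; nobody else sees them yet
hg commit -m "Checkpoint a half-finished refactoring"
# When the work is stable, publish everything to the central repository
hg push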
Oh, and you'll be looking at branches in a whole new way.
If you became interested in mercurial, check out this site: http://hginit.com
I have worked on projects with a similar policy. The reason we needed such a policy is that we were not using branches. If developers are allowed to create a branch, then they can make whatever commits they need to on that branch and not interrupt anyone else -- the policy becomes "don't merge to main" during the weekly-build period.
Another approach is to have the weekly-build split off onto a branch, so that regardless of what gets checked in (and possibly merged), the weekly build will not be affected.
Using labels, as VonC suggested, is also a good approach. However, you need to consider what happens when a labeled file needs a patch for the weekly build -- what if a developer has checked in a change to that file since it was labeled, and that developer's changes should not be included in the weekly build? In that case, you will need a branch anyway. But branching off a label can be a good approach too.
I have also worked on projects that make branches like crazy and it becomes a mess trying to figure out what's happening with any particular file. Changes may be committed to multiple branches in the same timeframe. Eventually the merge conflicts need to be resolved. This can be quite a headache. Regardless, my preference is to be able to use branches.
Wow, that's an awful way to develop.
Last time I worked in a really large team we had about 100 devs in 3 time zones: USA, UK, India, so we could effectively have 24 hour development.
Each dev would check the build tree and work on what they had to work on.
At the same time, there would be continuous builds happening. The build system would make its copy from the submitted code and build it. Any failures would go back to the most recent submitter(s) of code for that build.
Result:
Lots of builds, most of which compiled OK. These builds then kicked off automatic smoke-testing scenarios to find any unexpected bugs not caught during testing prior to committing.
Build failures found early, fixed early.
Bugs found early, fixed early.
Developers only wait the minimum time to submit (they have to wait until any other dev currently submitting has finished - a requirement that exists so the build servers have a point at which they can grab the source tree for a new build).
Most devs had two machines so they could work on a second bug while running their tests on the other machine (the tests were very graphical and would cause all sorts of focus issues, so you really needed a different machine to do other work).
Highly productive, continuous development with none of the dead time in your scenario.
To be fair, I don't think I could work in a place like the one you describe. It would be soul-destroying to work in such an unproductive way.
I strongly believe that your organization would benefit from Continuous Integrations, where you build very often, perhaps for every checkin to your code base.
I don't know if I'll get shot for saying this, but you should really move to a decentralized solution like Git. SVN is horrible about this; the fact that you can't commit basically stops people from working properly. With 40 people the switch is worth it, because everyone can keep working on their own stuff and push only what they want. The build server can then do whatever it wants and build without affecting anyone.
Yet another example of why Linus was right when he said that, in almost all cases, a decentralized solution like Git works best for real-life teams.

Who writes the automated UI tests? Developers or Testers?

We're in the initial stages of a large project, and have decided that some form of automated UI testing is likely going to be useful for us, but have not yet sorted out exactly how this is going to work...
The primary goal is to automate a basic install and run-through of the app, so if a developer causes a major breakage (eg: app won't install, network won't connect, window won't display, etc) the testers don't have to waste their time (and get annoyed by) installing and configuring a broken build
A secondary goal is to help testers when dealing with repetitive tasks.
My question is: Who should create these kinds of tests? The implicit assumption in our team has been that the testers will do it, but everything I've read on the net always seems to imply that the developers will create them, as a kind of 'extended unit test'.
Some thoughts:
The developers seem to be in a much better position to do this, given that they know control IDs, classes, etc, and have a much better picture of how the app works
The testers have the advantage of NOT knowing how the app is working, and hence can produce tests which may be much more useful
I've written some initial scripts using IronRuby and White. This has worked really well, and is powerful enough to do literally anything, but then you need to be able to write code to write the UI tests
All of the automated UI test tools we've tried (TestComplete, etc) seem to be incredibly complex and fragile, and while the testers can use them, it takes them about 100 times longer and they're constantly running into "accidental complexity" caused by the UI test tools.
Our testers can't code, and while they're plenty smart, all I got were funny looks when I suggested that testers could potentially write simple ruby scripts (even though said scripts are about 100x easier to read and write than the mangled mess of buttons and datagrids that seems to be the standard for automated UI test tools).
I'd really appreciate any feedback from others who have tried UI automation in a team of both developers and testers. Who did what, and did it work well? Thanks in advance!
Edit: The application in question is a C# WPF "rich client" application which connects to a server using WCF
Ideally it should really be QA who end up writing the tests. The problem with using a programmatic solution is the learning curve involved in getting the QA people up to speed with using the tool. Developers can certainly help with this learning curve and help the process by mentoring, but it still takes time and is a drag on development.
The alternative is to use a simple GUI tool backed by a language (and data scripts) that enables QA to build scripts visually, delving into the finer details of the language only when really necessary - development can get involved here too.
The most successful attempts I've seen have definitely been with the latter, but setting this up is the hard part. Selenium has worked well for simple web applications and simple threads through the application. JMeter (for scripted web conversations with web services) has also worked well... Another option is an in-house test harness: a simple tool on top of a scripting language (Groovy, Python, Ruby) that lets QA feed test data into the application either via a GUI or via data files. The data files can be simple properties files or, in more complex cases, structured data files (something like YAML or even Excel). That way QA can build basic smoke tests to start, and later expand them into various scenario-driven tests.
Finally... I think rich client apps are way more difficult to test in this way, but it depends on the nature of the language and the tools available to you...
In my experience, testers who can code will switch jobs for a pay raise as developers.
I agree with you on the automated UI testing tools. Every place I've worked that was rich enough to afford WinRunner or LoadRunner couldn't afford the staff to actually use it. The prices may have changed, but back then, these were in the high 5-digit to low 6-digit price tags (think of the price of a starter home). The products were hard to use, and were usually kept uninstalled in a locked cabinet because everyone was afraid of getting in trouble for breaking them.
I worked over 7 years as an application developer before I finally switched to testing and test automation. Testing is much more challenging than coding, and any automation developer who wants to succeed should master testing skills.
Some time ago I put my thoughts on skill matrices in a couple of blog posts.
If you're interested in discussing it:
http://automation-beyond.com/2009/05/28/qa-automation-skill-matrices/
Thanks.
I think having the developers write the tests will be of the most use. That way, you can get "breakage checking" throughout your dev cycle, not just at the end. If you do nightly automated builds, you can catch and fix bugs when they're small, before they grow into huge, mean, man-eating bugs.
What about the testers proposing the tests, and the developers actually writing them?
I believe at first it largely depends on the tools you use.
Our company currently uses Selenium (We're a Java shop).
The Selenium IDE (which records actions in Firefox) works OK, but developers need to manually correct mistakes it makes against our webapps, so it's not really appropriate for QA to write tests with.
One thing I tried in the past (with some success) was to write library functions as wrappers for Selenium functions. They read as plain English:
selenium.clickButton("Button Text")
...but behind the scenes they check for proper layout and tags on the button, that it has an ID, etc.
Unfortunately this required a lot of set up to allow easy writing of tests.
I recently became aware of a tool called Twist (from Thoughtworks, built on the Eclipse engine), which is a wrapper for Selenium, allowing plain English style tests to be written. I am hoping to be able to supply this to the testers, who can write simple assertions in plain English!
It automatically creates stubs for new assertions too, so the testers could write the tests, and pass them to developers if they need new code.
I've found the most reasonable option is to have enough specs that the QA folks can stub out the tests: basically figure out what they want to test at each "screen" or on each component, and stub those out. The stubs should be named so descriptively that they state exactly what they're testing. This also offers a way to crystallize functional requirements. In fact, doing the requirements in this fashion is particularly easy, and helps non-technical people really work through the muddy waters of their own thought process.
The stubs can be filled in via a combination of QA/dev people. This allows you to CHEAPLY train QA people as to how to write tests, and they typically slurp it up as it furthers their job security.
I think it depends mostly on the skill level of your test team, the tools available, and the team culture with respect to how developers and testers interact. My current situation is that we have a relatively technical test team; all testers are expected to have development skills. In our case, testers write the UI automation. If your test team doesn't have those skills, they will not be set up for success, and it may be best for developers to write your UI automation.
Other factors to consider:
What other testing tasks are on the testers' plate?
Who are your customers and what are their expectations related to quality?
What is the skill level of the development team and what is their willingness to take on test automation work?
-Ron

Is continuous integration worth it for small projects?

I've been pushing for continuous integration at my company since I joined 5 months ago, but having seen the type of applications we work on I'm starting to think that it might not be worth the effort of setting up each and every project for continuous integration.
If you work in a development department where the average project takes 2-3 weeks and once it's deployed you seldom if ever have to worry about it, is continuous integration worth the hassle of setting it up?
It probably depends on your process. If you have unit tests that cover your code, then continuous integration is worth every bit. I'm assuming you all work on a single module of work at a time, since the projects are 2-3 weeks long.
I don't think folks will run every test for every one of their commits, and continuous integration helps a lot here.
The other reason would be if your project is highly modularized. I've worked on systems with lots of modules, where a developer wouldn't functionally test the entire website before committing. Things might not even compile properly, because another module wouldn't build after the developer failed to check out the complete code.
I'd recommend continuous integration anyway. With setups like Hudson and CruiseControl, it doesn't take a lot of time to set up and pays for itself quickly.
Personally, I think CI and the various processes it encourages are always useful. Getting CI set up is rather trivial once you have the server itself in place: you're basically just copying a configuration file from one project, editing it, and creating a new project. I wouldn't forgo CI because of the "effort of setting up each and every project".
Continuous integration is not only a tooling matter, but also a set of processes (commit regularly, have a version control system...).
Concerning the CI software, you can install, configure and start to use Hudson in less than 10 minutes! So why would you go without any CI system?
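That claim is barely an exaggeration; Hudson's minimal setup really is about this short (a sketch):
# Download hudson.war from the project site, then start it:
java -jar hudson.war
# ...and point a browser at http://localhost:8080 to configure your first job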
It really depends on how fast you can set up an automated build and then get that hooked up to a CI server.
.NET
I've seen us go from no automated build to an automated build for a project in a few minutes using UppercuT. We use that and CruiseControl.NET (in the configuration, we add a line per project b/c we take advantage of the preprocessor).
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
If your many apps share any common components or modules, CI and tests will likely help you notice something breaking. If they really are all little throw-away, self-contained scripts, then you might not need CI, but it's a tough call.
As others have pointed out, once CI is set up, adding a new project is trivial, so I'd say go for it. One benefit you're going to see is that if any of your projects ever does change, you've already got CI and (hopefully) your unit tests ready to go, so you won't get any nasty surprises!
Continuous integration is not only about ensuring stuff works; it also allows you to DOCUMENT and TEST the release process as it would be done by a new developer.
This ensures that the customer gets what has been tested, not what the developer just throws together from his hard disk.
This may be extremely important for maintenance purposes.
