Patch vs. Hotfix vs. Maintenance Release vs. Service Pack

When you are somewhere between version 1 and version 2, what do you do to maintain your software?
The terms Patch, Hotfix, Maintenance Release, Service Pack, and others are all blurry from my point of view, with different definitions depending on who you talk to.
What do you call your incremental maintenance efforts between releases?

When I hear those terms this is what comes to mind:
Patch - Publicly released update to fix a known bug/issue
Hotfix - Update to fix a very specific issue, not always publicly released
Maintenance Release - Incremental update between service packs or software versions to fix multiple outstanding issues
Service Pack - Large update that fixes many outstanding issues; normally includes all patches, hotfixes, and maintenance releases that predate the service pack
That being said, that isn't how we do updates at all. We just increment the version and/or build number (which is based on the date) and call it an "Update". For most software I find that easier: you can easily see that one computer is running 1.1.50 vs. 1.2.25 and know which is newer.
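As an aside, comparing dotted version strings like those correctly means comparing them numerically, component by component, not as text. A minimal Python sketch (the version numbers are purely illustrative):

    # Compare dotted version strings numerically, component by component.
    # Plain text comparison gets this wrong: "1.10.0" < "1.9.0" as strings,
    # but 10 > 9 as numbers.
    def version_key(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))

    assert version_key("1.2.25") > version_key("1.1.50")  # 1.2.25 is newer
    assert version_key("1.10.0") > version_key("1.9.0")   # text order would lie here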

A hotfix is a fix for a specific issue that is applied while the system is still active (hot). This comes from older terms like hot swapping and hot switching. Yes, the term is commonly misused these days by people not involved in the industry.

I'd like to point to http://semver.org/ for an attempt to define version numbers in a sane manner, and the definitions given there actually fit closely to how I use version numbers (or how I wish I used them :))
As for the term definitions, I find patch and hotfix very similar, except that a "hotfix" is usually not publicly announced when applied to a service.
Maintenance Release and Service Pack fit fairly closely to the two lower denominations of version numbers: if you have a version number structure like X.Y.Z, a Maintenance Release would bump the Z and a Service Pack would bump the Y. I've really only heard these terms used for big, corporate products, though. I'm more acquainted with the minor/major version terms.
Of course, every shop has its own use of the terms, and it depends on which type of user you're targeting. For end-users of MMOs, for instance, every update is a "patch" because the user has to "patch their client" to apply it, while for end-users of more common software you often just have the terms "update" and "new version" (new major version).
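To make that mapping concrete, here is a minimal sketch of how the X.Y.Z scheme from semver.org lines up with the terms above; the bump function and its level names are my own illustration, not anything defined by semver:

    # X.Y.Z per http://semver.org/ : major.minor.patch.
    # A "Maintenance Release" bumps Z, a "Service Pack" bumps Y,
    # and a new major version bumps X and resets the rest.
    def bump(version: str, level: str) -> str:
        major, minor, patch = (int(p) for p in version.split("."))
        if level == "major":   # new major version
            return f"{major + 1}.0.0"
        if level == "minor":   # "service pack"
            return f"{major}.{minor + 1}.0"
        if level == "patch":   # "maintenance release" / patch
            return f"{major}.{minor}.{patch + 1}"
        raise ValueError(f"unknown level: {level}")

    assert bump("1.4.2", "patch") == "1.4.3"
    assert bump("1.4.2", "minor") == "1.5.0"
    assert bump("1.4.2", "major") == "2.0.0"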

Related

Apigee X - Upgrade process?

From an architectural, long-term planning perspective, is there any information available about the update/upgrade processes and/or timelines of Apigee X?
It is a SaaS, so I assume upgrades are seamless, backwards compatible, and applied automatically in the background, but without evidence that is obviously only an assumption. There seems to be no version / build number shown in the Apigee console either, but I am sure it is upgraded regularly, as there are release notes.

Branching and merging in TFS 2010

What is the best branching and merging strategy for a small development team making a lot of concurrent development on the same project?
We need to be able to easily apply a hotfix to a production release while other developers are working.
I was looking at the TFS Branching Guidance guide from CodePlex and can't seem to figure out which option is best for us.
Thanks
Without knowing your organization or how your team develops, it is hard (maybe impossible) to make a recommendation.
In our organization, the majority of our development is organized around releases, so we did a "branch on release" approach. That works great for us. We also do bug fixes, so we've implemented a "branch on feature" approach off of the production line for bug-fixes.
If you have different people all working on different features that might make it to production at different times, a "branch on feature" approach might work.
If you are all working on the same development line, a single "development" branch might work for you.
It took us months to finalize our branching strategy (for 14+ team projects, around 80 developers, and multiple applications). I don't expect that it will take quite as long for a smaller organization, but definitely spend some quality time thinking about this, and consider bringing in some outside expertise to give you guidance.
In addition to Robaticus's answer, you need to figure out why you want to do the parallel development.
In my opinion it is all about what you want to isolate.
If you do parallel development because multiple features are being developed, it depends on whether the features need to be released in the same package (version / release / give it a name) and whether you want to keep the different features from integrating with each other during development. The latter is important when the features interfere with each other and you only want to spend time on the integration when you merge them.
If you need isolation for any of these reasons, you will have multiple branches within one release, which you merge before you do the integration test.
If you do parallel development because you want to support multiple releases, then you need to know how many releases you must support (how many releases rolled out in production do you need to support, how many releases do you have in pre-production, etc.). In this case, the advice is to have a branch per release that you need to support.
It sounds like you need this branching strategy for your organization.
It also depends on the type of files that you are branching. If you have SSIS or SSRS files (or any other XML-based files), binary files, or any files that are not plain text (in contrast to C# source), then it is not easy to merge the files from two different branches. You then need some manual involvement to actually merge these files!
So as Robaticus already said, we need more information on your specific scenario to give more detailed guidance.

Drupal development workflow for teams

In my last Drupal project we were 5 people doing coding and installing new modules, and at the same time our client was putting up content. Since we chose to have only one server for simplicity, there were times when many people needed to write to the same files, like style.css or page.tpl.php, or when someone's broken code would prevent others from working.
Are there any best practices for a team that works with Drupal? How can we leverage code repositories or sandboxes?
A single server may appear to give you "simplicity", but what it gives you, as you've experienced, is utter chaos -- and you were lucky if it didn't result in unpleasant, hard-to-reproduce, harder-to-fix crashes. Don't settle for anything less than a "production" server (where your client can be working -- on content only -- if they like minor risks;-) and a "staging" one (where anything from the development team goes to get tested and tried for a while before promotion to production, which is done at a quiet and ideally prearranged time).
Second, use a version control system of some kind. Which one matters less than using one at all: svn is popular and simple, the latest fashion (for excellent reasons) is the distributed ones such as hg and git, Microsoft and others have commercial offerings in the field, etc.
The point is, whenever somebody's updating a file, they're doing so on their own client of the VCS. When a coherent set of changes is right, it's pushed to the VCS, and the VCS diagnoses and points out any "conflicts" (places where two developers may have made contradictory changes) so the developer who's currently pushing is responsible for editing the files and fixing the conflicts before their pushes are allowed to go through. Only then are "current versions" allowed to even go on the staging system for more thorough (and ideally automated!-) testing (or, better yet, a "continuous build" system).
Basically, there should be two layers of defense against such conflicts as you observed, and you seem to have deployed neither. They're both essential, though; if forced under duress to pick just one, I guess I'd reluctantly pick the distinction between production and staging servers -- development will still be chaotic (intolerably so compared to the simple solidity of any VCS!) but at least it won't directly hurt the actual serving system;-).
Here's a great writeup about development workflow in Drupal. It sums up everything said here so far and adds "Features", "Strongarm", and a few more tricks to the equation. http://www.lullabot.com/articles/site-development-workflow-keep-it-code

Advantages of a build server?

I am attempting to convince my colleagues to start using a build server and automated building for our Silverlight application. I have justified it on the grounds that we will catch integration errors more quickly, and will also always have a working dev copy of the system with the latest changes. But some still don't get it.
What are the most significant advantages of using a Build Server for your project?
There are more advantages than just finding compile errors earlier (which is significant); see the sketch after this list:
Produce a full clean build for each check-in (or daily, or however it's configured)
Produce consistent builds that are less likely to have "just worked" due to left-over artifacts from a previous build
Provide a history of which change actually broke a build
Provide a good mechanism for automating other related processes (like deploying to test computers)
Continuous integration reveals any problems in the big picture, as different teams/developers work on different parts of the code/application/system
Unit and integration tests run with each build go even deeper and expose problems that might not be seen on a developer's workstation
Free coffees/candy/beer. When someone breaks the build, they make it up to the other team members...
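As promised, a minimal sketch of the core loop: check out into a fresh directory, build, test, report. The repository URL and the make targets are placeholders for whatever your VCS and toolchain actually use, not any particular CI product's API:

    import shutil
    import subprocess
    import tempfile

    REPO_URL = "https://example.com/vcs/myproject"  # hypothetical repository

    def clean_build() -> bool:
        """Run one full clean build: fresh checkout, build, test.

        The fresh checkout is what makes the build "clean": no left-over
        artifacts from a previous build can make it accidentally succeed.
        """
        workdir = tempfile.mkdtemp(prefix="ci-build-")
        try:
            steps = [
                ["git", "clone", "--depth", "1", REPO_URL, workdir],
                ["make", "-C", workdir, "all"],   # build step (placeholder)
                ["make", "-C", workdir, "test"],  # unit/integration tests
            ]
            for step in steps:
                if subprocess.run(step).returncode != 0:
                    return False  # broken build: notify the team here
            return True           # green build: publish artifacts here
        finally:
            shutil.rmtree(workdir, ignore_errors=True)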
I think if you can convince your team members that there WILL be errors and integration problems that are not exposed during development, that should be enough.
And of course, you can tell them that the team will look ancient in the modern world if you don't run continuous builds :)
See Continuous Integration: Benefits of Continuous Integration :
On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk. My mind still floats back to that early software project I mentioned in my first paragraph. There they were at the end (they hoped) of a long project, yet with no real idea of how long it would be before they were done.
...
As a result projects with Continuous Integration tend to have dramatically less bugs, both in production and in process. However I should stress that the degree of this benefit is directly tied to how good your test suite is. You should find that it's not too difficult to build a test suite that makes a noticeable difference. Usually, however, it takes a while before a team really gets to the low level of bugs that they have the potential to reach. Getting there means constantly working on and improving your tests.
If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.
From my personal experience, setting up a build server and implementing CI process, really changes the way the project is conducted. The act of producing a build becomes an uneventful everyday thing, because you literally do it every day. This allows you to catch things earlier and be more agile.
Also note that setting up a build server is only one part of the CI process, which includes setting up tests and ultimately automating the deployment (very useful).
Another side-effect benefit that often doesn't get mentioned is that a CI tool like CruiseControl.NET becomes the central issuer of all version numbers for all branches, including internal RCs. You can then require your team to always ship a build that came out of the CI tool, even if it's a custom version of the product.
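To illustrate the "central issuer" idea, the build server can derive every version label from a single counter that only it owns, so no two builds ever share a label. This sketch assumes a made-up major.minor.date.counter scheme, not CruiseControl.NET's actual labelling:

    import datetime
    import itertools

    # One counter owned by the build server; every branch's builds draw from it.
    _build_counter = itertools.count(1)

    def next_version(major: int, minor: int) -> str:
        build = next(_build_counter)
        date = datetime.date.today().strftime("%Y%m%d")
        return f"{major}.{minor}.{date}.{build}"  # e.g. "1.2.20100401.7"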
Early warning of broken or incompatible code means that all conflicts are identified asap, thereby avoiding last minute chaos on the release date.
When your boss says "I need a copy of the latest code ASAP" you can get it to them in < 5 minutes.
You can make the build available to internal testers easily, and when they report a bug they can easily tell you "it was the April 01 nightly build" so that you can work with the same version of the source code.
You'll be sure that you have an automated way to build the code that doesn't rely on libraries / environment variables / scripts / etc. that are set up in developers' environments but hard to replicate by others who want to work with the code.
We have found the automatic VCS tagging of the exact code that produced a version very helpful for going back to a specific version to replicate an issue.
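A sketch of that tagging step, assuming git as the VCS and a build label supplied by the CI job; in TFS or svn the equivalent would be a label or a tagged copy:

    import subprocess

    def tag_build(build_label: str) -> None:
        """Tag the exact commit that produced a build, so the source for any
        shipped version can be checked out again later to replicate an issue."""
        subprocess.run(
            ["git", "tag", "-a", build_label, "-m", f"CI build {build_label}"],
            check=True,
        )
        subprocess.run(["git", "push", "origin", build_label], check=True)

    # e.g. tag_build("build-1.2.20100401.7") right after a green build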
Integration is a blind spot
Integration often doesn't get any respect - "we just throw the binaries into an installer thingie". If this doesn't work, it's the installer's fault.
Stable Build Environment
Prevents excuses such as "this error sometimes occurs when built on Joe's machine". Prevents accidentally using old dependent libraries when building on Mike's machine.
True dogfooding
Your in-house testers and users have a true customer experience. Your developers have a clear reference for reproducing errors.
My manager told us we needed to set one up for two major reasons. Neither was really to do with the final product; both were about making sure that what is checked in and worked on is correct.
First, to clean up DLL hell. When someone builds on their local machine they can be pointing at any reference folder. Lots of projects were getting built with the wrong versions of DLLs because someone hadn't updated their local folder. On the build server everything is always built from the same source; all you have to do is get latest to pick up the latest references.
The second major thing for us was a way to support projects with little knowledge of them. Any developer can grab the source and make a minor fix if required, without hours of setup or hunting for references. We have an overseas team that works primarily on one project, but if there is a rush fix we need to make during US hours, we can get latest and be able to build without worrying about broken source or what didn't get checked in. Gated check-ins save everyone else on your team time.

How do Microsoft (and other software companies with a large installed base) manage patch dependencies?

OS (usually security-based) patches and hotfixes that Microsoft releases to the community normally consist of, in my understanding, a series of updated DLLs or other binaries.
How does Microsoft, and other companies like it, ensure that hotfixes don't clash with each other? Do they always go for a cumulative patch approach, where a single hotfix includes all of the fixes in previous hotfixes? This doesn't seem to be the case, because many hotfixes seem to be focused on fixing specific problems. If they are focused hotfixes, how do they prevent one hotfix from trashing another (e.g. incompatible DLLs being installed alongside each other)?
I have always admired Microsoft's ability to manage this process. The company I work for is much smaller, and when I worked on the patch process a few years ago, we always went for the cumulative approach, where a single patch immediately superseded all previous patches based on that release. This meant that the patches got progressively larger in size, until the next "official" release came out.
What are some good practices for managing patch dependencies?
First off, Microsoft Windows Installer has the ability to patch binaries directly. Given known earlier states of a file, it can bring them to a known current state. We used to do this for our Large Commercial Product, but after a couple of releases, it was taking upwards of 24 hours for our four-way systems to produce a patch - which isn't good when you have (or want to have) nightly builds.
After a while, we opted for cumulative fixes where we merely allowed upgrades. We check that you're at a lower level, and then basically replace the entire product. (We also had the case whereby the second or third "delta" was basically everything anyway.)
On Unix/Linux, we can't use MSWI, obviously, so we provide another installer which basically does the same thing: move all the files out of the way, install as if brand new, and then delete the backup. The reality is, for us in our business, this is sufficient. We haven't gotten any complaints that I'm aware of (and those complaints would hit me pretty quickly based on my current job) with people unhappy enough to actually call in and complain. Mostly, they want to get the newer level with the patches so they can get on with their real business. Oddly enough, their business isn't installing patches.
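A minimal sketch of that cumulative approach: verify the installed level is lower, move the old files out of the way, install as if brand new, then delete the backup. The paths and the tuple representation of version levels are hypothetical:

    import shutil
    from pathlib import Path

    INSTALL_DIR = Path("/opt/myproduct")        # hypothetical install root
    BACKUP_DIR = Path("/opt/myproduct.backup")

    def upgrade(installed: tuple, patch_level: tuple, payload: Path) -> None:
        # Cumulative patch: only ever upgrade from a lower level.
        if installed >= patch_level:
            raise RuntimeError("installed level is not lower; nothing to do")
        shutil.move(str(INSTALL_DIR), str(BACKUP_DIR))  # move files out of the way
        try:
            shutil.copytree(payload, INSTALL_DIR)       # install as if brand new
        except Exception:
            shutil.move(str(BACKUP_DIR), str(INSTALL_DIR))  # roll back on failure
            raise
        shutil.rmtree(BACKUP_DIR)                       # then delete the backup

    # e.g. upgrade((1, 4, 2), (1, 5, 0), Path("/tmp/myproduct-1.5.0"))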
