git build number c# - asp.net

I'm trying to embed git describe-generated version info into AssemblyInfo.cs as well as into a label on an ASP.NET website.
I already tried git-vs-versionino, but it assumes the Git executable is on PATH. However, the default msysgit install on Windows does not set this up; it only provides Git Bash. This caused problems.
Now I am looking for a way to use the libgit2sharp library (for zero external dependencies) as a build number generator. However, this library has no describe command...
Thanks!

git-describe is a UI feature that nobody has implemented in the library or bindings yet (or at least nobody's contributed it), but you can do it yourself fairly easily.
Build a list of the tags and the commits they point to, then walk down the history from HEAD and count how many steps it takes to reach a commit that is in that list. That already gives you the information you need: if the step count is zero, the description is just the tag name; otherwise you append the number of steps and the current commit's abbreviated id.
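A minimal sketch of that walk using LibGit2Sharp might look like the code below. It assumes a reasonably recent version of the library (FriendlyName was called Name in older releases) and simply counts commits reachable from HEAD, so it approximates what git describe does rather than reproducing it exactly:

    using System.Collections.Generic;
    using LibGit2Sharp;

    static class GitVersion
    {
        public static string Describe(string repositoryPath)
        {
            using (var repo = new Repository(repositoryPath))
            {
                // Map commit SHA -> tag name, peeling annotated tags to the commit they point to.
                var taggedCommits = new Dictionary<string, string>();
                foreach (var tag in repo.Tags)
                {
                    var target = tag.Target;
                    var annotation = target as TagAnnotation;
                    if (annotation != null)
                        target = annotation.Target;
                    if (target is Commit)
                        taggedCommits[target.Sha] = tag.FriendlyName;
                }

                // Walk down from HEAD, counting steps until we reach a tagged commit.
                var headSha = repo.Head.Tip.Sha;
                int steps = 0;
                foreach (var commit in repo.Commits) // defaults to the history reachable from HEAD
                {
                    string tagName;
                    if (taggedCommits.TryGetValue(commit.Sha, out tagName))
                    {
                        return steps == 0
                            ? tagName
                            : string.Format("{0}-{1}-g{2}", tagName, steps, headSha.Substring(0, 7));
                    }
                    steps++;
                }

                // No tag anywhere in the history: fall back to the bare abbreviated commit id.
                return headSha.Substring(0, 7);
            }
        }
    }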

There's a work-in-progress libgit2 pull request that proposes an implementation of the git-describe functionality.
See #1066 for more information.
It's not finished yet. Make sure to subscribe to it in order to be notified of its future progress.
Once it's done, it should be quite easy to bind it and make it available through LibGit2Sharp.

Related

do_package() before do_compile() possible?

Is it possible to have do_package() before do_compile() in a bitbake recipe?
In this case, binaries from the previous bitbake run would be deployed. Would bitbake warn about this situation?
I don't know for sure whether it is possible to change the order of some built-in tasks, but given your sentence "in this case, binaries from the previous bitbake run would be deployed", I think this is not the proper way to achieve that.
Yocto/Bitbake supports incremental builds and package dependencies, so if a local artifact is still valid it won't be rebuilt.
In Yocto, you can also set up a shared folder (on a server) to store build artifacts, so that even building "from scratch" from a known state actually results in downloading the artifacts instead of rebuilding them.
You can look at https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#shared-state-cache to get more info on that.
Note: if you want/need to try it anyway, maybe adding a custom task using addtask custompackage before do_compile would be a way to investigate it.
Otherwise, to reuse the built-in names, I would try to deltask those and recreate them with a custom ordering, but once again, that looks risky to me.

Seeing what tickets were closed between two builds

How can I find out which tickets were closed between one build and the previous stable build? I'm trying to design a new build process, so I'm not set on particular tools yet. Which ones would let me see this sort of info in a dashboard, if any? Should I try to do this from a bug tracker, or from a build pipeline such as Jenkins or Bamboo, or somewhere else?
A possible set-up is to:
include the bug-tracker issue ID in your commit messages in your SCM ("[MYPROJECT-12923] add this new option in that nice feature")
launch your build with Jenkins, which retrieves the source code from your SCM. Jenkins will show you a "Recent Changes" label linking to a page where you will find the commits that took place between the last build and the current one. The commit messages will include the issue IDs included in the build.
Note: that may not answer your question perfectly, because those commits could be intermediate ones. It also depends on how granular the commits are.
In our dev team, all commits have the JIRA number in them (this is enforced by a plugin called TicketIt in Stash; there are various other plugins available for other repositories). When we run a build, all commits that are part of the build are aggregated by TeamCity and displayed on a tab called Issues. The solution I am proposing works in TeamCity and Bamboo; I am sure there would be some plugin for Jenkins that does the same.
A hackier way would be to get the start time of the last build (x) and of the current build (y) and fetch all JIRA tickets that were closed during that window via the JIRA API. This might not be a foolproof method if your JIRA tickets are not always closed before a build.
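A rough sketch of that hackier approach, assuming JIRA's standard REST search endpoint and a JQL "status changed to ... during" clause; the base URL, project key and credentials below are placeholders:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    static class ClosedTickets
    {
        // Returns the raw JSON listing issues closed between the two build start times.
        public static async Task<string> IssuesClosedBetween(DateTime from, DateTime to)
        {
            var jql = string.Format(
                "project = MYPROJECT AND status changed to Closed during (\"{0:yyyy-MM-dd HH:mm}\", \"{1:yyyy-MM-dd HH:mm}\")",
                from, to);

            using (var client = new HttpClient())
            {
                // Basic auth; replace with whatever your JIRA instance requires.
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password")));

                var url = "https://jira.example.com/rest/api/2/search?fields=key,summary&jql="
                          + Uri.EscapeDataString(jql);
                return await client.GetStringAsync(url);
            }
        }
    }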

DVCS - How often and when to commit changes

There is another thread here on StackOverflow dealing with how often to commit changes to source control. I want to put that in the context of using a DVCS like git or mercurial.
How often and when do you commit?
Do you only commit changes when they build correctly?
How often and when do you push your changes (or file a pull request or similar)?
How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository, or do you bundle all your changes into a single changeset before pushing?
It depends on the nature of the branch ("line of development") you are working on.
The main advantage of these DVCSs (git or mercurial) is how easily you can:
branch
merge
So:
1/ How often and when do you commit?
2/ Do you only commit changes when they build correctly?
As many times as necessary on a private branch (for instance, whenever it compiles).
The practice of only committing if unit tests pass is a good one, but it should only apply to an "official" (as in "could be published or 'pushed'") branch: on your private branch, you can commit a gazillion times if you need to.
The only thing is: do a rebase --interactive to reorganize the many commits on your private branch before replaying them on your main development branch, where you can pass some tests.
3/ How often and when do you push your changes (or file a pull request or similar)?
Publication is another matter and should be done with a "clean" history (coherent merges, representing content which compiles and passes some tests).
The branch you publish should be one where the history is never rewritten, only updated.
The pace of publication depends on the nature of the remote branch and on who pulls from it. For instance, if it is for another team, you could push quite often. If it is for a system-wide integration testing team, you will push a lot less often.
4/ How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository or do you bundle all your changes into a single changeset before pushing?
See 1. and 2.: patch first in your own private branch, then reorganize your commits on an official (published) patch branch. A single commit is not always the best option if the patch involves several different "activities" (or bug fixes).
I'd commit changes to my DVCS (my own topic or task branch) very, very often; this way I can use it not only for "delivering changes" but also to help me while I work: for example, "why was this working 5 minutes ago and isn't anymore?" If you commit often, you can just run a diff.
Also, a technique I found very good is using it to "self-document refactors". Let me explain: if you have to do a big refactor on a topic branch and then review the change as a whole (having modified a nice set of files), you'd probably get lost. But suppose you check in at every "intermediate step" and document it with a comment; then you're creating a sort of "movie" of your own changes that helps describe what you've done. Huge for reviewers.
I commit a lot: when adding functions or even when reformatting my sources.
I use git and do most of my work on non-shared branches. When I've added enough little changes to count as a block, I use git rebase to collect the smaller related changes into larger chunks and commit those to the main branches.
This way, I have all the advantages of committed code that I can go backwards or forwards in, but I don't have to commit all my mistakes and backtracks to the main history.
How often and when do you commit?
Very frequently. It could be as much as a few times in a hour, if the changes I've made work and make up a nice patch. Or it could be every few hours, depending on whether I am spending longer debugging things, or experimenting with risky changes.
Do you only commit changes when they build correctly?
Yes, almost always. I can't think of a good reason to check in code that doesn't build correctly. There are plenty of reasons you might check in code that doesn't run correctly (in a branch), though.
How often and when do you push your changes (or file a pull request or similar)?
Normally only when a feature is complete and ready for integration testing. That means it has passed unit tests and other relevant tests, and the code/patch is clean enough that I consider it ready for review.
How do you approach developing a complex feature / doing a complex refactoring requiring many places to be touched? Are "private commits" that won't build ok? When finished, do you push them also to the master repository, or do you bundle all your changes into a single changeset before pushing?
I would create a named branch for the feature (which would have traceability across design docs and Issue Tracking system). Commits that don't build would only really be ok on a private branch as an intermediate step, but would still be exceptional. Currently I don't rebase and merge the entire feature branch into a single changeset, though it is something I'm looking at doing in future. Changes are only pushed when appropriate tests are all passed.
I follow this kind of flow
(flow diagram: http://img121.imageshack.us/img121/3272/versioncontrolsysbestpr.png)
(Here is the original image url)
I guess this says pretty much everything. Basically, I would do a check-in after implementing a full working use case / user story (depending on your software process). The most important thing is that you check in things that work, in the sense that they compile. Never break the build!
Doing a commit after each user story/use case has the advantage that you can track past versions better, and undoing changes is much easier.

With subversion or TFS, how would you go about setting up automatic builds?

I need some guidance with regards to naming convention and how this would happen automatically.
I am using /branches /trunk /tags folder structure.
I am using a build app (finalbuilder).
Which tag name would I tell it to pull from (or revision # etc)? Since it is going to change all the time, how do people perform nightly builds? Using the date in the name of the release?
Just use the revision number. Something like CruiseControl.NET should make this pretty easy for you.
Use TeamCity, and set up a separate build for trunk + every branch. We do this and it's very helpful.
I would set up the build server to monitor the /trunk folder and trigger a build whenever anything is committed there. If wanted, you could have the build script end by creating a tag for the build (even though that might be a bit ambitious, depending on how often things are committed to the trunk). When I have done that, I have usually included the Subversion revision number in the tag name, and also in the version number of the files (to the extent that this is applicable).
You should be able to pull right from /trunk (and possibly run other nightly builds from branches that you think are important). It's not particularly useful to do a nightly build from a tag, since tags are generally static. When a build is checked out, you can identify it by the revision number that was checked out; that way, if you ever need to find out what has changed since then, you can diff from that revision (or branch, whatever).
We use Hudson, which checks periodically (at an interval set by you) for changes to whatever SVN path you give it. It can then run a shell script (we are building for iPhone so we use xcodebuild, but you could use whatever is appropriate for ASP.NET). We then upload the results of this to our local server under $REVISION. It would be easy to run automated tests in this as well.
Since you are asking about TFS:
We are using a CommonAssemblyInfo file to increment the version of the DLLs. Nightly builds are typically made from the trunk.
We have a Main folder from which a "Dev" folder is branched for the current release. We make nightly builds from the current Dev branch, and manual, so-called reference builds once we merge Dev stuff back into Main.
Builds are defined via the Build Agent facilities. Custom tasks like incrementing the version number come into play via MSBuild.
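For illustration, the CommonAssemblyInfo approach mentioned above usually boils down to one generated file that is linked into every project; the build rewrites the numbers (the values here are invented) before compiling:

    using System.Reflection;

    // CommonAssemblyInfo.cs - added to each .csproj as a linked file.
    // The build rewrites these values, e.g. from the changeset/revision number.
    [assembly: AssemblyVersion("1.4.0.0")]
    [assembly: AssemblyFileVersion("1.4.0.2725")]
    [assembly: AssemblyInformationalVersion("1.4.0-nightly-2725")]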

Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in local directory, including necessary DLLs needed for project.
Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm only going to copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? I'll note that we're not currently using a Deployment Project; I presume it would remove some of the steps necessary in this build (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First get your code compiling in one step using an automated build tool (e.g. NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
Figure out how you want your automated build to be triggered, whether by hooking it up to CruiseControl or by running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly handled by a call to msbuild, and it also creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
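The nice part is that the override is invisible to application code: values from Local.config are merged into appSettings, so an ordinary lookup (the key below is made up) returns whichever value applies on that machine:

    using System.Configuration;

    static class Settings
    {
        public static string DatabaseServer()
        {
            // Comes from web.config, or from the machine-local Local.config if overridden there.
            return ConfigurationManager.AppSettings["DatabaseServer"];
        }
    }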
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites) you can just create a new .NET class deriving from Task that overrides the Execute method, and then reference it from your build XML file.
There is a pretty good introduction here: introduction
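Sketched out, such a custom task is little more than a class deriving from Microsoft.Build.Utilities.Task (the task name and property below are just an example); you then point a UsingTask element in the build file at the compiled assembly:

    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    // Example custom build step: derive from Task and override Execute.
    public class DeployWebsite : Task
    {
        // Public properties become attributes on the task element in the build XML.
        [Required]
        public string TargetFolder { get; set; }

        public override bool Execute()
        {
            Log.LogMessage(MessageImportance.Normal, "Deploying site to {0}", TargetFolder);
            // ... copy files, run tests, build install packages, etc.
            return true; // returning false fails the build step
        }
    }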
I've only worked on a couple of .NET projects (I've done mostly Java) but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE; it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that as well in whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets the latest "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or, when they do, only checking it in on a random basis. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
