Best way to manage generated code in an automated build?

In my automated NAnt build we have a step that generates a lot of code off of the database (using SubSonic) and the code is separated into folders that match the schema name in the database. For example:
/generated-code
    /dbo
        SomeTable.cs
        OtherTable.cs
    /abc
        Customer.cs
        Order.cs
The schema names are there to isolate the generated classes that an app will need. For example, there is an ABC app, that will pull in the generated code from this central folder. I'm doing that on a pre-build event, like this:
del /F /Q $(ProjectDir)Entities\generated\*.cs
copy $(ProjectDir)..\..\generated-code\abc\*.cs $(ProjectDir)Entities\generated\*.cs
So on every build, the NAnt script runs the generator, which puts all the code into a central holding place, and then kicks off the solution build... which includes pre-build events for each of the projects that need their generated classes.
So here's the friction I'm seeing:
1) Each new app needs to set up this pre-build event. It kind of sucks to have to do this.
2) On our build server we don't generate code, so I actually have an IF $(ConfigurationName) == "Debug" before each of those commands so it doesn't happen for release builds (see the sketch below).
3) Sometimes the commands fail, which fails our local build. It will fail if:
- there is no generated code yet (just setting up a new project, no database yet)
- there is no existing code in the directory (first build)
Usually these are minor fixes, and we've just hacked our way to getting a new project or a new machine up and running with the build, but it's preventing me from reaching my 1-click-build Nirvana.
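For reference, a guarded version of those pre-build commands, with if exist checks added as one way to defend against the missing-directory failures above (paths are the ones from this example; this is a sketch, not a verified recipe):

    IF "$(ConfigurationName)" == "Debug" (
        if exist $(ProjectDir)Entities\generated del /F /Q $(ProjectDir)Entities\generated\*.cs
        if exist $(ProjectDir)..\..\generated-code\abc copy $(ProjectDir)..\..\generated-code\abc\*.cs $(ProjectDir)Entities\generated\*.cs
    )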
So I'd like to hear suggestions on how to make this a bit more durable. Maybe move the copying of the code into the application folders into the NAnt script? This seems kind of backwards to me, but I'm willing to listen to arguments for it.
OK, fire away :)

How often does your DB schema change? Wouldn't it be possible to generate the database-related files on demand (e.g. when the schema changes) and then check them into your code repository?
If your database schema doesn't change, you can also compile the generated *.cs classes, package them up, and distribute the archive to other projects.
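For instance, a minimal NAnt sketch of that packaging step, using NAnt's <zip> task (the target name and paths here are hypothetical):

    <target name="package-generated-abc">
        <!-- bundle the generated classes for the ABC app into one archive -->
        <zip zipfile="dist\generated-abc.zip">
            <fileset basedir="generated-code\abc">
                <include name="**/*.cs" />
            </fileset>
        </zip>
    </target>

Consumers then unpack the archive (or reference an assembly compiled from it) instead of copying loose files around.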

We have two projects in our solution that are built completely out of generated code. Basically, we run the code generator .exe as a post-build step for another project and along with generating the code, it automates the active instance of visual studio to make sure that the generated project is in the solution, that it has all of the generated code files, and that they are checked out/added to TFS as necessary.
It very rarely flakes out during the VS automation stage, and when it does we have to run it "by hand", but that's usually only if you have several instances of VS open with more than one instance of the solution open and it can't figure out which one it's supposed to automate.
Our solution and process are such that the generation should always be done and correct before our auto-build gets to it, so this approach might not work for you.

Yeah I'd like to take VS out of the equation so that a build from VS is just simply compiling the code and references.
I can manage the NAnt script... I'm just wondering if people have advice around having one NAnt script, or possibly one per project, which can push the code into the projects rather than having it pulled.
This does mean that you have to opt-in to generate code.
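If the push approach wins, a per-project NAnt target can stay very small. A sketch using NAnt's <delete> and <copy> tasks, with hypothetical target names and paths:

    <target name="push-generated-abc">
        <!-- start clean so stale generated files don't linger; don't fail on first build -->
        <delete dir="AbcApp\Entities\generated" failonerror="false" />
        <!-- <copy> creates the destination directory if it doesn't exist -->
        <copy todir="AbcApp\Entities\generated">
            <fileset basedir="generated-code\abc">
                <include name="**/*.cs" />
            </fileset>
        </copy>
    </target>

Because the delete is non-fatal and the copy creates its destination, the "no generated code yet" and "first build" failures go away, and a new app opts in by adding one target instead of wiring up a pre-build event.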

Related

Build Process in TFS 2010

I have read articles on build automation and it looks simple, but I am really not sure about parameterized builds. I believe there must be an XML file for that.
When we say a build is automated, I believe it means our code/binaries sit in the test environment, and all application-related settings are also configured with just a few clicks: build and push.
What are the required tools? What is MSBuild?
Please shed some light on it.
MSBuild is an exe that you run from the command line, passing it the project file (.csproj), which is an XML file as you said; it contains all the instructions needed, as you configured them.
I created a series of videos that describe how to create simple MSBuild tasks and how to organize tasks and so on, for more info click on the following link:
MSBuild Tutorial
(To edit the (.csproj) project file inside Visual Studio, you first need to unload the project.)
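As a generic illustration (not from the original answer): a tiny custom target pasted into a .csproj, plus the command line to run a build. The target name here is hypothetical:

    <!-- inside the .csproj; runs after the normal Build target (MSBuild 4.0+) -->
    <Target Name="ReportBuild" AfterTargets="Build">
        <Message Text="Finished building $(MSBuildProjectName)" Importance="high" />
    </Target>

    <!-- from a Visual Studio command prompt:  msbuild MyProject.csproj /t:Build -->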
You are asking about build automation, and by your tags you mention TFS 2010. If so, then you only need a cursory understanding of MSBuild to get started. It is what eventually calls the compiler, but in all honesty you need the layer above it, which is the build templates and definitions, along with how to set up agents and controllers.
Here is a good overview document by Martin Woodward; this should give you enough to figure things out, or to ask more specific questions.

How to resolve website references with MSBuild without building website?

I have a solution that contains a web site and a couple of dependent projects. I need to build this solution with MSBuild. The issue is that I need to build the site itself only to resolve references, and then just throw away the results of that build. I've taken a look at the solution's .metaproj file, but it only contains a target that lets me build the site; I'm using it, as it also resolves the references. It's not a critical issue, but in my case building the site itself takes two minutes out of the total five.
Sure, I can build the dependent projects manually and then just copy the build results... but every time a new reference is added, that would require modifying the build file.
So is there a smart way of doing this?
Just build it. Consider building it in parallel with /m, but other than writing your own half-compiler, you should just let the build system do this for you.
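For the record, the parallel invocation is just (solution name hypothetical):

    msbuild MySolution.sln /m /t:Build

With no number after /m, MSBuild uses as many concurrent build nodes as you have CPU cores.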

Structure of NAnt build scripts and solution structure on build server

We're in the process of streamlining/automating build, integration and unit testing as well as deployment.
Our software is developed in Visual Studio, where we use both C# and VB.NET in our projects. A single project can be contained within multiple solutions (i.e. the Utils project is used in both the ProductA and ProductB solutions).
For historical reasons our code repository isn't as well structured as one could have hoped for.
E.g. the Utils project might be located under the ProductA solution (because that's where it was first used), but it was later deemed useful for ProductB development and merely included in the ProductB solution (while still located in a subdirectory of ProductA).
I would like to use continuous integration testing and have set up a CC.NET build server where I intend to use NAnt for creating the actual builds.
Question 1: How should I structure my builds on the build server? Should I instruct CC.NET to retrieve all the projects for ProductB into a single tree, e.g. a file structure similar to
-ProductB
--Utils
--BetterUtils
--Data
or should I opt for a file structure similar to this
-ProductA
--Utils
-ProductB
--BetterUtils
--Data
and then just have the NAnt build scripts handle the references? Our references in VS don't match the actual locations in the code repository, so it's not possible today to just check out the ProductB solution and build it straight away (unfortunately). I hope this question makes sense.
Question 2: Is it better to check out all the source code located in different projects into a single folder (whilst retaining some kind of structure) and then build everything at once, or to have multiple projects in CC.NET and let the CC.NET server handle the dependencies?
Example:
Should I have a separate project in CC.NET for monitoring the automated build/test of the Utils project when it's never released on its own? Or should I just build/test it whilst building it as part of ProductB?
I hope the above makes sense and that you can provide me with some arguments for either option. We're nowhere near an ideal source code repository structure, and I would prefer to resolve the lack of repository structure on the build server instead of having to clean up our repository.
Switching away from VSS is (unfortunately) not an option.
Right now our build consists of either deploying via VS ClickOnce or pressing F5, so just getting the build automated would be a huge step up for us.
Thanks
To answer your first question, I would recommend a separate top-level folder for each build project. The problem with having a single tree matching your source repository is that when your build server is trying to run multiple builds at once, one or more will likely fail due to files in use by other processes. Also, you may run into cases where a build script is pulling an older version of the code. In that instance you don't want a different project to accidentally use the incorrect source version.
If your solutions already reference projects from relative paths, you may end up with a structure like this:
-CCNetBuilds
--ProductASource
---Utils
---...
--ProductBSource
---ProductA
----Utils
---ProductB
----BetterUtils
----Data
In this case, the build for Product B contains part of the Product A source, at the same relative path as your solution already expects. This takes a bit more time to set up in CC.Net, but makes it easier to maintain if the developers have their code set up this way on their machines. The same solution files used in development are used by the build server.
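A rough sketch of what that checkout mapping could look like in ccnet.config, assuming CC.NET's multi source-control block over the VSS provider (project names and paths are hypothetical, and VSS connection details such as the executable and credentials are omitted):

    <project name="ProductB">
        <sourcecontrol type="multi">
            <sourceControls>
                <vss>
                    <project>$/ProductA/Utils</project>
                    <workingDirectory>C:\CCNetBuilds\ProductBSource\ProductA\Utils</workingDirectory>
                </vss>
                <vss>
                    <project>$/ProductB</project>
                    <workingDirectory>C:\CCNetBuilds\ProductBSource\ProductB</workingDirectory>
                </vss>
            </sourceControls>
        </sourcecontrol>
    </project>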
To answer your second question, I prefer Utilities being its own build. If I have unit tests on my Utilities assembly, I would not want them to run for every single product that uses the Utilities. Also, if you have a separate build for Utilities, you can set a dependency in CC.Net so that Product A and B will not attempt to build if the Utilities build is broken. This provides a bit faster feedback that something is wrong.
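A sketch of that dependency in ccnet.config, using CC.NET's projectTrigger (project names hypothetical), so Product A only builds after a successful Utilities build:

    <project name="ProductA">
        <triggers>
            <!-- fire only when the Utils project last built successfully -->
            <projectTrigger project="Utils">
                <triggerStatus>Success</triggerStatus>
                <innerTrigger type="intervalTrigger" seconds="60" />
            </projectTrigger>
        </triggers>
    </project>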

ASP.NET MVC: How should it work with subversion?

So, I have an ASP.NET MVC app that is being worked on by multiple developers in differing capacities. This is our first time working on an MVC app and my first time working with .NET. Our app does not have a lot of unit tests in it...
The problem we are having is trying to keep each other from overwriting each others changes. For example:
Two developers are both working on the app, and Jon (not his real name) makes a change to a controller, compiles a new dll, and checks in his stuff (both the controller and the dll). Our SVN system automatically updates our DEV server with the changes that Jon just made.
Clyde (also not a real name) makes a change at about the same time, but he did not update his code with Jon's change, and he commits a new dll, thereby "forgetting" about Jon's change.
This happens a lot. The question I'm asking is more of a workflow question - how do we solve this issue? Is it just a matter of Clyde needing to be more careful? Can anybody recommend a decent process for us to use?
You don't check in the DLLs. Exclude the bin folder from Subversion in its entirety. It's the .cs files that matter, and those will be compiled locally on every computer that checks out the code from Subversion. If your deployment script doesn't compile the code but is just a simple xcopy statement, you need to either introduce csc into the script or implement a continuous integration system like TeamCity.
The issue you describe is already handled by Subversion. When Clyde tries to commit his changes, Subversion will detect the conflict and offer him the possibility to merge his changes.
This is exactly the scenario that Subversion and other version control systems are designed to avoid. When Clyde checks in, he should get an "out-of-date" error and his commit should fail, thereby forcing him to update his working copy and get Jon's changes before he can commit his own.
Check out the SVN video tutorials from DimeCasts. These show you best practices, like how to set up your project and how to do the "check-in dance", which will help you avoid the situation you ran into:
http://www.dimecasts.net/Casts/ByTag/SVN
I've used Subversion and .NET applications together. Basically, what we learned was that you should always do an update to your working copy before making a check-in. That way, any changes made by other developers will be brought down to your working copy and any merge conflicts will quickly become known to you. You can then fix the merge conflicts, check in, and continue to work. If the second developer then updates their working copy, the first developer's merged code will be brought down and the process will be repeated.
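The "check-in dance" from the working copy is nothing more exotic than (commit message is a placeholder):

    svn update
    (fix any merge conflicts, rebuild, run the tests)
    svn commit -m "describe the change"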
Hope this helps.
Ignore the bin and obj folders (note that we have both bin and Bin, hence the patterns below) by using svn:ignore, e.g. via svn propedit svn:ignore . with:
[bB]in
[oO]bj
*.suo

What's the best way to manage storing builds in source control?

I'm using Perforce, if that changes the tune of the answers at all.
I'd like to implement a build process that, when a solution is built in a "release" mode, tags the entire source tree with a label and pushes the output of the build (DLLs, webpages) to a /build/release directory in source control. This directory should always contain the latest complete build, nothing less and nothing more, so I can yank that directory to production servers in its entirety and it's ready to go.
Now say I had a DLL in a previous release that the new build is not supposed to include. Does this mean the best practice for updating that /build/release folder is to check the entire thing out, delete everything in it, add the new build files, and sync it? Sounds like an obvious answer, but I want to make sure I'm not missing some other voodoo that might be a better way to do it.
I think you are missing the simple voodoo :) You should consider just using a plain old file system for your build drops. Source control is designed to manage change, versioning, and collaboration, and there really is no need for any of this for builds. The whole point of a build system is to be able to reproduce the source code and create the application at a moment's notice, so I would focus on being able to do that rather than relying on permanent storage of the output files.

Be sure to back up the build-drop folder structure just as you would the source control database. Use a folder naming scheme that includes the build number in the folder name. I would keep at least the last several builds, because there are times when QA wants to restore an old build to test, in order to compare features or resurrect a bug. Using this system, every build gets a new folder, so you don't have to worry about deleting old files.
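For example, a drop layout along these lines (server and product names are hypothetical) keeps every build addressable by its number:

    \\buildserver\drops\ProductA\1.0.0.123\
    \\buildserver\drops\ProductA\1.0.0.124\
    \\buildserver\drops\ProductA\1.0.0.125\   <-- latest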
I'd say "Yes" - you should start with a blank folder structure for your builds (regardless of the source control system).
