We're in the process of streamlining/automating build, integration and unit testing as well as deployment.
Our software is developed in Visual Studio, where we use both C# and VB.NET in our projects. A single project can be contained within multiple solutions (e.g. the Utils project is used in both the ProductA and ProductB solutions).
For historical reasons our code repository isn't as well structured as one could have hoped for.
E.g. the Utils project might be located under the ProductA solution (because that's where it was first used), but it was later deemed useful for ProductB development and simply included in the ProductB solution (while still located in a subdirectory of ProductA).
I would like to use continuous integration testing and have set up a CC.NET build server where I intend to use NAnt for creating the actual builds.
Question 1: How should I structure my builds on the build server? Should I instruct CC.NET to retrieve all the projects for ProductB into a single directory tree, e.g. a file structure similar to
-ProductB
--Utils
--BetterUtils
--Data
or should I opt for a file structure similar to this
-ProductA
--Utils
-ProductB
--BetterUtils
--Data
and then just have the NAnt build scripts handle the references? Our references in VS don't match the actual locations in the code repository, so it's not possible today to just check out the ProductB solution and build it straight away (unfortunately). I hope this question makes sense?
Question 2: Is it better to check out all the source code located in different projects into a single folder (whilst retaining some kind of structure) and then build everything at once, or to have multiple projects in CC.NET and let the CC.NET server handle the dependencies?
Example:
Should I have a separate project in CC.NET for monitoring the automated build/test of the Utils project when it's never released on its own? Or should I just build/test it whilst building it as part of ProductB?
I hope the above makes sense and that you can provide me with some arguments for using either option. We're nowhere near an ideal source code repository structure and I would prefer if I can resolve the lack of repository structure on the build server instead of having to clean up the structure of our repository.
Switching away from VSS is (unfortunately) not an option.
Right now our build consists of either deploying via VS ClickOnce or pressing F5, so just getting the build automated would be a huge step up for us.
Thanks
To answer your first question, I would recommend a separate top-level folder for each build project. The problem with having a single tree matching your source repository is that when your build server is trying to run multiple builds at once, one or more will likely fail due to files in use by other processes. Also, you may run into cases where a build script is pulling an older version of the code. In that instance you don't want a different project to accidentally use the incorrect source version.
If your solutions already reference projects from relative paths, you may end up with a structure like this:
-CCNetBuilds
--ProductASource
---Utils
---...
--ProductBSource
---ProductA
----Utils
---ProductB
----BetterUtils
----Data
In this case, the build for Product B contains part of the Product A source, at the same relative path as your solution already expects. This takes a bit more time to set up in CC.Net, but makes it easier to maintain if the developers have their code set up this way on their machines. The same solution files used in development are used by the build server.
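As a rough sketch (the folder paths and VSS project paths below are made up, and element names may differ slightly between CC.Net versions), each build would get its own working directory in ccnet.config:

<project name="ProductA">
  <workingDirectory>C:\CCNetBuilds\ProductASource</workingDirectory>
  <sourcecontrol type="vss">
    <project>$/ProductA</project>
    <workingDirectory>C:\CCNetBuilds\ProductASource</workingDirectory>
  </sourcecontrol>
  <!-- build tasks (e.g. a nant or msbuild task) go here -->
</project>

<project name="ProductB">
  <workingDirectory>C:\CCNetBuilds\ProductBSource</workingDirectory>
  <sourcecontrol type="vss">
    <project>$/</project>
    <workingDirectory>C:\CCNetBuilds\ProductBSource</workingDirectory>
  </sourcecontrol>
  <!-- build tasks go here -->
</project>

Here Product B simply pulls from the repository root so that both ProductA/Utils and ProductB come down at the relative paths the solution expects; a multi source control block that checks out just the two subtrees would be an alternative.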
To answer your second question, I prefer Utilities being its own build. If I have unit tests on my Utilities assembly, I would not want them to run for every single product that uses the Utilities. Also, if you have a separate build for Utilities, you can set a dependency in CC.Net so that Product A and B will not attempt to build if the Utilities build is broken. This provides a bit faster feedback that something is wrong.
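One way to wire up that dependency (project names here are placeholders; check the CC.Net documentation for your version) is a project trigger on the downstream builds, so Product A and B only kick off after a successful Utilities build:

<project name="ProductA">
  <triggers>
    <projectTrigger project="Utilities">
      <triggerStatus>Success</triggerStatus>
    </projectTrigger>
  </triggers>
  <!-- source control and build tasks as before -->
</project>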
Related
When I was learning how to use premake, I remember reading a wiki page or perhaps a forum post somewhere (I wish I could find the original link) suggesting that project files generated by your premake scripts may ultimately be run on different machines than the one you're running premake on. So, I took this idea and designed premake scripts accordingly to replace the existing autotools/VS/Xcode project files in an open-source project I contribute to. This project uses a variety of third-party libraries, some mandatory and some optional.
What I started to discover, through both my own experience and feedback from other developers, is that it's pretty tough to generate generic project files (gmake files, especially) that will work on other machines, especially when it comes to finding the location of system libraries to link against. It also seems like you completely give up the ability to auto-detect the state of the build machine and enable or disable optional build settings accordingly. Instead of errors you could have displayed during configuration in a user-friendly format (missing dependencies, etc.), you have to rely on cryptic compiler errors to tell users that they're missing something.
My question is for those who have experience using premake in a production environment: is it a reasonable goal to be able to transfer premake-generated project files to other machines and still have them work, or should you design your premake scripts around the assumption that users will run premake locally, because build environments are so diverse?
For simple or self-contained projects, certainly—the official Premake releases ship with pre-built project files, for example. But for more complex projects it generally makes more sense to just ship the Premake scripts (i.e. premake5.lua) and ask developers to download and run Premake locally to generate the final project files, for the reasons you specified.
Here's our problem: we are a Flex shop that uses .NET for the server-side logic. We use Subversion for our source control and Subclipse in Flex Builder, but are still quite new to using source control, let alone Subversion. Branching and merging seem to work very well on the .NET side, but we are running into issues on the Flex side because the final SWF is built on our local machines.
The question is, what does a usual workflow look like for working with Flex and SVN? Particularly, how do you branch and where do you build?
Personally, I keep the Flash/Flex source code in a separate SVN repository that is away from what is deployed to any sort of web server. That way I can create branches and tags specifically for my Flash/Flex application. I also tend to publish any SWFs directly into my local copy of the deployment repository. It does not make sense to me to keep a published SWF under version control unless it's part of what is deployed to the server. I don't like to keep committing an SWF into my Flash source code repository because it takes up unnecessary space, and the source code should represent the latest version, not the resulting SWF.
You'd probably want to branch your project alongside your .NET project so your Flex releases stay consistent with your server logic.
We use a directory structure like this
+server-side-app
--trunk
--tags
--branches
+flex-client-app
--trunk
--tags
--branches
I would recommend something like that for yourself.
I agree with Matt W. At AKQA we have SVN locations for our source and assets. We set up an svn:ignore for the bin folders of a project. That way we aren't checking in any SWFs, which means when we update we don't get someone else's SWFs or output files.
A good bet is to look into continuous integration with something like CruiseControl. We build our output on the server, which generates all of the files into one location on the server. There are loads of other benefits to continuous integration, and it's well worth having.
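If you go with CruiseControl.NET, a rough sketch of the server-side Flex build in ccnet.config might look like this (the SVN URL, paths and mxmlc arguments are placeholders):

<project name="flex-client-app">
  <sourcecontrol type="svn">
    <trunkUrl>http://svn.example.com/flex-client-app/trunk</trunkUrl>
    <workingDirectory>C:\Builds\flex-client-app</workingDirectory>
  </sourcecontrol>
  <tasks>
    <exec>
      <executable>C:\flex_sdk\bin\mxmlc.exe</executable>
      <buildArgs>-output C:\Builds\flex-client-app\bin\Main.swf C:\Builds\flex-client-app\src\Main.mxml</buildArgs>
    </exec>
  </tasks>
</project>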
I am making a very large web app (currently at 70 projects and 150k loc but with a lot more to do).
I use FinalBuilder to run build scripts. However, what are the best practices for structuring such a large project? What about build dependencies? What effect does the structure of my projects have on the performance of the code (if any)?
I've seen some previous threads about this but I can't find them. I've seen threads about solutions exceeding 600 projects; for the sake of clear answers, let's imagine this system will grow to that size (I would like to know how to organise a project bigger than what mine ends up being, as it would mean I can organise a smaller solution).
If it matters, the system is mostly in .NET 3.5 (C#, LINQ, SQL Server etc) but will use Python/Erlang too.
I have only 40 projects (but several million LOC), and the main best practices we follow are:
identify dependencies between projects
establish a global list of labels used by all projects wishing to participate in the next release
make sure that every project wanting to publish a label of its own into this global list has made that label from a configuration (list of labels) taken from the global one
register the "official builds" (the ones potentially to be deployed into production) into a repository.
That way:
developers work and compile their code directly against the deliveries of the other projects they depend on (as opposed to downloading the sources of the other projects and rebuilding everything locally).
They only have the right deliveries because they know about their dependencies (both immediate and transitive).
testers can quickly deploy a set of deliveries (from the global list of labels) to perform various tests (non-regression, stress tests, ...)
release management can deploy those deliveries (after a final global build) onto pre-production and production platforms
The idea is to:
not rebuild the delivery at every step
build it only at the development stage (through a common unified building script)
build it again before release (for pre-production and production platforms)
compile and/or test only against those deliveries (and not against sources downloaded and re-compiled for the occasion: when you have more than a few projects, it is just not practical)
Main best-practice:
If your own project works with the deliveries of the other projects (and not with your local rebuild of those other projects), it has a good chance of working in the next steps of the software production life-cycle (test, pre-prod, production).
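As a rough illustration of "compiling against deliveries" (the tool doesn't matter; NAnt is just used for the sketch, and the share path, version and file names are invented), the build copies the published assemblies of the projects it depends on into a local lib folder and references them, instead of rebuilding those projects:

<target name="get-deliveries">
  <copy todir="lib" flatten="true">
    <fileset basedir="\\buildshare\deliveries\OtherProject\1.4.2">
      <include name="*.dll" />
    </fileset>
  </copy>
</target>

<target name="compile" depends="get-deliveries">
  <csc target="library" output="bin\MyProject.dll">
    <sources>
      <include name="src\**\*.cs" />
    </sources>
    <references>
      <include name="lib\*.dll" />
    </references>
  </csc>
</target>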
Have you considered using NMaven and making each of the 70 projects a module? That would allow you to control the building, packaging, versioning, and release of individual modules and the parent project as a whole. It would also help you resolve the dependencies between the different modules, external libraries, and even versions and different lifecycle scopes (for example, you only need NUnit during the testing lifecycle, but don't need to package it in the build).
It might help to explain in greater detail what these projects look like and how they depend on each other.
This is a bit of an open-ended question. Let's start with a basic structure I suggest to my customers as a starting point; inside a branch I have
Build Scripts
Build Dependencies - things to install on a build machine
Libraries - LIB, DLL, ... directly referenced from projects
Documentation Sources - help sources
Sources
Deploy Scripts
then Sources is organized in
Admin - admin console and scripts
Database - schemas, scripts, initial data
Common
Web
NTServices
Services - REST/SOAP services
BizTalk - you name things specific to a product
What are the strategies for versioning of a web application/ website?
I notice that here in the Beta there is an svn revision number in the footer and that's ideal for an application that uses svn over one repository. But what if you use externals or a different source control application that versions separate files?
It seems easy for a Desktop app, but I can't seem to find a suitable way of versioning for an asp.net web application.
NB I'm not sure that I have been totally clear with my question.
What I want to know is how to build and auto increment a version number for an asp.net application.
I'm not interested in how to link it with svn.
I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work.
For my big apps I just use an incrementing version number (1.0, 1.1, ...) that I store in a comment in the main file (usually index.php).
For plain websites I usually just have a revision number (1, 2, 3, ...).
I have a tendency to stick with basic integers at first (1,2,3), moving onto rational numbers (2.1, 3.13) when things get bigger...
Tried using fruit at one point, that works well for a small office. Oh, the 'banana' release? looks over in the corner "yeah... that's getting pretty old now..."
Unfortunately, confusion started to set in when the development team grew, is it an Orange, or Mandarin, or Tangelo? It looks ok. What do you mean "rotten on the inside?"
... but in all honesty: set up a separate repository as a master; development goes on in various repositories. For every scheduled release everything is checked into the master repository so that you can quickly roll back when something goes wrong.
(I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....)
I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the "metarepository" could possibly be used for versioning the complete system.
In another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method.
An inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2).
During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates ("the January 2009 update").
In my automated NAnt build we have a step that generates a lot of code off of the database (using SubSonic) and the code is separated into folders that match the schema name in the database. For example:
/generated-code
  /dbo
    SomeTable.cs
    OtherTable.cs
  /abc
    Customer.cs
    Order.cs
The schema names are there to isolate the generated classes that an app will need. For example, there is an ABC app that will pull in the generated code from this central folder. I'm doing that in a pre-build event, like this:
del /F /Q "$(ProjectDir)Entities\generated\*.cs"
copy "$(ProjectDir)..\..\generated-code\abc\*.cs" "$(ProjectDir)Entities\generated\*.cs"
So on every build, the NAnt script runs the generator, which puts all the code into a central holding place, then it kicks off the solution build... which includes pre-build events for each of the projects that need their generated classes.
So here's the friction I'm seeing:
1) Each new app needs to set up this pre-build event. It kind of sucks to have to do this.
2) On our build server we don't generate code, so I actually have an IF $(ConfigurationName) == "Debug" before each of those commands, so it doesn't happen for release builds
3) Sometimes the commands fail, which fails our local build. It will fail if:
- there is no generated code yet (just setting up a new project, no database yet)
- there is no existing code in the directory (first build)
usually these are minor fixes and we've just hacked our way to getting a new project or a new machine up and running with the build, but it's preventing me from my 1-click-build Nirvana.
So I'd like to hear suggestions on how to improve this where it's a bit more durable. Maybe move the copying of the code into the application folders into the NAnt script? This seems kind of backwards to me, but I'm willing to listen to arguments for it.
OK, fire away :)
How often does your DB schema change? Wouldn't it be possible to generate the database-related files on demand (e.g. when the schema changes) and then check them into your code repository?
If your database schema doesn't change, you can also package the compiled *.cs classes and distribute the archive to other projects.
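If you go the "compile and distribute" route, a small NAnt target along these lines could build the generated classes for one schema into their own assembly and archive it (paths and names are made up for the example):

<target name="package-generated-abc">
  <csc target="library" output="build\Generated.Abc.dll">
    <sources basedir="generated-code\abc">
      <include name="*.cs" />
    </sources>
  </csc>
  <zip zipfile="build\Generated.Abc.zip">
    <fileset basedir="build">
      <include name="Generated.Abc.dll" />
    </fileset>
  </zip>
</target>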
We have two projects in our solution that are built completely out of generated code. Basically, we run the code generator .exe as a post-build step for another project and along with generating the code, it automates the active instance of visual studio to make sure that the generated project is in the solution, that it has all of the generated code files, and that they are checked out/added to TFS as necessary.
It very rarely flakes out during the VS automation stage, and we have to run it "by hand" but that's usually only if you have several instances of VS open with >1 instance of the solution open and it can't figure out which one it's supposed to automate.
Our solution and process are such that the generation should always be done and correct before our auto-build gets to it, so this approach might not work for you.
Yeah I'd like to take VS out of the equation so that a build from VS is just simply compiling the code and references.
I can manage the NAnt script... I'm just wondering if people have advice around having 1 NAnt script, or possibly one for each project which can push the code into the projects rather than being pulled.
This does mean that you have to opt-in to generate code.
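For what it's worth, the "push" variant could be a NAnt target per application that copies that schema's generated code into the app's Entities folder, replacing the pre-build events (folder names below are just illustrative):

<target name="push-generated-abc">
  <delete>
    <fileset basedir="AbcApp\Entities\generated">
      <include name="*.cs" />
    </fileset>
  </delete>
  <copy todir="AbcApp\Entities\generated">
    <fileset basedir="generated-code\abc">
      <include name="*.cs" />
    </fileset>
  </copy>
</target>

Apps that don't define such a target simply never receive generated code, which gives you the opt-in behaviour.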