My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project will have no impact on the consuming application, as a stable version of the binary will be referenced rather than an in-flux version. It would be the decision of the project lead whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is referencing the binary files, to gain all the advantages you note, but having the binaries built by a common build system, with the source code in a common location accessible from all developer machines (at least while they're on the network at work), so that any debugging can in fact dive into library code if necessary.
However, on the same note, we've also tagged a lot of the base classes with the appropriate attributes to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would otherwise be drowned out by code from the base libraries. This way, when you hit the Step Into shortcut on a call into a library class, you resurface in the next piece of code at your current level instead of having to wade through tons of library code.
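For what it's worth, the kind of tagging I mean is the standard System.Diagnostics debugger attributes; the class and method below are invented purely to illustrate (the namespace is borrowed from your example):

    using System.Diagnostics;

    namespace Company.Common.Serialization
    {
        // With these attributes applied, Step Into from consuming code jumps
        // over the library member and lands back at the caller's level.
        [DebuggerNonUserCode]
        public class XmlSerializationHelper
        {
            [DebuggerStepThrough]
            public string Serialize(object instance)
            {
                // ...proven, stable library code the debugger should skip...
                return instance == null ? string.Empty : instance.ToString();
            }
        }
    }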
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file, which contains all the projects and basically just everything, ReSharper 4 seems to have some kind of coronary, as Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as 3rd-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whoever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (.pdb) files with each build of the library and make those available as well. The other option is to actually create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
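As a sketch of what I mean by producing the symbols with each build, assuming a classic .csproj, the Release configuration of the library can be told to keep emitting .pdb files (these are the standard MSBuild properties; the values are just an example):

    <!-- In the class library's .csproj, Release configuration -->
    <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
      <!-- keep generating .pdb files even for the optimized build -->
      <DebugSymbols>true</DebugSymbols>
      <DebugType>pdbonly</DebugType>
    </PropertyGroup>

The resulting .pdb files then get copied to the central location, or pushed into the symbol store, alongside the binaries.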
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want the library in your solution, or you might eventually split your solution, send all library output to a common bin directory and reference the binaries there.
I have done this in order to allow developers to open a tight solution that only has the Domain, test, and Web projects. Our Windows services, Silverlight stuff, and web control libraries are in separate solutions that include the projects you need when working on those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application, by definition, is closely related to the app, however you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals further spaced apart than the normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion
In short, I separate it by concept.
We have 2 solutions.
SolutionA is an internal solution where we put reusable code shared across our products.
For the sake of the question, it has only two projects: NugetProjectA, and NugetProjectB, which has a project reference to NugetProjectA.
SolutionB is a solution that references SolutionA's projects as NuGet package references.
The thing that troubles us is this workflow:
add a new method in NugetProjectA
add a new method in NugetProjectB that uses the previous method
publish a new version of NugetProjectB
update the NuGet reference in the consuming project of SolutionB
call the newly added NugetProjectB method from that project
Since we didn't publish the updated version of NugetProjectA, the last step will fail.
This seems like an easy problem to solve, but imagine it with many more projects in SolutionB and many more in SolutionA.
Your question is vague and open-ended, so there isn't a simple, concise answer. Honestly it could easily be a multi-post blog series or even a short book. I'll try to give some general suggestions, but I'm not going to go into a lot of details to avoid making this answer too long.
Mono-repo vs multiple repos
Just like the microservice craze from a few years ago, you should first ask yourself if you need it. Having all your source code in a single repository and solution might make it feel "legacy", and it sure seems nicer to have a 3 minute build of a component rather than a 30 minute build of a whole system when checking in a 1-line bug fix. But are the problems you listed in your question worth the benefit of a shorter CI build?
On the other extreme, Google may famously run almost everything out of a single repository, but they have teams of people doing nothing more than managing their mono-repo and build system, because a large number of developers working on a single repo have a different set of problems. These are engineers not working on customer applications; if their work makes other teams more productive, then it can be worthwhile.
You need to figure out what's best for you, but given the problem you described in the question, maybe you have too many repos/solutions and your processes could be faster with fewer mistakes if you consolidate a little.
Automation
The next best thing is to just use good engineering practices. Automate as much as possible to reduce the risk of human mistakes, including automating processes that validate that manual processes are followed correctly.
Have a CI or CD pipeline push packages, so you can't forget to push dependencies, as in your example.
There are tools that will automatically generate a version number from the git history based on the number of commits since the versioning config file was checked in. This prevents you from forgetting to increment the package version when you check in a change to the package.
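As one example of such a tool (an assumption on my part about which you'd pick), Nerdbank.GitVersioning derives the build number from the git height since the version field in its version.json was last changed; a minimal config checked into the repository root looks roughly like this:

    {
      "version": "1.2"
    }

Each build then stamps something like 1.2.x, where x comes from the commit count, so nobody has to remember to bump the package version by hand.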
Have tests to make sure the package works before pushing it. You could make it as simple as making sure that dependencies exist, but you could also go further and have test programs run that use the package to ensure that backwards compatibility isn't broken and that new features actually work as expected. Have these tests run before pushing the package, so if the tests fail, you don't push a bad package.
You can have a pipeline that will automatically create pull requests on solutions that use a package when a new version of one of your packages is published. You can even automatically merge them, but you'll want to make sure you have really good tests to avoid a buggy package cascading into a huge mess, particularly if you do automatic deployments on successful builds.
Be creative and think about other ways you can automate your processes to make things easier for yourself and your team.
But be pragmatic about what you automate. There's no point spending a week full time to automate something that only takes you 5 minutes once a month to do manually. But if the manual process sometimes goes wrong and causes hours or days of effort to fix, then that makes automating the process more worthwhile.
Use modern features
The .NET ecosystem has changed a lot in the last 2-3 years since .NET Core was announced. Packing with MSBuild (either dotnet pack or msbuild -t:pack) is now an easier way to create packages than writing .nuspec files and making sure you do the right things to get project dependencies packed as NuGet dependencies, get all files in the right places, etc. If your class library uses SDK-style projects, there's nothing extra to do. If your project is a traditional project, you'll need to use PackageReference for your NuGet dependencies (or specify <ProjectStyle>PackageReference</ProjectStyle> as an MSBuild property in your project file), and then reference the NuGet.Build.Tasks.Pack package.
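For example, a minimal SDK-style class library needs nothing beyond its normal project file to be packable (project and package names below are made up, borrowing from the question):

    <!-- Company.Common.Serialization.csproj: `dotnet pack -c Release` produces the .nupkg -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <PackageId>Company.Common.Serialization</PackageId>
        <Version>1.0.0</Version>
      </PropertyGroup>
      <ItemGroup>
        <!-- ProjectReferences become NuGet dependencies in the generated package -->
        <ProjectReference Include="..\Company.Common\Company.Common.csproj" />
      </ItemGroup>
    </Project>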
Version the application, not each package
Like the mono-repo vs multiple repos point, consider versioning all packages in the application with a single version number, rather than versioning each package individually. Yes, this means you'll sometimes (or maybe often) publish a new version of a package that doesn't have any code changes from the previous version, but it simplifies the release considerably. Coupled with packing with MSBuild in the section above, you can create a Directory.Build.props file in your repository root and set the <Version> property to your app version, and all projects in the repo will have the same version. So, when you're ready to release, bump the version in a single file and every project and every NuGet package will have the same version.
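A sketch of that Directory.Build.props (the version number is obviously just an example):

    <!-- Directory.Build.props at the repository root: every project inherits these properties -->
    <Project>
      <PropertyGroup>
        <Version>2.3.0</Version>
      </PropertyGroup>
    </Project>

Bump the single <Version> value when you release, and every project and package in the repo picks it up.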
Summary
In an ideal world each component would be reusable in different applications, live in a separate source code repository, and be individually versioned using semantic versioning. But in the real world this adds a lot of development-time complexity. Your customers may be happier to get bug fixes and new features more quickly, even if the version numbers of packages are less meaningful. So, make data-driven decisions. If you're frequently having dependency version problems, reduce your dependencies so there are fewer things that can go wrong.
Don't get me wrong, there are many good reasons to have multiple projects, multiple solutions, multiple repositories. Just make sure that the reason you're doing them is because it helps your team/company be more productive, not for idealistic reasons that are slowing you down or causing bugs.
I am adding a TypeScript project to a VS 2015 MVC 5 project (that makes it ASP.NET 4, not ASP.NET 5 or ASP.NET 6 code; only the MVC is version 5). This is the sole target for all aspects of my question; I cannot use generalized or theoretical guidance on TypeScript, Node.js, module loaders, etc.
The problem is simpler with ASP.NET Core, but that's not what I'm facing. All the usual sources of examples and guidance either avoid ASP.NET 4 MVC 5 or provide scraps, because it is hard, and no one will state exactly how hard, or precisely what the obstacles are.
Worse, the TypeScript documentation is like open-source documentation generally: you can only get one issue, one step deep. This produces a research workflow consisting of endless issue-tree recursion.
I understand opinions; I even have one. But I'm looking for the experiential answer: what is the one combination that has proven to work for a production team?
So here are the specific items that need to be addressed and made to work within the confines of a working, medium-sized ASP.NET 4 MVC 5 LOB app:
Visual Studio's version of TypeScript. This is an installation issue (most simply using Node) and the Tools > Options > TypeScript settings have to match.
Browser-style testing (typically a manual TDD workflow) or Node.js testing (automated). This has to be chosen up front to prevent more issue-tree recursion. We are going with browser-based: PhantomJS using Wallaby.js.
NPM @types/library-name: supposed to fill a node_modules folder with both library-name and library-name.d.ts based only on a package.json with @types references. But it actually requires the package.json to hold a reference to both @types/library-name and library-name to work in my VS 2015 ENT v3 and ASP.NET 4 MVC 5 project. And all the versions specified then require manual correction, and even then the version look-up process is a little suspicious. This @types process may not be the way to go with ASP.NET 4 MVC 5, but I can't tell what the alternative might be. @types is currently the only recommended option for TypeScript.
Which version of ECMAScript: ES6 is apparently too far ahead. ES2015 is likely, but this (maybe) appears to have relationships to several of the other issues. Supposedly, these designations are the same, but there are two places they can be set. I've chosen ES2015 in Tools > Options > TypeScript. But getting any of these (now three) settings wrong could be a problem.
Module system: CommonJS is for Node and automated testing, and VS development testing is automated only for server-side code, while VS UI tests are a manual process. So AMD/RequireJS is probably an option for VS, but it adds its own workflow, maintenance, and considerations that are really hard to get right in ASP.NET. Using ASP.NET bundling and triple-slash references (dependable) might work, but after you have put the libraries in node_modules, you would want to use the full path into node_modules in the file-name slug in an import statement. This is all very clumsy and involves the most guesswork. But solving this whole item might be the 'key' to the overall question.
There are probably a lot of other, smaller issues. But someone who has done this will have solved all the mentioned items and the others as well.
What I'm looking for is all the settings across all these issues in detail based on a working Typescript app in ASP.NET 4 MVC 5 implementation for browser-based unit/behavior tests in VS 2015. Those who have done it will understand.
Thanks very much for your consideration.
What you're missing is separation of concerns. In spite of the initial benefit of such starter templates, they start to cause incidental dependencies and complicate the mental model. It's much easier to have your front end in a separate project.
Regardless:
Visual Studio's version of typescript.
Always use the very latest available. This controls the version of TypeScript which powers the IDE. You will probably end up compiling in a separate process or in the browser during development. Again you will want to use the latest but it will likely be installed with a different package manager.
Browser-style testing (typically manual TDD workflow) or node.js
testing (automated). This has to be chosen up front to prevent more
issue-tree-recursion.
Firstly, I definitely agree with the importance of choosing up front, but it is still possible, just unpleasant, to add tests to an existing project.
TDD workflows involve automated testing as they rely on rapid feedback. This is orthogonal to whether you run your tests in the browser or using NodeJS.
You should use whichever approach makes the most sense for your application and that may be a mix of both.
Since you are writing a frontend JavaScript application you will likely want to run some tests in the browser. However, as Uncle Bob (Robert C. Martin) has stated, views should be dumb and require little testing. My interpretation of this is that we should not spend too much time testing things like Angular or React components to ensure that they render correctly, and instead focus on testing behavioral elements of the system such as services and plain old functions.
That said, you may well want to run tests of your client side services against an actual browser runtime, as opposed to just Node.js, and that is reasonable.
There are a number of testing libraries to help you with this. I do not have a specific recommendation, except to say that you should find a reliable test runner and a simple assertion library. Tried-and-true testing libraries like QUnit and Tape are examples of solid options.
One last note: it is important not to confuse the concept of integration testing with running tests in a web browser; it's perfectly valid to run TDD-style tests, which implies unit tests, in a web browser.
NPM @types/library-name: supposed to fill a node_modules folder with
both library-name and library-name.d.ts, but requires the package.json
to hold a reference for both the @types/library-name and the
library-name to work in my VS 2015 ENT v3 and asp.net 4 mvc 5 project.
Simply put, this goes back to decoupling your front end from your back end. Visual Studio and certainly ASP.NET have nothing to do with versioning your types packages.
If a package comes with its own type declarations then you don't need to install an auxiliary types package otherwise you do.
Either way, install JavaScript and TypeScript dependencies using a JavaScript-oriented package manager (such as NPM, JSPM, or Yarn).
Do not use NuGet for these!
As you suggest, there are versioning issues; it's currently a difficult problem in TypeScript. However, once again, it has nothing to do with ASP.NET or Visual Studio.
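For illustration, a package.json that pairs a library with its types package might look roughly like this (lodash is just a stand-in library; version ranges are illustrative):

    {
      "dependencies": {
        "lodash": "^4.17.4"
      },
      "devDependencies": {
        "@types/lodash": "^4.14.0",
        "typescript": "^2.2.0"
      }
    }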
Which version of ECMAScript: es6 is apparently too far ahead. es2015
is likely, but this appears to have (maybe) relationships to several
of the other issues.
ES6 is the same as ES2015, the latter being the name under which the former was ultimately released. ECMAScript now follows a yearly cadence, roughly, with ES2017 just around the corner.
The nice thing about having a transpiler, such as TypeScript, is that you can use the latest features from es2017 and still target es5 for emit and you'll be fine.
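In tsconfig.json terms that combination is just a target plus a lib setting, something like this (a sketch, not a complete config):

    {
      "compilerOptions": {
        "target": "es5",
        "lib": ["es2017", "dom"],
        "sourceMap": true
      }
    }

Note that down-level emit covers syntax; brand-new runtime APIs may still need polyfills when running as ES5.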
Module system: CommonJS is for NodeJS and automated testing, and VS
development testing is automated only for server-side, and VS UI tests
are a manual process. So AMD/UMD require JS is probably the option for
VS, but it adds its own workflow and maintenance and considerations.
Using triple-slash references (dependable) might work, but after you
have put your/their libraries in node-modules you would want to use
the full path into node-modules in the file name slug in an import
statement. (solving this whole item might be the 'key' for the overall
question).
This is a very complex subject and probably the only one of your questions that you really need to spend a lot of time considering. As I said earlier using NodeJS or not is orthogonal to automated testing. But if you're targeting NodeJS natively with your test code then you will need to use CommonJS output.
For the actual application code, the choice has nothing at all remotely to do with whether or not you are using Visual Studio, I'm sorry for reiterating this but it really is important that you separate these ideas.
The question of which module format to use for your front end application code is a very important and contentious one.
Triple /// references are not a module format but rather a way of declaring the dependencies between global variables that are declared and referenced across multiple files.
They do not scale well, working acceptably when you have only a handful of files.
Triple /// references should not be used. They are not a modularity mechanism and their use is completely different from using any of the module systems/module formats you mention, including CommonJS.
Never combine them with a module system, which is what you would have to do in order to run your tests under NodeJS or load your app with RequireJS or anything else.
RequireJS is an excellent option, which would imply AMD modules, as you say. RequireJS does not require any use of triple-slash references. In fact, they should be avoided like the plague when using this format or any other module format!
I recommend strongly against using UMD modules. Isomorphic JavaScript is a problematic idea, and it offers you no benefits since you are creating a browser application with a .NET backend.
Many developers actually do use CommonJS modules in a browser. This requires bundling them continuously, using tools such as Webpack. This approach has advantages and disadvantages. The primary advantages are the ability to lean on existing NodeJS JavaScript server-side tools, such as npm, by way of Webpack or Browserify. This may not sound like a big advantage but the amount of rich tooling available for CommonJS modules is nothing to scoff at, making it a strong option.
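A minimal webpack.config.js for that approach might look like this (webpack 2+ with ts-loader assumed; the paths and file names are made up):

    // webpack.config.js: bundle CommonJS/ES modules written in TypeScript for the browser
    const path = require('path');

    module.exports = {
      entry: './Scripts/app/main.ts',
      output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'Scripts/dist')
      },
      resolve: {
        extensions: ['.ts', '.js']
      },
      module: {
        rules: [
          { test: /\.ts$/, loader: 'ts-loader' }
        ]
      }
    };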
Consider using the System module format and the SystemJS loader via jspm to both manage your packages and load your code. With this approach, you gain the advantages of RequireJS and are able to run your tests under Node.js and the browser using jspm run, without needing to switch target formats or bundle your code just to test it. There's also no need to bundle your code during development, although this is supported. More importantly, you gain the advantage of writing future-compatible code, as it offers the only module format and loader which correctly models the semantics that ES modules will eventually have when implemented natively in browsers. JSPM has first-class support for TypeScript, Babel, and Traceur.
For posterity here is the description of the System module format taken from the link above:
System.register can be considered as a new module format designed to support the exact semantics of ES6 modules within ES5. It is a format that was developed out of collaboration and is supported as a module output in Traceur (as instantiate), Babel and TypeScript (as system). All dynamic binding and circular reference behaviors supported by ES6 modules are supported by this format. In this way it acts as a safe and comprehensive target format for the polyfill path into ES6 modules.
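In practice, the workflow is roughly: jspm install npm:lodash (a stand-in package) records the mapping in jspm's config.js, and the SystemJS loader then resolves module names at runtime, for example:

    // Load the application's entry module through the SystemJS loader;
    // 'app/main' assumes your own code is mapped under an 'app' prefix in config.js.
    System.import('app/main')
      .then(function () { console.log('app started'); })
      .catch(function (err) { console.error(err); });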
Disclaimer:
I am a member of the JSPM GitHub organization, playing a role in maintaining the registry and have made very minor contributions to the jspm cli.
In our build system, once a build is over, the assemblies are also checked in. But when we were moving to UCM, the architects were divided on this. Some people supported checking in the compiled assemblies and MSIs, and some opposed it.
When we did check them in, we simply used symlinks, and that gave us a great advantage. Moreover, when the check-in was done, entries in the bin and release folders were symlinked instead of copied. It helped us a lot: every day people were able to work with the latest assemblies checked in by the nightly build. Now they are not able to do that, and they want me to copy the nightly build DLLs to some common place.
On the other hand, due to the everyday check-ins, our repository grows humongous.
I don't know which is the best option.
Can you share your thoughts about which method is better? Is it better to check in assemblies in UCM/ClearCase or not?
As a practice, build outputs should not be kept under source control, but you do have to keep them in a common place until they expire. The philosophy behind this practice is:
The repository size gets larger as you add binaries to it.
The old versions of an assembly produced in a nightly build (say, from two years ago) are useless. On the other hand, old versions of the source code and its history are useful all the time.
In addition to the build results, software products usually depend on third-party components. These third-party components evolve and release newer versions of their own, so even if you keep build results in source control, you still have to keep the right versions of the third-party components somewhere else.
Another reason against checking in assemblies in ClearCase is the lack of any clean-up feature: you cannot easily rmver versions you no longer need without potentially compromising the VOB repository.
This is especially true in UCM, where metadata and hyperlinks are added to versions, making the removal of one quite dangerous for the integrity of other objects (like a UCM baseline) depending on it.
For the other, more general, reasons for not versioning binaries, see "Is it good practice to store framework runtimes under source control?" and hsalimi's answer.
As recently as several years ago, the developers actually made the builds that went to clients. This was obviously a disaster for reasons too numerous to list.
Then when we started to learn the errors of our ways, we looked for a way to auto-build the entire application on a dedicated build machine. The culture at that time was very averse to bringing in outside tools, so we built our own autobuild system by writing a VB app.
This worked fine for a while, until the project's structure started to change, new projects were added, and we needed to build the application in different ways. Then the weaknesses of our hand-rolled autobuilder became apparent and, over time, increasingly onerous. This disease has progressed now to the point where QA (who owns our build process) can't even maintain the autobuilder, because it requires more and more programming skill. Every time we add a project or change something in an existing project, it consumes more developer time just to make it work. There have been days when we were unable to produce a build because the system was broken.
I'm now in a position where I can change this process, and I'm looking to scrap the entire system and put something else in its place. My goals are:
Have an autobuild system that can run with zero human interaction at a specific time every day. It should be able to gather all the source code, compile all the apps, create the setups, put the finished products on a network share, and possibly trigger the automated testing system to kick in (we use QTP).
The autobuild system should be flexible enough to easily adapt to changes in the project without requiring a major overhaul.
It should be simple enough so that QA can own the system and not require developer resources to make changes to how builds are made.
What are your experiences? Can you recommend an autobuild system? Should I have different goals?
I'm currently using CruiseControl integrated with Ant to control project builds. This allows flexibility of build schedules and means you can automate the entire build process fairly easily using Ant scripts. Also, during defect fixing periods you can have CruiseControl set up to watch for source control submissions instead of time periods and build when these occur. This allows developers very quick feedback on defect fixes.
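To give a flavour of what CruiseControl ends up invoking, the Ant side can stay very small; here is a sketch of a build.xml (project name and paths are invented):

    <!-- build.xml: the target CruiseControl triggers on a schedule or on check-in -->
    <project name="myapp" default="build" basedir=".">
      <target name="build">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
        <jar destfile="build/myapp.jar" basedir="build/classes"/>
      </target>
    </project>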
I use FinalBuilder and FinalBuilder Server for nightly builds. It's a bit buggy at times, but if you think it through, it's quite easy to create extensible projects that can build X project type, build its database from change scripts, and deploy it to a testing server.
It can also handle all kinds of weird and wonderful things like zipping a nightly build and uploading it to an FTP server or creating ISO images automatically.
Definitely look into MSBuild if you're on the Microsoft stack.
Joel is always going on and on about how great FinalBuilder is, so that might be worth a look as well.
We just migrated from a hand-rolled set of Perl scripts to a Buildbot setup. I found it because that's what Google's using for Chrome.
You can do nightlies, or it can integrate with source control to do an isolated test build whenever anybody does a checkin, or a variety of other things. It's also parallel; you can have more than one machine in the build farm, either for specialized duties or just to handle more load.
The entire system is written in Python, so it's platform-agnostic, which is important if you need to do builds on more than one platform. It can do anything you can do from the command line; we have it calling MSBuild for user-mode components, a DDK build for kernel-mode pieces, and running products for unit test builds.
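Buildbot's configuration is itself a Python script (master.cfg), so wiring in those command-line builds is just a few steps. A rough sketch against a current Buildbot API (the repository URL, solution name, and script name are placeholders):

    # master.cfg fragment: check out the source, then drive the same command-line builds
    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    factory.addStep(steps.Git(repourl='https://example.com/scm/product.git', mode='incremental'))
    factory.addStep(steps.ShellCommand(command=['msbuild', 'Product.sln', '/p:Configuration=Release']))
    factory.addStep(steps.ShellCommand(command=['run_unit_tests.cmd']))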
Out of the box it supports most OSS source control tools, but if you're using TFS or something else you may need to modify the package that you install on the slave machines.
I think you are on the right track here.
Whoever looks after your automated build process needs to have a fundamental understanding of how your solution fits together. This doesn't necessarily mean knowing how to write code or architect solutions, but they will require a solid understanding of how the solution compiles, packages itself etc.
You might need to share responsibility for builds between people or teams to accomplish this. I'd say that a daily build is a "team responsibility".
I'd look at establishing a baseline build configuration which can be extended for "special use" builds (besides just building a release version), e.g. internationalized releases, FxCop/quality tools configuration, build + run unit tests, continuous integration builds, a build config to run on developer workstations, etc.
Instead, I'd aim to achieve the following:
Automatic versioning, signing, etc.
Ability to produce verbose output (logging) to help debug build breaks
On that point, it should handle errors properly, capture as much information as possible, and log it properly
Consistency - It should work the same way each time to produce repeatable outcomes
Run in a clean, limited access environment
Well commented/documented so that it can be understood by new staff, etc.
Option to generate release notes, compile metrics, produce reports (if this option is available)
Ability to deploy to multiple environments
Support different ways to obtain source code from source control, e.g. by changeset, label, date, etc
As for tool recommendations, I've used FinalBuilder, Visual Build Pro, MSBuild/Team Build, NAnt, CruiseControl, and CIFactory, plus good old-fashioned batch files.
Each has its pros and cons. I'm not going to make a recommendation, except to say that the products with decent UI support were a little bit easier to work with, but at times were far less powerful. If you're working with Visual Studio, MSBuild is very powerful, but it has a somewhat steep learning curve.
As for tools delivered with MS Visual Studio, you might want to use MSBuild. The additional Community Tasks toolset for MSBuild will even give you the possibility to check out code from Subversion and zip output.
We're using it successfully in our company. Our projects consist of several solutions with 100+ subprojects. Works like a charm.
Visual Build Pro is nice, if your build machines are Windows. I think this would fill the requirement you have about QA owning the system. But don't get me wrong, it's pretty powerful.
We use CruiseControl.NET and UppercuT (which uses NAnt) to do this. UppercuT uses conventions for building so it makes it really easy for someone to get started by answering three questions (What is the solution named? What is the path to source control? What is your company's name?) and you are building.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
We use the Hudson build bot for building a big Java web app from Ant build scripts. Hudson is pretty sweet for our purposes. It has a master/slave setup, so builds can be done concurrently (on a timer or on demand). Slave nodes can be any OS/hardware combo provided they have the needed build tools already on them and are on the network (and won't crash every 10 min).
There's a full web-based interface, including live console output, change logs, and artifacts from the build, available across the network, including for previous builds (if successful). Awesomesauce!
One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions on their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seem to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time the patches I see can't possibly replace entire applications, or even small files that are used within applications. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
This is usually implemented using binary diff algorithms -- diff the most recently released version against the new code. If the user's running the most recent version, you only need to apply the diff. Works particularly well against software, because compiled code is usually pretty similar between versions. Of course, if the user's not running the most recent version you'll have to download the whole thing anyway.
There are a couple of implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic, it shouldn't be too difficult to port them if you feel like a project.
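To make the workflow concrete, the bsdiff tools are driven like this (file names are made up): the vendor generates the delta once, and the updater applies it on each machine that already has the old binary.

    # on the build/release machine: produce the delta between two releases
    bsdiff app-1.0.exe app-1.1.exe app-1.0-to-1.1.patch

    # on the user's machine: reconstruct the new binary from the old one plus the patch
    bspatch app-1.0.exe app-1.1.exe app-1.0-to-1.1.patch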
If you are talking about patching Windows applications, then what you want to look at are .MSP files. These are similar to an .MSI but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application install. Typically these are updated DLLs and resource files, but it could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then if the user selects "Repair" from Add/Remove Programs, the updated patch files are used as well.
I'm thinking that the binary diff method discussed by John Millikin must be used in other operating systems. Although you could make it work in Windows, it would be somewhat alien.