Given code that needs to behave slightly differently on different production servers, you can create release configurations in ASP.NET with different compilation symbols so that the compiler builds the code in the desired way under each release configuration. However, the web.config file for each of these production deployments might need to be identical or nearly identical. Even so, web.config will vary as you deploy to dev/QA/staging servers, which is why we have the feature of transforms based on combinations of publication profile (often used to distinguish between dev/QA/prod deployments) and release configuration.
How can I use the same base transform for all of my production publication profiles when I need multiple production publication profiles? In my example above, I need multiple production publication profiles because I need to deploy with different compiler symbols even though web.config might be identical or very similar. If I don't reuse a base transform, I end up duplicating the production publication profile transforms across each of the different release configurations.
For example, suppose I am setting up a website to work differently when accessed from within a network vs. called from outside it, and I'll use two separate web servers for that variation in the deployment of the website. Where before I had the release configurations Debug and Release, I now additionally have Internal Debug and Internal Release, which gives me four release configuration transforms (Web.Debug.config, Web.Internal Debug.config, ...). For publication profiles, I have ProductionExternal and ProductionInternal. Everything up to this point seems fine. However, I then also have to create Web.ProductionExternal.config and Web.ProductionInternal.config, which set up connections to databases and the like and are identical or nearly identical. I'd prefer to be able to say inside Web.ProductionInternal.config "use everything in Web.ProductionExternal.config plus XYZ modifications" so that when something needs to change, I only have one place to update.
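To make the duplication concrete, here is a rough, invented sketch of the kind of fragment that currently has to live, essentially verbatim, in both Web.ProductionExternal.config and Web.ProductionInternal.config (the connection name and server are made up):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- identical in both production profile transforms today -->
    <add name="MainDb"
         connectionString="Data Source=prod-sql;Initial Catalog=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

What I'm after is a way for Web.ProductionInternal.config to inherit this and only state its deltas.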
This problem gets worse as you look across dev/QA deployments, or if there are more publication profiles that are similar.
https://www.codeproject.com/Questions/553960/Chainedplus-2fplusNestedplusWeb-configplusTransfor appears to be describing the same problem I'm facing, but never got an answer.
We're running TFS 2015 and VS.NET 2015 for a large solution with an ASP.NET web app as the main project and several class library projects.
I'd like our team to start utilizing branches but the concept of branches being in separate folders is causing all sorts of issues with configuration.
Once the branch has been created, the entire folder structure, web.config values, project references, reference paths, etc. are all different, because the solution is now opened from a different folder than the main branch.
We use IIS virtual directories, so those also break because of the branch's new folder.
If I go ahead and make all of these manual changes so that our solution works from the new branch folder, then every time we do a forward integration from main -> branch, all of this configuration gets overwritten, and every developer on the team has to redo it.
Surely there's a better way to handle branches for larger solutions that have a high level of configuration and customization. Is there a way to keep a single physical folder and just specify which branch you want to work on?
Don't use long-term branches. After moving from long-term branches to a single main branch for all our teams, I would never go back. The merges were always terrible, even for seemingly simple changes.
We now use Release Readiness analysis to allow multiple developers to work in parallel on different features. Check it out -
https://dotnetcatch.com/2016/02/16/are-you-release-ready/
I have a project (web), that interacts heavily with a service layer. The client has different staging servers for the deployment of the project and the service layer, like this:
Servers from A0..A9 for development,
Servers from B0..B9 for data migration tests,
Servers from C0..C9 for integration test,
Servers from D0..D9 for QA,
Servers from E0..E9 for production
The WSDLs I'm consuming on the website to interact with the service layer change from one group of servers to another.
How can I keep different versions of the WSDLs in the different branches using a git workflow with three branches (master, dev, qa)?
As you explained in your comment, the branches will be merged, and the WSDL files will conflict. When that happens, you have to resolve the conflict by keeping the right version of the WSDL file.
For example, if you are on qa, merging from dev, and there is a conflict on a WSDL file, you can resolve it with:
git checkout HEAD file.wsdl
This will restore the WSDL file as it was before the merge, and you can commit the merge.
However, if there are changes in the WSDL file but there are no conflicts, then git merge will automatically merge them. If that's not what you want, and you really want to preserve the file without merging, then you could merge like this:
git merge dev --no-commit --no-ff
git checkout HEAD file.wsdl
git commit
UPDATE
To make this easier, see the first answer to this other question:
Git: ignore some files during a merge (keep some files restricted to one branch)
It offers three different solutions; make sure to consider all of them.
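I can't reproduce the linked answers here, but one widely used technique for keeping a file pinned to its branch (which may or may not be among those three) is a custom "ours" merge driver, roughly as follows (file name taken from the question, setup invented for illustration):

# committed once, on the branch that must keep its own copy of the file
echo "file.wsdl merge=ours" >> .gitattributes
# the driver itself lives in local git config, so each developer runs this once per clone;
# 'true' as the driver command simply keeps the current branch's version
git config merge.ours.driver true

Note that a merge driver only kicks in when both sides have changed the file since the merge base; otherwise normal merge behaviour applies.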
I think branching is the wrong tool to use here. Branches are highly useful for (more or less) independent development that needs to take place in parallel and in isolation. Your use case of multiple deployment environments doesn't appear to fit that model. It also doesn't scale very well if you need multiple branches (e.g., for releases) and always have to create and maintain the deployment-specific child branches.
You would probably be better served by a single branch that defines multiple configurations (either for deployment or building, depending on what consumes the WSDL file).
This assumes that the WSDL file is the only difference between the branches, which is the impression I get from the question.
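As a rough sketch of what the single-branch approach can look like on the consuming side, assuming only the service address differs between the A/B/C/D/E server groups (the key name and code below are invented, not from the question): the code reads the address from configuration, and each deployment configuration or config transform supplies its own value.

using System;
using System.Configuration;

public static class ServiceEndpoints
{
    // Single code path; only the appSetting value changes per environment.
    public static Uri ServiceLayerBaseAddress
    {
        get
        {
            var raw = ConfigurationManager.AppSettings["ServiceLayer.BaseAddress"];
            if (string.IsNullOrWhiteSpace(raw))
                throw new ConfigurationErrorsException(
                    "appSetting 'ServiceLayer.BaseAddress' is not set for this environment.");
            return new Uri(raw, UriKind.Absolute);
        }
    }
}

If the WSDL contract itself differs between the server groups (not just the address), the same idea still applies, but the environment-specific artefact becomes the generated proxy or the WSDL selected at build time rather than a single URL.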
During our development of schemas, orchestrations, ports, etc., we've been exporting MSIs and binding files for deployment into our test and, ultimately, production environments.
So, for example, we set up a series of receive ports/locations in a single BizTalk application for the purpose of receiving all HL7 v2 messages from our HCIS. We then exported that to a bindings file and imported it into test.
Then, as we developed new schemas, we exported each schema into its own MSI file and deployed it into the same BizTalk application in our test environment. We did that because the schemas are specific to the inbound messages from our HCIS.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that, if I look at the installed programs list in Control Panel, I only see one application. So if I want to uninstall and reinstall a particular schema, I'm not sure what will happen. For some reason, I half expected to see an entry for every MSI I installed, but I suppose that because they're all going into the same BizTalk application, they are all registered in Windows as the same application. I'm betting there is a better way to do this; any suggestions?
You can, and probably should, create different applications for each logical grouping of code. If you examine the 'deploy' section of the project properties, you'll see a text box to enter your application name. When you trigger a deploy, the artifacts will be placed into a separate application with the name you provide. You'll see it in the BizTalk management console.
We deploy to dev using the framework mentioned below. Then, to deploy to QA, right-click on the application and create an MSI from there. It will allow creating an MSI for only one application.
NOTE: the deploy setting is NOT saved globally. If another developer opens the project his project will not inherit the application name you've set.
We use the BizTalk Deployment Framework to help manage changes when we do development.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that, if I look at the installed programs list in the control panel, I only see 1 application.
I can only think of two scenarios where you might observe this behaviour:
You have multiple different MSIs (one for each schema) which you are importing into BizTalk (and hence they appear in the BizTalk Admin Console), but you are not running the MSIs on the local machine (so they do not appear in 'Installed Programs'); or
Your MSIs are all named the same, in which case, after the import into BizTalk and the local install, you only see a single program in 'Installed Programs'.
I'm betting there is a better way to do this, any suggestions?
With regards to approach, you are certainly along the correct lines. I tend to advise clients to group logical artifacts into a single logical bucket - either project or Application - that can be deployed (and redeployed) without affecting other parts of the system.
In a HL7 scenario, one logical bucket might be Patient artifacts (schemas and supporting maps) and a second may be Financial artifacts (schemas and supporting maps). These logical buckets can either be deployed to different BizTalk Applications, or the same BizTalk Application depending on your requirements. However, the main benefit here is that they are separate and therefore all artifacts do not need to be redeployed if you need to make a small modification to A19 - Patient Query/Response schema for example.
How to deploy is another question entirely. I'm a massive fan of MSBuild and have written comprehensive build scripts that I tweak and reuse for each project I work on. These deployment scripts will tear down an existing environment and rebuild it from the ground up, creating Applications, deploying Resources, importing Bindings, creating Hosts and Host Instances, etc., before finally starting the application. This approach removes all human error from the process and tends to be favoured by clients who often have their infrastructure teams perform the deployment rather than their development teams.
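For a rough idea of the kind of steps such a script automates (this is not the author's actual script; it is a hedged sketch using the stock BTSTask.exe command-line tool, with the application, assembly, and binding file names invented):

rem tear down and recreate the application
BTSTask.exe RemoveApp /ApplicationName:HL7.Inbound
BTSTask.exe AddApp /ApplicationName:HL7.Inbound
rem deploy a schema assembly and import the port bindings
BTSTask.exe AddResource /ApplicationName:HL7.Inbound /Type:System.BizTalk:BizTalkAssembly /Overwrite /Source:"C:\Drop\MyCompany.HL7.PatientSchemas.dll" /Options:GacOnAdd
BTSTask.exe ImportBindings /ApplicationName:HL7.Inbound /Source:"C:\Drop\HL7.Inbound.BindingInfo.xml"

Host and host instance creation, and starting the application, are typically scripted separately (e.g., via WMI or PowerShell), since BTSTask does not cover those steps.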
I notice that Jay mentioned the use of the BizTalk Deployment Framework. I personally struggle with this tool, partly because I need to maintain my configuration in Excel which I can't check in to source control easily.
I have an ASP.NET project under git where we follow the convention of using a branch for a feature. We just started using SQL Server Data Tools to manage schema changes (quite new to it, so I suspect it may have features that get me to what I need).
I am looking for some strategies that have worked for other teams that manage switching between branches that have different DB schemas and then successfully merging branches together. Ideally, after merging all the features, I would have implicitly created a change script(s) to deploy for the release to production.
Note I am using SQL Server 2008 R2
There are multiple parts to this strategy. One aspect is the handling of the storage of the different branches, and what has worked well for my teams has been to use different SQL Server instances for each branch (rather than naming individual databases with branch-specific prefixes or suffixes, e.g., MyDatabase_FeatureBranchX, which can get out of hand). This enables the corresponding database(s) in each branch to have the same names (for clarity) but also allows for physical and logical isolation of a given branch's SQL resources (data files, access permissions, etc.).
As for the second, more interesting aspect (which I think is the main intent of your question), you might consider utilizing a code-based "migrations" approach -- e.g., using FluentMigrator or the like. Provided that you've got a standard baseline schema from which each branch was initially created, you can create the appropriate migrations in code as part of your feature development in each branch (and apply them to that branch's SQL instance). When it comes time to merge the branch into trunk, you'd also be merging and then applying that branch's migrations.
At best, this means that you could simply run the migration tool against your trunk instance after the merge, in order to apply all the branch's migrations, since tools like this automatically keep track of which migrations have been applied (via a custom database table) and do not reapply them. Provided that you're also doing periodic merges of your trunk code (including its migrations) into your feature branch throughout its development, and you're applying those migrations, you would also be ensuring that your feature branch's schema is being kept up to date, which minimizes the nasty surprises at merge time.
When it comes time to deploy your trunk to production, these same migrations would be applied once again. FluentMigrator offers various runners: a console application, NAnt, MSBuild, and Rake.
I would highly recommend using a timestamp-based (e.g., 201210241033) migration ID strategy, rather than simple sequential integers (1, 2, ...), to minimize the likelihood of collisions and changes being applied out of the intended sequence.
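For concreteness, a minimal FluentMigrator migration using a timestamp-style ID might look like the following (the table and column names are invented; this is a sketch, not something from the question):

using FluentMigrator;

// 201210241033 = yyyyMMddHHmm; timestamp IDs make collisions between
// parallel feature branches far less likely than 1, 2, 3, ...
[Migration(201210241033)]
public class AddEmailToCustomer : Migration
{
    public override void Up()
    {
        Alter.Table("Customer")
            .AddColumn("Email").AsString(255).Nullable();
    }

    public override void Down()
    {
        Delete.Column("Email").FromTable("Customer");
    }
}

After the runner applies it, the ID is recorded in FluentMigrator's version table, which is why re-running the tool after a merge only applies migrations that are genuinely new.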
We're using Jenkins to build an ASP.NET web application and deploying successful builds to a stage/test server. The application has multiple configurations (different connection strings, themes, etc.) to adapt to different customers.
So, using a multi-configuration job was the natural way to go. This works great for building and deploying all configurations in one go. But what if you only want to build one or a couple of the configurations?
Typical scenario when this would be nice:
The developer completes a milestone/version, the test phase starts, and 10 configurations are built and deployed to the stage server
Test team identifies a bug in configuration X (i.e. customer X)
The developers fix the bug (or so they believe) and want the code re-tested
Run the Jenkins job again to get the code on to the stage server
This scenario builds ~9 configurations for nothing. And while these 9 configurations are deployed, anyone who is logged in to one of these test web sites is of course losing their session.
We would like to have some parameter that lets us select which configurations to build.
A couple of potential solutions:
The Matrix Reloaded Plugin which should let you rebuild only certain configurations.
Alternatively, when you configure the job, you can enable the "Combination Filter" feature, which tells Jenkins which combinations of the matrix axes to build. However, this isn't very dynamic, i.e. you can't change it each time you build. Though maybe it's possible to parameterise this field (I haven't tried this).
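For reference, and assuming the configuration axis is named CONFIG (an invented name), the combination filter is just a Groovy boolean expression over the axis values, e.g. to limit a run to two customers:

CONFIG == "CustomerX" || CONFIG == "CustomerY"

Whether that expression can reference a build parameter instead of literals depends on the Jenkins version and is, as noted above, untested here.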