I have been applying separate web.config files for each environment as my ASP.NET application progresses through Development, IT, UA and Production, and I have been looking for a way to simplify this process.
In the past, these had been updated manually; this was tiresome and prone to human error. More recently, I've been using IBM's uDeploy to push the application, with an environment-specific config file deployed alongside it depending on the target environment.
I've seen many suggestions, such as separate config files (as per my current setup), use of pre-build events, etc. However, I implemented a solution to this issue in our test environment whereby I assigned the database connection strings to environment variables on my application server. The relevant environment variable is then passed into my data access connection method.
In other words, each environment's application server has an environment variable with the same name but with a different value assigned. This solution is quite simple and easily implemented and appears to function correctly.
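A minimal sketch of what this looks like in my data access code (the variable and method names here are illustrative, not my production code):

using System;
using System.Data.SqlClient;

public static class DataAccess
{
    // Each environment's application server defines this variable
    // with its own environment-specific value.
    private const string ConnVariable = "MYAPP_DB_CONNECTION";

    public static SqlConnection OpenConnection()
    {
        var connStr = Environment.GetEnvironmentVariable(ConnVariable);
        if (string.IsNullOrEmpty(connStr))
            throw new InvalidOperationException(
                $"Environment variable '{ConnVariable}' is not set on this server.");

        var conn = new SqlConnection(connStr);
        conn.Open();
        return conn;
    }
}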
Does anyone else manage separate environment configurations in this way? Are there any disadvantages to this approach that I have failed to consider?
Related
This may seem a bit trivial... but how do you go about transforming the DB connection for a nopCommerce app as it is deployed to various environments?
The DB connection is set in app_data\datasettings.json.
Normally this type of stuff is handled with web.config transforms.
How do you go about setting up build transforms for different environments (dev, test, prod)?
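To illustrate, the sort of transform I mean looks something like this in a Web.Release.config (the names here are placeholders):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replaces the connection string of the entry whose name matches. -->
    <add name="Default"
         connectionString="Server=prod-sql;Database=Shop;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>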
I am also looking into this topic.
In my humble opinion, the nopCommerce config is a pain, because it makes it really hard to do proper Continuous Integration/Continuous Delivery while keeping secrets safe.
At initial deployment you are greeted with the install page. The problem is that the installation process writes a bunch of files to the server, including datasettings.json, where the connection string to the DB is hard-coded.
This means that when I deploy nopCommerce to Azure App Service, for deployments after the installation I have to make sure NOT to delete "additional files on the server", or the config will be deleted, since these config files written by the installer are not in source control.
It is really impractical not to be able to use standard ASP.NET connection strings, environment variables or Key Vault.
To answer your question on how to do the transformation on the config file, one possibility is to use a PowerShell script to read, transform, and write the config file directly on the App Service instance. There is an API for that:
https://blogs.msdn.microsoft.com/gabeshapiro/2017/01/01/samples-for-using-the-azure-app-service-kudu-rest-api-to-programmatically-manage-files-in-your-site/
https://github.com/projectkudu/kudu/wiki/REST-API
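As a rough sketch of that approach driven from C# instead of PowerShell (the site name, credentials, file path, and string replacement below are all placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class KuduConfigPatch
{
    static async Task Main()
    {
        // Placeholder Kudu VFS URL and deployment credentials for the App Service.
        var fileUrl = "https://mysite.scm.azurewebsites.net/api/vfs/site/wwwroot/App_Data/dataSettings.json";
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("$mysite:deployment-password"));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // Read the current file, patch it, and write it back.
            var json = await client.GetStringAsync(fileUrl);
            json = json.Replace("OLD-CONNECTION-STRING", "NEW-CONNECTION-STRING"); // illustrative transform

            var put = new HttpRequestMessage(HttpMethod.Put, fileUrl)
            {
                Content = new StringContent(json, Encoding.UTF8, "application/json")
            };
            put.Headers.TryAddWithoutValidation("If-Match", "*"); // Kudu requires If-Match on PUT
            (await client.SendAsync(put)).EnsureSuccessStatusCode();
        }
    }
}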
Alternatively, you can modify the source to read from Web.Config:
Change the connection string of nopCommerce?
Symfony introduced a new Dotenv component in Symfony 3, which allows us to handle environment variables as application parameters. This looks really nice, and it's the best practice to follow according to the twelve-factor app manifesto.
Regarding Symfony 4, they went further by pushing this practice forward, and that's why I started using environment variables via the .env file.
Then I wanted to deploy, and I realized that the .env file must not be persisted on the server, as that would be the same as having a parameters.yml file.
So I've been digging into the documentation a bit, and I found this article, which explains that we can create environment variables directly via some web server directives. That's great for code executed via FPM, but it does not tell us how to handle environment variables when running a command via the CLI, for instance.
How can I achieve this?
Should there be an equivalent of a .env file stored somewhere? But then the parameters would be duplicated?
I'd welcome any help ;)
Finally had the time to check the link Neodan posted and everything is in there!
So for those of you wondering what to do, simply edit the /etc/environment file and add your variables. Then reboot your server and all your processes will have access to these variables.
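For example (the variable names and values are illustrative):

# /etc/environment
APP_ENV="prod"
DATABASE_URL="mysql://db_user:db_password@127.0.0.1:3306/db_name"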
I guess that's the simplest solution. The only drawback of this method is that these variables are accessible to any process or user, but that's OK as far as I'm concerned.
If you want a more secure solution, I suppose you could, as I said before, configure your web server to add environment variables and export them via your .bash_profile or .bashrc file, but be careful about how you start your shell (when deploying your application, for instance). That's more complicated to maintain and more error-prone, I'd say.
N.B.: You might also want to be careful about how you name your variables to prevent collisions.
I am starting to play around with MVC 6, and I am wondering: with the new config.json structure, are my connection strings safe in the config.json file?
Also, I was watching a tutorial video and saw that the person only put their connection strings in the config.dev.json file, not in config.json. That would mean the application won't have the connection strings on the production side, correct? He must have meant to put them in both.
Thanks a lot for the help!
I think the Working with Multiple Environments document sums it up pretty well.
Basically, you can farm secret settings such as connection strings out into separate files. These files are then ignored by your source control system, and every developer has to create them manually on their own machine (it helps to add some documentation on how to set up the project from a fresh clone of source control).
For production, the build will include the production settings. Typically, these are provided by a build server, where they are locked away from developers. I'm not sure whether that is fully automatic with MVC Core or whether you have to add some kind of build step to do it, but that is how it is normally done.
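If it helps, the usual wiring looks roughly like this (the file and key names follow the config.json conventions from the question; adjust them to your project):

using System;
using Microsoft.Extensions.Configuration;

// "Development", "Staging", "Production", etc.; normally read from an
// environment variable rather than hard-coded as it is here.
var environment = "Development";

var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("config.json")                                  // shared settings, in source control
    .AddJsonFile($"config.{environment}.json", optional: true)   // per-environment file, ignored by source control
    .AddEnvironmentVariables()                                   // later sources override earlier ones
    .Build();

var connectionString = config["Data:DefaultConnection:ConnectionString"];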
If you are worried about storing connection strings in the production environment securely, you can extend the framework with your own configuration provider.
During our development of schemas, orchestrations, ports, etc., we've been exporting MSIs and binding files for deployment into our test and, ultimately, production environments.
So, for example, we set up a series of receive ports/locations in a single BizTalk app, for the purpose of receiving all HL7 v2 messages from our HCIS. We then exported that to a bindings file, and imported into test.
Then, as we developed new schemas, we exported each schema into its own MSI file and deployed it into the same BizTalk application in our test environment. We did that because the schemas are specific to the inbound messages from our HCIS.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that if I look at the installed programs list in the Control Panel, I only see one application. So if I want to uninstall and re-install a particular schema, I'm not sure what will happen. For some reason, I half expected to see an entry for every MSI I installed, but I suppose that because they're all going into the same BizTalk application, they are all registered in Windows as the same application. I'm betting there is a better way to do this; any suggestions?
You can, and probably should, create different applications for each logical grouping of code. If you examine the 'Deploy' section of the project properties, you'll see a text box to enter your application name. When you trigger a deploy, the artifacts will be placed into a separate application with the name you provide. You'll see it in the BizTalk management console.
We deploy to dev using the framework mentioned below. Then, to deploy to QA, right-click on the application and create an MSI from that point. It will allow creating an MSI for only one application.
NOTE: the deploy setting is NOT saved globally. If another developer opens the project, their copy will not inherit the application name you've set.
We use the BizTalk Deployment Framework to help manage changes when we do development.
So now, in test, we've ended up with a BizTalk application with the receive ports and schemas we need to receive messages from our HCIS. The issue I discovered is that if I look at the installed programs list in the Control Panel, I only see one application.
I can only think of two scenarios where you might observe this behaviour:
You have multiple different MSIs (one for each schema) which you are importing into BizTalk (hence they appear in the BizTalk Admin Console), but you are not running the MSIs on the local machine (so they do not appear in 'Installed Programs'); or
Your MSIs are all named the same, in which case, after the import into BizTalk and the local install, you only have a single program visible in 'Installed Programs'.
I'm betting there is a better way to do this; any suggestions?
With regard to approach, you are certainly along the right lines. I tend to advise clients to group logical artifacts into a single logical bucket - either project or Application - that can be deployed (and redeployed) without affecting other parts of the system.
In an HL7 scenario, one logical bucket might be Patient artifacts (schemas and supporting maps) and a second might be Financial artifacts (schemas and supporting maps). These logical buckets can be deployed either to different BizTalk Applications or to the same one, depending on your requirements. The main benefit, however, is that they are separate, so not all artifacts need to be redeployed when you make a small modification to, say, the A19 Patient Query/Response schema.
How to deploy is another question entirely. I'm a massive fan of MSBuild and have written comprehensive build scripts that I tweak and reuse for each project I work on. These deployment scripts will tear down an existing environment and rebuild it from the ground up, creating Applications, deploying Resources, importing Bindings, creating Hosts and Host Instances, etc., before finally starting the application. This approach removes all human error from the process and tends to be favoured by clients, who often have their infrastructure teams perform the deployment rather than their development teams.
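As a rough, abbreviated sketch of the shape such a script can take (the application name, assembly, and binding file are placeholders, error handling is omitted, and BTSTask.exe ships with BizTalk Server):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Placeholder names and paths. -->
    <AppName>HCIS.Messaging</AppName>
    <BTSTask>"$(ProgramFiles)\Microsoft BizTalk Server\BTSTask.exe"</BTSTask>
  </PropertyGroup>
  <Target Name="Deploy">
    <!-- Tear down any existing application, then rebuild it from scratch. -->
    <Exec Command="$(BTSTask) RemoveApp /ApplicationName:$(AppName)" ContinueOnError="true" />
    <Exec Command="$(BTSTask) AddApp /ApplicationName:$(AppName)" />
    <Exec Command="$(BTSTask) AddResource /ApplicationName:$(AppName) /Type:System.BizTalk:BizTalkAssembly /Overwrite /Source:bin\Patient.Schemas.dll" />
    <Exec Command="$(BTSTask) ImportBindings /ApplicationName:$(AppName) /Source:Bindings\Test.BindingInfo.xml" />
  </Target>
</Project>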
I notice that Jay mentioned the use of the BizTalk Deployment Framework. I personally struggle with this tool, partly because I need to maintain my configuration in Excel, which I can't easily check in to source control.
I want to do some unit testing on one of my projects. This is a web project, and there will only be one copy of this program running aside from development copies.
I want to write some unit tests that will use the web.config. I understand that ordinarily, a tester would stub out this external dependency because he wants to test the code without the test depending on the web.config holding certain values.
However, the web.config in my project is supposed to always hold certain values and I want to have a unit test that will fail if they are set to invalid values. For example, one of the values is a SQL connection string.
I want to write a test that will read the connection string from the web.config. I envision that the test could connect to a server with the connection string and perhaps perform a very simple command like SELECT system_user;. If the command executes successfully and returns something the test passes. Otherwise, it fails. I want the connection string to be read from the web.config in the project I'm testing.
Of course, the ConfigurationManager will not ordinarily look for a web.config in another project. I could manually copy the web.config from the original project to the test project, but I would have to do that before every test run, and there is no way I could count on anyone else doing that.
How do I make my test project read the web.config from another project?
It sounds like you are trying to validate settings in web.config, which is a deployment-level concern and is different from unit testing.
Unit testing tells you that your core logic is performing as expected; deployment verification tells you that the application was installed and configured properly and is safe to use. Unit tests are meaningful to developers, deployment verification is meaningful to the end user or administrator that is deploying the app.
In situations like this, I like to build a "system console" into my apps. This console contains a number of self-diagnostic checks (a rough sketch of the first one follows the list), such as:
Ensuring the connection string(s) are configured properly
Ensuring that any 3rd party services are available and functioning
Ensuring that all configuration settings are valid and won't cause runtime errors (e.g. paths exist, the web user account has read/write access where needed, etc)
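As a rough sketch of the first check, assuming classic ASP.NET and SQL Server (the class and method names are mine, and the query is borrowed from your question):

using System.Configuration;
using System.Data.SqlClient;

public static class Diagnostics
{
    // Returns a human-readable result for the connection string check.
    public static string CheckDatabase(string name)
    {
        var settings = ConfigurationManager.ConnectionStrings[name];
        if (settings == null)
            return $"FAIL: no connection string named '{name}' is configured.";

        try
        {
            using (var conn = new SqlConnection(settings.ConnectionString))
            using (var cmd = new SqlCommand("SELECT system_user;", conn))
            {
                conn.Open();
                return $"OK: connected as {cmd.ExecuteScalar()}.";
            }
        }
        catch (SqlException ex)
        {
            return $"FAIL: {ex.Message}";
        }
    }
}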
I strongly recommend you consider separating this sort of configuration and deployment verification from your unit test suite. Not only will it simplify your work (because you won't have to load a config file from another project) but it's also the sort of tool that customers really, really like :)
You can load and explore other config files with the ConfigurationManager.OpenXXX() methods.
The WebConfigurationManager class specifically has a method for opening web.config files, and the documentation page I linked to has some more code examples. Once you have your configuration object loaded, you can explore it for sections and keys.
// OpenMappedWebConfiguration expects a WebConfigurationFileMap (not a plain
// ConfigurationFileMap) mapping a virtual path to the directory that holds the web.config.
var cfm = new WebConfigurationFileMap();
cfm.VirtualDirectories.Add("/", new VirtualDirectoryMapping(@"path\to\web-project", true));
var config = WebConfigurationManager.OpenMappedWebConfiguration(cfm, "/");
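From there you can, for example, pull out a connection string and run the trivial query from the question (the connection string name "Default" and the Assert call are placeholders for whatever your project and test framework use):

var css = (ConnectionStringsSection)config.GetSection("connectionStrings");
var connStr = css.ConnectionStrings["Default"].ConnectionString;

using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand("SELECT system_user;", conn))
{
    conn.Open();
    Assert.IsNotNull(cmd.ExecuteScalar()); // fails the test if the query can't run
}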
I asked a similar question that you might want to check out:
How do I test that all my expected web.config settings have been defined?
I ended up getting it working, but the annoying part is my source control constantly locking the config file that is copied over. You can also rename the web.config to app.config so that it will compile into a non-web project.
It sounds like you're trying to squash a mosquito with a sledgehammer. Why not do this manually as part of the deployment checklist: have a task to manually confirm the connection string.
Or, if you want to automate it, write a program to check the connection string, attach it to your Continuous Integration server (assuming you have one), and fail the build if the connection string is wrong.
Use unit tests for what they're intended for: testing code, not configuration.
If you want to use the original web.config file from your website in your unit testing project without copying it, you can modify the Visual Studio local test settings.
Here is a step-by-step procedure for using an ASP.NET website configuration file in a unit testing project: http://forums.asp.net/t/1454799.aspx/1
There is a commercial tool called CheckMyConfig for validating .NET config files; it identifies settings within a given config file and attempts to validate them.
Possible setting types include database connection strings, files, folders, IP addresses, hostnames and URLs.
The tool allows you to perform a number of checks including opening database connections, accessing folders, requesting a particular URL etc.
There is a standalone version, but the tool also has Visual Studio integration and a simple API that you can use to embed it within your own apps, in order to perform a config 'sanity check' at app startup time.
In your unit testing project, add an app.config file and add the settings from the web.config file that you would like to use for your tests.
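For example, an app.config carrying just a connection string might look like this (the name and value are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="Default"
         connectionString="Server=dev-sql;Database=MyApp;Integrated Security=true"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>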