ssdt dacpac: how to structure different versions - sql-server-data-tools

In our current projects, two SQL scripts (update/rollback) are created for each version. We would like to migrate to a DACPAC solution.
Each DACPAC project produces one .dacpac file at the end, so for each version I create two projects (one for update and one for rollback). The schema changes live in the dacpac itself, while the pre- and post-deployment scripts are for data migration.
To add a new version, I copy the current update project into a new update project and a new rollback project, then modify from there.
Any thoughts please?

I guess this comes down to whether you actually need to do all this work. The way I work with SSDT is to define what I want the current version to look like in the schema + code, and any upgrade scripts I need go into the post-deploy files as re-runnable (idempotent) scripts.
Whether a database is on version 1 or 100, it gets the same post-deploy script, but the script either checks for the data itself or for a flag stored in the database saying that particular script has already been run - that should be pretty easy to set up, but it depends on your exact requirements.
To keep this manageable, it is a good idea to track when particular scripts have reached all of your environments and then remove them, so you only keep the post-deploy scripts you are actually going to need.
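As a rough sketch of that pattern (the tracking table, script names, and dbo.Customer table are made up for illustration, not an SSDT feature), each block in the post-deploy file checks a flag table before running:

-- Hypothetical tracking table, created once if missing.
IF OBJECT_ID('dbo.DeployedScripts', 'U') IS NULL
    CREATE TABLE dbo.DeployedScripts
    (
        ScriptName sysname      NOT NULL PRIMARY KEY,
        AppliedOn  datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
    );

-- Re-runnable data migration: only executes if it has not been recorded yet.
IF NOT EXISTS (SELECT 1 FROM dbo.DeployedScripts WHERE ScriptName = N'V42_FixCustomerEmails')
BEGIN
    UPDATE dbo.Customer SET Email = LOWER(Email);   -- illustrative data fix

    INSERT INTO dbo.DeployedScripts (ScriptName) VALUES (N'V42_FixCustomerEmails');
END;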
That being said, there are sometimes specific restrictions which are normally:
Customers managing databases so you can't be sure what version they have
Regulatory (perceived or otherwise) requirements demanding upgrade/rollback scripts
Firstly, find out how set in stone your requirements are. The goal of SSDT is not having to worry about upgrade/downgrade scripts (certainly for the schema). The questions I would ask are:
Is it enough to take a backup or snapshot before the deploy? (see the sketch after this list)
Can you leave off downgrade scripts and write them should you ever need to?
How often are the rollback scripts actually used? (Knowing the answers to the first two can help here.)
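On the first question, "backup or snapshot" can be as simple as something like this, run right before the deploy (database name, file paths, and logical file name are placeholders):

-- Copy-only full backup taken just before the deployment.
BACKUP DATABASE [MyAppDb]
TO DISK = N'D:\Backups\MyAppDb_PreDeploy.bak'
WITH COPY_ONLY, INIT, CHECKSUM;

-- Or, where the edition supports it, a database snapshot that can be reverted quickly
-- (NAME must match the source database's logical data file name, assumed here to be MyAppDb).
CREATE DATABASE MyAppDb_PreDeploy_Snapshot
ON (NAME = MyAppDb, FILENAME = N'D:\Snapshots\MyAppDb_PreDeploy.ss')
AS SNAPSHOT OF MyAppDb;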
If you have a good suite of tests and an automated deployment pipeline (even if it has a manual DBA step in it at some point), then these issues become less important, and over time, as everyone learns to trust the process, deploying changes can become significantly faster and easier.
What are your restrictions?
Ed

If you find that you're investing a fair amount of effort putting logic into a post-deployment script, the chances are that a migrations-based approach (rather than the state-based approach) is better suited to you.
Examples are DbUp (open source) and ReadyRoll (commercial, and the one we develop here at Redgate - it has additional features such as auto-generation of scripts, integration with VS, etc.).
Migrations-based tools manage the versions (including the table Ed is referring to) on your behalf.

Related

Publishing or updating single DLL in project - is it safe?

Let's say I have an ASP.NET Web API application deployed in production, and we want to update it. Because it's a big and old project, we want to update only single DLLs, not the whole project.
We have an automated process for publishing such things, and we run some regression tests and integration tests. Mainly we do this only in hotfix situations, but now we want to increase the frequency of deployments.
So my question is:
Is it safe to update single DLLs? What can go wrong?
I tried to find an answer in these places:
Updating a DLL in a Production ASP.NET Web Site bin folder
How to stop C# from replacing const variable with their values?
https://codeblog.jonskeet.uk/2019/06/30/versioning-limitations-in-net/
https://learn.microsoft.com/en-us/dotnet/standard/library-guidance/breaking-changes
I think that making a hotfix once and doing a full deployment some time later is not that bad (if we accept the risk), but if we are going to make this a normal practice, then with each single-DLL deployment the risk gets higher than with a normal full deployment.
I will go out on a limb and offer an answer. There are 2 valid answers to this question. Please consider each.
1) Yes. If your changes are minor (isolated to one project/DLL), there are no other updates/upgrades, you have done adequate testing, and you've made backups (so you can undo), then yes, it is possible to safely deploy one single DLL without deploying the entire project. Of course there are plenty of things which could go wrong or surprise you, so be vigilant about monitoring your system(s) after deployment and be prepared to back out (undo) your changes. Safety first!
2) What you are proposing is a little cowboy-ish and does not conform to industry "best practices". Nearly anyone with experience would urge you to reconsider your strategy. Perhaps your current work conditions don't give you better options right now; we've all been there. However, to "repay the debt" that you incur with this risk, you also need to create a plan and impose a timeline on yourself to move away from hotfixes like this in the future.

How do I keep compiled code libraries up-to-date across multiple web sites using version control?

Currently, we have a long list of various websites throughout our company's intranet. Most are inside a firewall and require an Active Directory account to access. One of our problems, as of late, has been the increase in the number of websites and the addition of a common code library that stores our database access classes, common helper functions, serialization methods, etc. The goal is to use that framework across all websites throughout the company.
Currently, we have upgraded the in-house data entry application with these changes consistently; it is up to date. The problem, however, is maintaining all of the other websites. Is there a best practice or way to find out the version on each website and upgrade accordingly? Can I have a centralized place where I keep these DLLs and have the sites reference them? What's the best way to find out what versions are on these websites without having to go through each and every website, check its version, and upgrade after every change?
Keep in mind, we run the newest TFS and are a .NET development team.
At my job we have a similar setup to yours - lots of internal applications that use common libraries - and I have spent the best part of a year sorting this all out.
The first thing to note is that nothing you mentioned really has anything to do with TFS; it is really a symptom of the way your applications, and their components, are packaged and deployed.
Here are some ideas to get you started:
Setup automated/continuous builds
This is the first thing you need to do. Use the build facility in TFS if you must, or make the investment into something like TeamCity (which is great). Evaluate everything. Find something which you love and that everyone else can live with. The reason why you need to find something you love is because you will ultimately be responsible for it.
The reason why setting up automated builds is so important is because that's your jumping off point to solve the rest of your issues.
Setup automated deployment
Every deployable artifact should now be built by your build server. No more manual deployment. No more deployment from workstations. No more Visual Studio Publish feature. It's hard to step away from this, but it's worth it.
If you have lots of web projects, then look into Web Deploy, which can easily be automated using MSBuild/PowerShell, or go fancy and try something like Octopus Deploy.
Package common components using NuGet
By now your common code should have its own automated builds, but how do you automatically deploy a common component? Package it up as a NuGet package and either put it on a share for consumption or host it on a NuGet server (TeamCity has one built in). A good build server can automatically update your NuGet packages for you (if you always need to be on the latest version), and you can inspect which version you are referencing by checking your packages.config.
I know this is a lot to take in, but it is in its essence the fundamentals of moving towards continuous delivery (http://continuousdelivery.com/).
Please be aware that getting this right will take a long time, but the process is incremental and you can evolve it over time. However, the longer you wait, the harder it will be. Don't feel like you need to upgrade all your projects at the same time - you don't. Just the ones that are causing the most pain.
I hope this helps.
I'd just like to step outside the space of a specific solution for your problem and address the underlying desire you have to consolidate your workload.
Be aware that any patching/upgrading scenario will have costs that you must address - there is no magic pill.
Particularly, what you want to achieve will typically incur either a build/deploy overhead (as jonnii has outlined), or a runtime overhead (in validating the new versions to ensure everything works as expected).
In your case, because you have already built your products, I expect you will go the build/deploy route.
Just remember that even with binary equivalence (everything compiles, and unit tests pass), there is still the risk that the application will behave somewhat differently after an upgrade, so you will not be able to avoid at least some rudimentary testing across all of your applications (the GAC approach is particularly vulnerable to this risk).
You might find it easier to accept that just because you have built a new version of a binary, doesn't mean that it should be rolled out to all web applications, even ones that are already functioning correctly (if something ain't broke...).
If that is acceptable, then you will reduce your workload by only incurring resource expense on testing applications that actually need to be touched.

Best Practice for maintaining a TSQL database creation script for a web application

We have an ASP.NET web application and need to maintain the database creation and initialization script.
Are there any industry best practices that people know of for maintaining database creation and initialization scripts? I can think of two main approaches.
Maintain a T-SQL creation script directly by hand.
Maintain a master database and generate the script, which is then checked into SourceSafe.
The script should also be able to be tracked through source control, i.e. the table order should be controllable.
If possible it should also include the ability to track initialisation data, either in the same or a separate script.
Currently we generate the script from Management Studio, but the order of the tables seems to be random.
And the more automated the solution the better.
The problem is not maintaining the script, nor maintaining a 'master' copy of the database. The real problem is upgrading existing database(s). You make your modifications in the development environment, and they are then propagated to the test environment and finally pushed into the production environment. While at the development and test stages it is possible to start from scratch, in production you always have to upgrade the existing deployment.
In my experience the best practice is to use upgrade scripts. This practice is useful even with a single deployed site, but it becomes invaluable with multiple locations that may be at different versions. Even with one single operational site it is still useful to be able to test the upgrade repeatedly (starting from backups of the current version), keep the changes in source control, and have a well-formalized and peer-reviewed change procedure (the upgrade script). Upgrade scripts can also be tailored to the specific needs of the operational site, like handling a large table with special care, dealing with encrypted data, or any of the myriad details that diff-based tools neglect or ignore. The main disadvantage is that the scripts have to be written, which requires real T-SQL knowledge (forget all the 'designers' in your favorite management tool).
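A hedged sketch of what such an upgrade script can look like, assuming a hypothetical dbo.SchemaVersion table and made-up object names; each script verifies the version it expects, applies its changes in a transaction, and records the new version:

-- Upgrade script: version 3 -> 4 (illustrative only).
BEGIN TRANSACTION;

IF (SELECT MAX(Version) FROM dbo.SchemaVersion) <> 3
BEGIN
    RAISERROR('Database is not at version 3; aborting upgrade.', 16, 1);
    ROLLBACK TRANSACTION;
    RETURN;
END;

ALTER TABLE dbo.[Order] ADD TrackingNumber nvarchar(50) NULL;   -- the actual schema change

INSERT INTO dbo.SchemaVersion (Version, AppliedOn)
VALUES (4, SYSUTCDATETIME());

COMMIT TRANSACTION;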
You might want to check out RedGate SQL Source Control.
Are you looking for Visual Studio Database Projects?
I use database projects to store all database objects (tables, views, functions, keys, triggers, and indexes across schemas) and keep versioning in TFS. You can build the database project to ensure that everything is valid. You can deploy to a fresh database or do a schema comparison with an existing database.
I also keep all reference and setup data in post-deployment scripts, which are automatically run after deployment.
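For illustration, reference data in a post-deployment script is usually written to be re-runnable, since the script executes on every publish; a MERGE against a made-up lookup table looks like this:

-- Re-runnable reference data for a hypothetical dbo.OrderStatus lookup table.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, N'Pending'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS source (StatusId, StatusName)
    ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;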

Good way to make changes to production database / source code

I'm interested in finding out what would be a good way to make changes to the production database and source code in a web application (ASP.NET, SQL Server 2008).
A little bit more detail: we develop on local machines, and then we need to transfer the code and database changes to production (a pretty standard story).
At the moment we do it in the evening: we change the database directly from Management Studio on the production server, and then just overwrite the existing ASP.NET code (copy/paste).
You're talking about release management. What you're asking about is a big subject with a LOT of different answers, and the best solution for you is not something we can tell you. There are trade-offs to consider.
For example, what you're describing is a very basic release management process that would be considered an "immature" process. It does not take into account rollback plans, versioning, separation of concerns, proper testing, or any of a hundred other factors that a "mature" release management process involves.
A mature process is very good, but if you don't have the resources, it's not feasible.
To get to the point, I don't think your question can be answered fully here. I'd suggest starting to research "change management", "release management", "application lifecycle management", and "application development lifecycle". I'll give you a few good starter links below.
Just as a forewarning, though: you are asking a question that's going to open your eyes and your world in ways you probably haven't considered. There are things like automated builds to consider, and tools to do it for you (high-priced, free, and everything in between).
http://en.wikipedia.org/wiki/Release_management
http://en.wikipedia.org/wiki/Application_lifecycle_management
A few simple options for JUST what you're asking about can be found here:
http://msdn.microsoft.com/en-us/library/7hd4c0x3(VS.80).aspx
Also, since you talked about source code without mentioning which source control you're using, I need to say: if you're not already using source control, you need to start. You'll wonder how you ever lived without it once you start using it.
Depends on whether it's the first deployment of a new app, or an update to the app.
For small updates, record all your database changes as SQL scripts. You must strictly enforce that all changes to development are applied as SQL scripts. Put the scripts in source control. Deploy the update by running the scripts on production.
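For the small-update case, one such change script (object names invented) might look like this; guarding with an existence check means accidentally re-running the script does no harm:

-- 0042_add_customer_phone.sql
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer')
      AND name = N'Phone'
)
BEGIN
    ALTER TABLE dbo.Customer ADD Phone nvarchar(25) NULL;
END;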
For new apps you may have thousands of scripts, and you can't run them individually. Consolidating them into a master script by hand takes too much time (although you still want to check EVERY script into source control). In this case, reach a milestone in development, then FREEZE the development database and declare it a baseline. Use the database tools to generate the master script(s). Deploy to production by running those script(s). Manually create data scripts for your lookup tables to keep them separate from junk dev data.
Avoid a database copy. Avoid changing things by hand through the GUI. Scripts are the way. How you go about collecting the scripts, consolidating them into master scripts, generating them, and so on is another story.

Should we have separate database instance for each developer?

What is the best way to develop a database-based application? We can take two approaches.
One common database for all the developers.
A separate database for each developer.
What are the pros and cons of each? And which one is the better way?
Edit: More than one developer is supposed to update the database, and we already have SQL Server Express 2005 on each developer machine.
Edit: Most of you are suggesting a common database. However, if one of the devs has modified the code and the database schema, and he has not committed the code changes but the schema changes have gone to the common database, won't that possibly break the other developers' code?
Both -
I like a single database that changes are tested on before going live or to a 'formal' test environment. This is your developers' sanity check; it stays up to date with the live system and makes sure they always consider each other's changes. The rule should be that changes don't go on here if they might break something else.
A database per developer is great (even essential) when more than one developer is making updates. It allows them all the development flexibility they want without breaking things for other developers.
The key is to have a process for moving database changes from development through to your live system, and stick to your process.
Shared database
Simpler
Fewer cases of "It works on my machine".
Forces integration
Issues are found quickly (fail fast)
Individual databases
Never affects other developers - but this is also a bad thing for continuous integration.
We use a shared development database and it works out nicely. Our schema rarely changes in a way that makes it backwards incompatible, but occasionally a design change will occur before we go live, and we simply ask the other developers to update.
We do have separate development application (web) servers, but they share the same database. Our developers do have the option to use their own database, as they know how to set this up, and will do that on occasion, but only temporarily. The norm, for us, is to share the database.
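As a concrete (invented) example of the kind of change that stays backwards compatible on a shared database: adding a nullable column, or a NOT NULL column with a default, does not break developers who have not picked up the matching code change yet, whereas renaming or dropping a column does:

-- Backwards compatible: existing INSERT/SELECT statements keep working.
ALTER TABLE dbo.Invoice
    ADD DiscountPercent decimal(5, 2) NOT NULL
        CONSTRAINT DF_Invoice_DiscountPercent DEFAULT (0);

-- Backwards incompatible: code still referencing the old column breaks immediately.
-- EXEC sp_rename 'dbo.Invoice.Total', 'GrandTotal', 'COLUMN';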
Thought I'd throw this out there, but why not let every developer host their own instance of SQL Server Developer Edition on their desktop and then have a shared server for each of the other environments (development, QA, and prod)? I think even the basic MSDN subscription that comes with Visual Studio Pro (if you opt for it) includes a license for SQL Server Developer Edition.
The developer can work on their desktop without impacting the others and then you can have them move the code to the next shared environment as you see fit (at will, with daily/weekly builds, etc.).
EDIT:
I should add that the desktop instance allows developers to do things that DBAs often restrict in shared environments. This includes database creation, backup/restore, Profiler, etc. These things are not essential, but they allow the developer to become much more productive while reducing the demands they make on your DBAs.
The shared environment is completely necessary for testing - I would not recommend going straight from desktop to production. But you gain so much by allowing the developers to have 100% control over a given database environment (including isolation from others) at a relatively minor cost.
It depends on your development, testing, and maintenance cycles, and also on the size and location of the development team (and, of course, the organization). If you support several versions of the database, you might need even more environments.
In the real world I have found the following approach rather satisfying:
a single central database/application for testing purposes, which periodically gets the changes from the various developers merged into it
local copies for development (so you are free to drop and reload the whole database)
upgrade scripts are maintained for any changes to schema, auxiliary and sample data sets
Here are some further points:
If two developers (or two teams) are working on changes that can affect each other, then they should complete their tasks independently and then integrate/merge and test. For this it is much better to have separate development environments (unless they have to work together, in which case I consider them to be part of the same team; they can still work on their own copies of the database and share it if necessary).
If they are working on changes that do not influence each other, they can work on the main server, or on their own local copies of the database.
So, developing on a local copy has all the benefits with no risk in the general case (when you support multiple versions of the system and maintain upgrade scripts anyway).
Still, it is great if you can share test cases, so the ability to dump/restore the database easily and quickly is a big plus.
EDIT:
All of the above assumes that having a copy of the whole system on the local machine for testing purposes is feasible (size, performance, licenses, etc.).
I would opt for solution #1 : One common database for all the developers.
Pros
Less expensive for the infrastructure;
Only one dump is required when it's time to refresh the development database;
Everyone develops with the same data, so it closely represents the production environment;
Cons
If one developer performs a bad operation, this could impact a larger number of developers.
As for solution #2: One independent database for each of the developers;
Pros
This could be useful for new features developments, when development requires isolation;
Cons
More expensive for the company (infrastructure, licences...);
Multiplication of problems caused by eagerly isolated development environments (works in the developer's environment, but not when integrated);
Multiplication of dumps, by the DBAs, of the same copy from the production environment.
Considering the above, I would recommend, depending on your company size:
One database for development;
One database for testing the integration;
One database for acceptance tests;
One for new feature development that will perhaps require integration tests.
If your company doesn't require integration tests, then go with acceptance tests; this step is crucial before going to production.
One per developer plus a continuous integration and build server to run unit and integration tests. That gives you the best of both worlds.
Having all developers modify a single dev database quickly becomes less productive once the amount of database change reaches a certain level, because it forces a developer to deploy changes to the shared database before he is ready to check in, which means other parts of the code line may break unnecessarily.
Simple answer:
Have one development database, and if the developers want their own, they can just run their own instance on their own machines. Just be sure to test/publish on the shared one.
We do both:
We use code generation where I work, and our database is generated as well. So we have an instance on each developer's box where the database is generated. Then we use the generated scripts to apply the changes to a central test database. If that goes well, we apply the changes to the production database during a release.
What's nice about this approach is that when our "source of truth" is checked into source control, all the database changes are automatically distributed to the other developers when they rebase and regenerate. It works well for us.
The best way is a single database on the Test/QA server and one database (probably on the developer's local computer) for each developer (so 10 developers work with 10 + 1 databases).
This is the same approach as for general development: each developer has their own copy of the source code on their local machine.
Also, the multiple-database approach simplifies keeping the database schema in version control systems. We keep our database creation scripts in SVN.
We are using the approach, described here:
http://www.sqlaccessories.com/Howto/Version_Control.aspx
You might also want to look at Refactoring Databases. Aside from discussing database changes, the author includes discussions on going from development to production in a way that reduces risk.
Why on earth would you want a separate database for all developers?
Have one common database for all; that way the table structure is consistent and the SQL statements are as well.
The biggest problems with developers having their own databases are:
First, it is unlikely to be the size of the real production database (if you take all the databases we need to work with here, they would take up several hundred gigabytes of space; I don't have that available on my machine). This causes bad code to be written that will never work on a large database for performance reasons. SQL code should never be written against a data set significantly smaller than the one on prod.
Second, developers who use their own database create problems when they spend a long time developing something and then find out, only after they merge with a real database, that it affects something else. You find this stuff much faster when you share the environment, so in the end there is less wasted development time.
Third, developers working on related things need to know about the changes you are making; it will affect their changes.
When you know you are going to affect others, I think you tend to be more careful about what you do, which is a plus in my book.
Now, the shared database server should have what we call a scratch database: a place where people can create and test table changes. If they are doing something that might need to drop and recreate a table (which should be a rare case!), they can test the process first by copying the table to the scratch database and running their process there, and then change the real database once they are sure it works. We also often copy a backup of a table to the scratch database before testing a particular change, so we can easily recreate the old data if it goes bad.
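A sketch of that scratch workflow with placeholder database and table names: copy the table into the scratch database, test the risky change there, and keep the copy around so the old data is easy to restore:

-- Copy the data (and basic column structure) into the scratch database first.
SELECT *
INTO   Scratch.dbo.Customer_20240115
FROM   AppDb.dbo.Customer;

-- ...run the drop/recreate or risky UPDATE against Scratch.dbo.Customer_20240115...

-- If the change against the real table goes wrong, the old data is still there:
-- SELECT * FROM Scratch.dbo.Customer_20240115;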
I see no advantages at all to using individual databases.
