I have an ASP.NET project. Naturally, through different releases and development branches, the db schema changes.
What are some ways to cleanly handle the schema changes in a friendly way so that I can easily switch between development branches?
I use SQL Server 2005, but general techniques should apply as well.
One good way to keep track of schema changes across multiple branches of a development project would be to follow a database refactoring process. Among other benefits, this sort of process incorporates the use of delta and migration scripts to apply schema changes to each environment (or branch in your case). The setup could look something like this:
main
    src    <-- ASP.NET project source
    db     <-- database create scripts
    delta  <-- database change scripts (SQL delta files)
branch
    src
    db     <-- usually has the same contents as the copy in the main branch
    delta  <-- only the changes necessary for this branch
Every time you need to change the database schema for a particular branch, you create a SQL delta script that is used to apply the change. To keep the scripts in sequence, I would suggest including the creation date and time in each file name. For example:
201102231435_addcolumn.sql
201102231447_addconstraint.sql
201103010845_anotherchange.sql
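To make this concrete, here is what the contents of one of those delta files might look like; it's just plain SQL, and the table and column names here are hypothetical:

-- 201102231435_addcolumn.sql
-- Adds a nullable Email column to a hypothetical Customer table.
ALTER TABLE dbo.Customer
    ADD Email NVARCHAR(256) NULL;
GO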
Add the delta files to source control in the branch where the schema change needs to be made. Each branch then contains exactly what is necessary to bring its corresponding database up to date. Some details might need to be tweaked for your situation, depending on things like your branching scheme and whether or not your database is preserved during your release process (as opposed to re-created).
Finally, to make this process easier to manage, I would recommend a tool. My suggestion is to take a look at DBDeploy / DBDeploy.NET; I've been happily using it for years on all my projects.
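For background on how tools like this work: they record each applied script in a changelog table so that no script is ever run twice. A simplified sketch of the idea (names are illustrative only, not DBDeploy's actual schema):

-- Hypothetical changelog table tracking which delta scripts have been applied.
CREATE TABLE dbo.ChangeLog (
    ScriptName NVARCHAR(255) NOT NULL PRIMARY KEY,
    AppliedAt  DATETIME      NOT NULL DEFAULT GETDATE()
);

-- The runner skips any script already recorded here:
SELECT 1 FROM dbo.ChangeLog WHERE ScriptName = '201102231435_addcolumn.sql';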
We put our schema change scripts in source control, in the same place as the rest of the code being deployed for that version.
We have a production system with a large DB (several hundred tables) and would like to begin using Flyway to manage the DDL changes that occur through the dev cycle. However, the organization is set up in such a way that production DB changes sometimes happen outside of a data migration tool (mostly just data changes, but possibly DDL). While this is obviously an organizational challenge, does this fact alone cripple a tool like Flyway? Or is there a workflow where Flyway could rebuild its view of the schema on demand so that any out-of-band DB change like this is pulled in?
We'd love to use Flyway, but would need to integrate it incrementally until all teams using the system are trained/bought in.
When introducing Flyway to a DB with existing data, you will need to baseline it so that Flyway can integrate with your existing schema; see the baseline command in the Flyway documentation.
For changes made after that, Flyway will only track and version changes made through its own migration scripts, not changes made externally to it. That doesn't mean you cannot use the two together, but you will need to stay aware of your database structure to avoid conflicts between your Flyway migrations and the external changes.
Transactional data changes made to production shouldn't impact Flyway as these won't be versioned.
If you're referring to static data (e.g. lookup data) that you'd like Flyway to manage, this isn't detected by Flyway (at least not today). If you discover that you have drift, you'll need to add the changes as a new migration script using idempotent syntax, to ensure that the next time it runs against production it doesn't try to make the same changes again.
For out-of-band schema changes, the Enterprise edition of Flyway has a drift check, so at least you'd be made aware of them. However, as with the data changes described above, you'll need to manually add these schema changes as an idempotent migration script.
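For example, an idempotent migration script guards each change so that it is safe to re-run against a database where the change may already exist. A minimal sketch, assuming SQL Server and hypothetical object names:

-- Hypothetical idempotent migration (SQL Server syntax; adjust per RDBMS).
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.Orders') AND name = 'Status'
)
BEGIN
    ALTER TABLE dbo.Orders ADD Status INT NOT NULL DEFAULT 0;
END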
We have not used Flyway from the beginning of our project; we are at an advanced stage of development, and an expert review has suggested using Flyway in our project.
The problem is that we have moved part of our services (microservices) into another testing environment as well.
What is the best way to properly implement Flyway? The requirements are:
In the Development environment, the existing schema should not be altered, but all new scripts should be managed by Flyway.
In the Testing environment, the existing schema should likewise be left untouched, but whatever is missing there should be created automatically by Flyway when we migrate the project from Dev to Test.
When we migrate to a totally new environment (UAT, Production, etc.), the entire schema should be created automatically by Flyway.
From the documentation, what I understood is:
Take a backup of the development schema (both DDL and DML) as a SQL script file, with a name like V1_0_1__initial.sql.
Clean the development database using "flyway clean".
Baseline the development database with "flyway baseline -baselineVersion=1.0.0".
Now execute "flyway migrate", which will apply the SQL script file V1_0_1__initial.sql.
Any new scripts should be written with higher version numbers (like V2_0_1__account_table.sql)
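For instance, a script like V2_0_1__account_table.sql might contain something along these lines (hypothetical table, SQL Server syntax assumed):

-- V2_0_1__account_table.sql (hypothetical contents)
CREATE TABLE account (
    id         INT IDENTITY(1,1) PRIMARY KEY,
    name       NVARCHAR(100) NOT NULL,
    created_at DATETIME      NOT NULL DEFAULT GETDATE()
);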
Is this the correct way or is there any better way to do this?
The problem is that I have a test database with a different set of data (the data in Dev and Test are different, and I would like to keep the data as it is in both environments). If so, is it good practice to separate the DDL and DML into different script files when we take them from the Dev environment, and apply them separately in each environment? The DML can be added manually as required, but I'm a bit confused about whether I'm doing the right thing.
Thanks in advance.
So, there are actually two questions here: data management and Flyway management.
In terms of data management, yes, that should be a separate thing. Data grows and grows, and trying to manage data beyond simple lookup tables from source control quickly becomes very problematic. Not to mention that you want different data in different environments, which also makes automating deployments much more difficult (branching would be your friend if you insist on going this route: one branch for each data set, deployed appropriately).
You can implement Flyway on an existing project, yes. The key is establishing the baseline; you don't have to do all the steps you outlined above. Say you have an existing database: you need the script that defines that database, and that single script should include all the appropriate DDL (and, if you want, DML). Name it following the Flyway standards, something like V1.0__Baseline.sql.
With that in place, all you must do is run:
flyway baseline
That will establish your existing code base as the starting point. From there, you just create scripts following the naming standard (V1.1xxx, V2.0xxx, V53000.1xxx) and run
flyway migrate
to deploy the appropriate changes.
The only caveat is that, as the documentation states, you must ensure that all your databases match this V1.0 that you're creating and marking as the baseline. Any deviation will cause errors as you introduce new changes and migrate them into place. As long as you have matching baseline points, you should be able to proceed with different data in different environments without issues.
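Once baselined, you can sanity-check what Flyway has recorded by querying its history table; note that the table is named flyway_schema_history in recent Flyway versions, while older versions used schema_version:

-- Inspect what Flyway has recorded so far.
SELECT installed_rank, version, description, type, success
FROM flyway_schema_history
ORDER BY installed_rank;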
This is my how-to guide on integrating Flyway with an existing production DB: https://delicious-snipe-938.notion.site/How-to-integrate-Flyway-with-existing-MySQL-DB-in-Prod-PostgreSQL-is-similar-1eabafa8a0e844e88205c2f32513bbbe.
I have an ASP.NET project under Git, where we follow the convention of using a branch per feature. We just started using SQL Server Data Tools to manage schema changes (we're quite new to it, so I suspect it may have features that get me to what I need).
I am looking for some strategies that have worked for other teams that manage switching between branches that have different DB schemas and then successfully merging branches together. Ideally, after merging all the features, I would have implicitly created a change script(s) to deploy for the release to production.
Note: I am using SQL Server 2008 R2.
There are multiple parts to this strategy. The first is how to store the databases for the different branches; what has worked well for my teams is a separate SQL Server instance per branch, rather than naming individual databases with branch-specific prefixes or suffixes (e.g., MyDatabase_FeatureBranchX), which quickly gets out of hand. This lets the corresponding database(s) in each branch keep the same names (for clarity) while still physically and logically isolating each branch's SQL resources (data files, access permissions, etc.).
As for the second, more interesting aspect (which I think is the main intent of your question), you might consider a code-based "migrations" approach, e.g., FluentMigrator or the like. Provided you have a standard baseline schema from which each branch was initially created, you create the appropriate migrations in code as part of feature development in each branch (and apply them to that branch's SQL instance). When it comes time to merge the branch into trunk, you also merge, and then apply, that branch's migrations.
At best, this means that you could simply run the migration tool against your trunk instance after the merge, in order to apply all the branch's migrations, since tools like this automatically keep track of which migrations have been applied (via a custom database table) and do not reapply them. Provided that you're also doing periodic merges of your trunk code (including its migrations) into your feature branch throughout its development, and you're applying those migrations, you would also be ensuring that your feature branch's schema is being kept up to date, which minimizes the nasty surprises at merge time.
When it comes time to deploy your trunk to production, these same migrations would be applied once again. FluentMigrator offers various runners: a console application, NAnt, MSBuild, and Rake.
I would highly recommend using a timestamp-based (e.g., 201210241033) migration ID strategy, rather than simple sequential integers (1, 2, ...), to minimize the likelihood of collisions and changes being applied out of the intended sequence.
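To illustrate the collision problem with hypothetical migration names: two developers on separate branches can easily both claim the next sequential number, while timestamps make that vanishingly unlikely:

Migration 8             AddCustomerEmail  <-- branch A
Migration 8             AddOrderIndex     <-- branch B: collides at merge time
Migration 201210241033  AddCustomerEmail  <-- timestamp ID: effectively unique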
Having an argument with my team. We are developing an application using SQLite, and some want to add the database file to the repo (Git) and some don't. Previously, with RDBMS systems, there was no perceived benefit to using VCS on the DB. However, SQLite is a self-contained file with no external dependencies, so I assume that, even though it is binary, a commit of the project code plus the SQLite file will give an accurate snapshot of the state of play at that point.
I also assume that a branch and merge would work as well.
Has anyone actually done this, and if so, does it work?
You'd get more benefit from Git's versioning facilities if you stored a dump of the SQLite database (i.e. the commands required to create it) rather than the database file itself. That way you could look at the history of the dump file and see tables or data being added, etc.
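For example, a dump produced with the sqlite3 command-line tool (sqlite3 app.db .dump, with a hypothetical database name) is plain SQL, which diffs and merges far better than the binary file:

-- app.sql, produced by: sqlite3 app.db .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
INSERT INTO users VALUES(1,'alice');
COMMIT;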
Generally speaking, it's preferable to include the full set of dependencies in a VCS repository; it makes your life a whole lot simpler.
If you're after versioning DB schema, check out Wizardby.
The scenario is this: I have a SQL Server database online with which I am demoing an application. During development, I have added extra fields, modified field types, changed keys, and added some new tables locally.
What's the best way for me to update the online database with the new structure and not lose the data? The database is a SQL Server 2005 one.
Download a trial of Red Gate SQL Compare, compare your two servers, and you are done. If you do this often, it is well worth the $400, or get one of their bundles for a better bang for the buck.
And I do not work for Red Gate, just a happy customer!
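For what it's worth, the output of such a comparison is a synchronization script that you can review before running; conceptually it's just a series of CREATE/ALTER statements along these lines (object names here are hypothetical):

-- Sketch of a generated synchronization script (hypothetical objects).
ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
ALTER TABLE dbo.Customer ALTER COLUMN Phone NVARCHAR(32) NULL;
CREATE TABLE dbo.AuditLog (
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    Message  NVARCHAR(MAX) NOT NULL,
    LoggedAt DATETIME NOT NULL DEFAULT GETDATE()
);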
Write update scripts to modify your live database structure to the new structure, as well as inserting any data which is required.
You may find it necessary to use temporary tables to do this.
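For example, changing a column's type while preserving its data could stash the rows in a temp table while the column is rebuilt (a sketch with a hypothetical Orders table; wrap it in a transaction when you run it for real):

-- 1. Stash the existing data.
SELECT Id, Total INTO #OrderTotals FROM dbo.Orders;

-- 2. Rebuild the column with the new type.
ALTER TABLE dbo.Orders DROP COLUMN Total;
ALTER TABLE dbo.Orders ADD Total DECIMAL(18,2) NULL;

-- 3. Put the data back.
UPDATE o SET o.Total = t.Total
FROM dbo.Orders o
JOIN #OrderTotals t ON o.Id = t.Id;

DROP TABLE #OrderTotals;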
It's probably best to test this process in a test environment before running the scripts against the live one.
Depending on what exactly you've done, you may be able to get away with ALTER statements, though from the sounds of it (removing keys and whatnot) you're doing some heavy lifting that may make that a less-than-ideal solution. You should probably look into creating a maintenance plan or, better yet, a SQL Server Integration Services project in Visual Studio; you should be able to migrate the data in the existing database to a new one using those tools.
This probably isn't of huge help retrospectively, but I always script all structural DB changes to my development database. Then, using a version number to determine the current version of the DB, I can run the required scripts on the live DB, bringing it back in line at the same time as the new code is uploaded.
This also works for content changes: if a change to the underlying structure affects the content stored, you can write scripts to migrate the data accordingly.
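A minimal sketch of the version-number idea (the table name and version values here are illustrative):

-- Illustrative version table; each structural script ends by bumping it.
CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);
INSERT INTO dbo.SchemaVersion (Version) VALUES (1);

-- At deploy time, read the current version to decide which scripts still need to run:
SELECT MAX(Version) AS CurrentVersion FROM dbo.SchemaVersion;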
1. Make a copy of the existing database to copy from.
2. Make another copy and alter it to your new schema. Save the DDL for reuse.
3. Write queries that copy data from #1 to #2. Save the queries for reuse.
4. Check the results.
5. Repeat until done.
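For step 3, the copy queries are typically per-table INSERT ... SELECT statements with whatever transformations the new schema needs. A hypothetical example, assuming the two copies live in databases named OldDb and NewDb:

-- Copy from the old-schema copy (#1) into the new-schema copy (#2),
-- preserving identity values; all names here are hypothetical.
SET IDENTITY_INSERT NewDb.dbo.Customer ON;

INSERT INTO NewDb.dbo.Customer (Id, Name, Email)
SELECT Id, Name, NULL            -- new Email column starts out NULL
FROM   OldDb.dbo.Customer;

SET IDENTITY_INSERT NewDb.dbo.Customer OFF;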