How to keep a table in the database after removing it from code - sql-server-data-tools

I have a database project and want to keep a table as a backup on the production database but it shouldn't be part of the code anymore.
Even if I rename the table before generating the deployment script, the rename is detected (via a search for named constraints, I guess) and the renamed table is dropped.
Any ideas on that?

It's a bit of a workaround, but if the goal is to prevent this table from being created on new deployments (where it doesn't already exist) while keeping it on deployments where it already exists, then you could keep it in the code and add a post-deployment script that drops it if it doesn't contain any data.
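A minimal sketch of such a post-deployment script, assuming the backup table is called dbo.MyBackupTable (a placeholder name):

-- Post-deployment script: drop the backup table only when it exists and is empty
IF OBJECT_ID(N'dbo.MyBackupTable', N'U') IS NOT NULL
   AND NOT EXISTS (SELECT 1 FROM dbo.MyBackupTable)
BEGIN
    DROP TABLE dbo.MyBackupTable;
END

On production the table holds data, so the condition never fires; on a fresh environment the freshly created, empty copy is removed again.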
Or you could write your own "plugin" for the database deployment: see Customize Database Build and Deployment by Using Build and Deployment Contributors.

Related

EntityFramework - How do I use an existing database and apply migrations while still supporting automatic upgrade?

I have the following.
An Entity Framework model that maps against a SQL Server database. This has production data, but there is no __MigrationHistory table. I need to keep the data.
A new updated model including new tables, columns etc.
I would like to add migrations so that model changes can be easily managed. I would like to support both creating all the tables from scratch via MigrateDatabaseToLatestVersion and applying the incremental (migration #2) patches to the current data structure.
I have followed Microsoft's tutorial "Code First Migrations with an existing database", but that leaves me with the problem that the initial -IgnoreChanges migration means I can no longer create the database from scratch. I need to be able both to recreate it and to add migrations to the existing database that has no migration history. This is because -IgnoreChanges on the initial migration gives me an empty Up(), so the second migration script will only contain code for the patched tables/columns. If I don't ignore changes, then the framework will try to create tables that already exist.
I was considering not ignoring changes on the initial migration and then doing some kind of "CREATE TABLE IF NOT EXISTS" for all existing tables, but that seems impossible.
I bet I'm missing something right out in the open. What is it?
Not tested, but it should work:
use the old model to create a __MigrationHistory table
copy the __MigrationHistory rows into the production database (SELECT/INSERT or any other ETL method; see the SQL sketch below)
update the model to the new one
create a new migration
push the migration to the database.
DO ALL THIS FIRST IN A TEST ENVIRONMENT!
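For the copy step, a minimal T-SQL sketch, assuming both databases are on the same server and that OldModelDb and ProductionDb are placeholder names (the column list shown matches EF 6; older versions differ):

-- Copy the migration history generated against the old model into production,
-- so EF treats the baseline migration as already applied there.
INSERT INTO ProductionDb.dbo.__MigrationHistory (MigrationId, ContextKey, Model, ProductVersion)
SELECT MigrationId, ContextKey, Model, ProductVersion
FROM OldModelDb.dbo.__MigrationHistory;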

Starting Doctrine migrations in the middle of a project

I am working with Symfony and Doctrine. In the middle of the project I need to introduce Doctrine migrations, because the DB changes have become too many to handle by hand and I need a better way to manage them.
What is the best way to start with migrations when there is already data on prod that needs to stay there, untouched?
My plan will be:
On my test system
drop all tables
run php bin/console doctrine:migrations:diff
The newly generated migration file that I get holds all the table structures of my current state
go to the live "migration_versions" table and add the ID of this migration, so it will be skipped by the first run of the migrations
run the migration php bin/console doctrine:migrations:migrate
In this way, I have all the structures from my entities, but I will not destroy my live data.
What do you think?
If there is already some data on production, then your best bet is to do a make:migration or doctrine:migrations:diff from the current schema state. That will generate only the SQL needed to apply the new changes, and nothing else. Your first migration version will contain only the SQL to update from the current state, not from the beginning of time.
Also, once you have adopted this, you have to do every database modification through migrations. For example, if you need to add a non-nullable field, you usually add the new field as nullable, fill every row of that field with a default or calculated value, and then alter the table to make the field not nullable (as sketched below). Migrations generate some boilerplate code to make your life easier, but they also require a lot of care from the development team. Always test them first on a database that you can get rid of. You will run into FK constraints and lots of other issues, but basically you have to solve them with SQL.
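As a concrete illustration of the non-nullable column case, the hand-edited migration often boils down to SQL like this (MySQL syntax, made-up table and column; on PostgreSQL the last step would be ALTER TABLE ... ALTER COLUMN ... SET NOT NULL):

-- 1. Add the column as nullable so existing rows don't violate the constraint
ALTER TABLE customer ADD loyalty_points INT DEFAULT NULL;
-- 2. Backfill existing rows with a default or calculated value
UPDATE customer SET loyalty_points = 0 WHERE loyalty_points IS NULL;
-- 3. Only now enforce NOT NULL
ALTER TABLE customer MODIFY loyalty_points INT NOT NULL;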
Old thread here, but what I do in cases like this:
Back up the dev database (structure and data, but with the table CREATE statements guarded so they are only created if they don't already exist; see the sketch after this list)
Drop all tables so the database is empty
Generate migration (since database is empty, the generated migration will constitute all commands necessary to generate your entire schema)
Run migration you just generated to build schema
Import test data from your dump
That puts you right back where you started but with an initial migration that can build your schema from nothing.
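The "protected" CREATE statements in such a dump are just the IF NOT EXISTS form, e.g. (MySQL syntax, made-up table):

-- Existing tables are skipped, so re-importing the dump over the freshly
-- migrated schema does not fail
CREATE TABLE IF NOT EXISTS product (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);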

How can I fix issues with my Flyway Spring Boot project?

So while building a new database using our database migration scripts, written in a Spring Boot Flyway project, we realized we made some mistakes.
Some old scripts need to be changed to ensure that we do not face these issues when we build a new database schema again. The issues are mostly data related: an info table was never populated with entries in the project, and there are scripts that refer to that data, which does not exist because we never included a script to insert it.
How can we correct this project? The only way I can think of is to correct the scripts so that all inserts are replaced by insert-if-not-exists and all CREATE statements by CREATE IF NOT EXISTS,
and then delete all entries in the schema history table and re-run the migrations on all the databases which are using this schema.
I cannot go back and correct my script because then the migration project will fail because of checksum issues.
You are right: if this project and its scripts are already running against existing databases, you cannot modify them because the checksum validation would fail.
Then the cleanest way I can think of would be to add a migration file called "DB-GENERAL-FIXES" or something like that, where you add all the SQL corrections needed to restore the DB to a stable state. For new installations it will be extra work to first build it wrong and then clean it up, but if you are sharing the same code with production right now... it is the best option.
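For that fixes file, the statements need to be idempotent; a sketch of an insert-if-not-exists written so it works on most databases (the app_info table and its values are placeholders):

-- Insert the missing reference row only if it is not already there
INSERT INTO app_info (info_key, info_value)
SELECT v.info_key, v.info_value
FROM (SELECT 'schema_owner' AS info_key, 'billing-team' AS info_value) AS v
WHERE NOT EXISTS (
    SELECT 1 FROM app_info a WHERE a.info_key = v.info_key
);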

Code First and Existing Database with Data

I'm developing a Windows application which uses a SQL Server database. I have different versions of this application and they have different database structures, so I need to migrate the database to the latest version on application start. I want to compare the database structure with the application model, then run the ALTER, CREATE or DROP commands.
Also I want to use the EF Code First ORM; after some searching I've figured out that there are some useful commands and configuration options in Code First. But the problem is that, as far as I know, all of them drop the existing database and create a new one, so the data will be lost, while I need to keep the data.
I used these lines in my application start function:
var migrator = new DbMigrator(new Configuration());
migrator.Update();
But after executing this line I get this exception:
There is already an object named 'SomeTable' in the database.
I know that's correct and that table does exist, but its structure has changed! How can I compare the structures and do the rest?
That's not how migrations work. You need a migration for every version of your database so EF can check the __MigrationHistory table and see if it has been applied. If your initializer is set to MigrateDatabaseToLatestVersion your database won't be recreated on model changes.
You could try to recreate the history: roll back to your oldest database, add a migration, apply the changes for the second-oldest version, create a second migration, and so on.
Another option is to add a migration for where you are now, generate a script (update-database -Script) then comment out the stuff that exists in each deployed database before applying it.
Yet another option would be to use the VS Schema compare utility against each database and your current database to get the changes over. Then apply a baseline migration to each (add-migration Initial -IgnoreChanges).
Now moving forward you can generate a series of migrations and your code should work as expected.
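If it helps to see what EF is comparing against, the history table can be inspected directly (EF 6 column names; the table lives in the target database):

-- Each row is one applied migration; EF compares these ids with the
-- migrations compiled into your assembly to decide what still has to run
SELECT MigrationId, ContextKey, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;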

Symfony2 doctrine2 migration script into table instead of file

I am working with doctrine:migrations:diff in order to prepare database evolutions.
This command creates files in app/DoctrineMigrations.
Those files contain SQL commands to upgrade or downgrade the database structure.
I want to store those SQL commands in the database itself. In fact, I have several database instances, and if the SQL commands are stored in files, it is a big problem.
I have read somewhere that the DoctrineMigrations bundle can create a table called "migration_versions", but I cannot find where I read this...
I cannot really understand what you're trying to do.
Migrations are used when your code needs an altered database structure, for example a new table or a new column. These new requirements for a table or column come from your newly written code, so it's only natural to keep the migrations as code in your repository as well.
How and when would migrations even get to your database? How would you guarantee that migration is executed before the code changes, which use that new structure?
Generally, migrations are used in this way:
You develop your code, add new features, change existing ones. Your code needs changes to the database.
You generate a Doctrine migration class, which contains the SQL statements needed to bring your current database to the required state.
You alter the class, adding any further required SQL statements, for example UPDATE statements to migrate your data, not only the structure.
You execute migration locally.
You test your code with the database changes. If you need more changes, you either add a new migration, or execute the migration down, delete it and regenerate it. Never change the migration class itself, as you'll lose track of what's supposed to be in the database and what's not.
You commit your migration together with code that uses it.
Then comes the deployment part:
For each server: upload the code, clear and warm up the cache, run any other installation scripts. Then run the migrations. And only then switch to the new code.
This way your database is always in-sync with current code in the server that uses that database.
The migration_versions database table is created automatically by Doctrine migrations. It holds only the version numbers of the migration classes; it's used to keep track of which migrations have already been run and which have not.
This way, when you run doctrine:migrations:migrate, all not-yet-run migrations are executed. This allows you to migrate several commits at once, have several migrations in one commit, etc.
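Roughly, the table is nothing more than a list of executed version identifiers; a sketch for older bundle versions (the exact table and column names depend on your Doctrine Migrations version and configuration):

-- Created automatically the first time doctrine:migrations:migrate runs
CREATE TABLE migration_versions (
    version VARCHAR(255) NOT NULL,
    PRIMARY KEY (version)
);

-- Marking a version as already executed by hand (placeholder version number)
INSERT INTO migration_versions (version) VALUES ('20180101120000');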
