I have the following migration structure in my code-first project.
I deleted the previous database and want code first to create it again, but when I run update-database it does not run all the scripts: it skips the first three migrations and then fails because the dependencies those migrations create are not present.
You can see it is only picking up three migrations and leaving out the first three.
This used to work earlier. What might be going wrong? Somewhere it seems to be recorded that the first three have already been run, even though the database itself doesn't exist and is being created for the first time.
Found the reason.
I had changed the namespace of the last three migrations, so it was picking up only those and not the first three.
Changing the namespace of the first three to match fixed it :)
In .NET Core microservices, another team works on open source modules, and I extend their modules in my project. I had already added a column to an entity, and then the same column was added by the open source team. Now a duplicate column error is showing up.
I cannot alter the open source migration files, and my column is already in production.
How can I resolve this issue? Please suggest.
I think we are dealing with multiple smells here.
Each microservice should have its own data realm.
By the sound of it, you extended a table of an object which was generated by the open source service.
But that does not help you; the question is whether you can merge the data which is in this column.
If so, you might get away with a simple copy/update.
What you need to do is (sketched in SQL after these steps):
Create a new column with a different name
Copy your existing data into the new column
Drop your column
Execute your migration
Copy your data back into the column
Test your application very carefully, and check that the logic you implemented for the data you generated still works as intended
Drop your backup column
Depending on the amount of data this will lead to downtime, so plan ahead and have a rollback strategy ready in case something goes wrong.
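A hedged SQL sketch of those steps, using entirely hypothetical names (an orders table where both you and the open source team add a tracking_code column; column types and exact ALTER TABLE syntax depend on your database):

-- 1. Create a backup column with a different name.
ALTER TABLE orders ADD COLUMN tracking_code_backup VARCHAR(64);

-- 2. Copy your existing data into the backup column.
UPDATE orders SET tracking_code_backup = tracking_code;

-- 3. Drop your own column so the open source migration no longer collides.
ALTER TABLE orders DROP COLUMN tracking_code;

-- 4. Execute the open source migration here; it recreates tracking_code.

-- 5. Copy your data back into the recreated column.
UPDATE orders SET tracking_code = tracking_code_backup;

-- 6. Only after careful testing: drop the backup column.
ALTER TABLE orders DROP COLUMN tracking_code_backup;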
Personal opinion on preventing those smells
Every time I needed an open source project in my projects in the past, I wrote a wrapper around it. This has multiple benefits.
For one, if the API of the project changes, you only have to update it in exactly one place, which improves maintainability.
Because it has a wrapper, it automatically gets its own database, and if I need to extend an entity which I get from one of those open source projects, I usually do it via a foreign key to a different table, which then gets linked via a view.
Yes, this costs some performance, but in the end it was worth it every time.
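A minimal sketch of that pattern with hypothetical names: the open source users table stays untouched, your extra fields live in a side table keyed by a foreign key, and a view joins the two back into one object:

-- Side table holding only your own extensions.
CREATE TABLE users_extension (
    user_id      INT PRIMARY KEY REFERENCES users (id),
    loyalty_tier VARCHAR(32)
);

-- View that presents the open source entity plus your extensions together.
CREATE VIEW users_extended AS
    SELECT u.id, u.name, e.loyalty_tier
    FROM users u
    LEFT JOIN users_extension e ON e.user_id = u.id;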
I'm experiencing the following issue:
org.flywaydb.core.api.FlywayException: Validate failed: Detected applied migration not resolved locally: 1.44
This happened when I realized that the data I added in 1.44 is invalid. I don't want to deal with it on the old environments, but on new environments I don't want this data at all; I want the data that I will insert in a new migration (e.g. 1.48).
If I change the old migration, it will fail on old environments due to the checksum;
If I leave it, new environments will have the invalid data from 1.44;
How can I delete it so that I accomplish what I need without getting an error? What's the correct way?
This question is related to: Best practice: How to modify flyway migration script after it has been used
The basic answer is: don't delete once it's been applied
It doesn't matter if during the migration process some intermediary state isn't exactly what you want, as long as the final one (1.48 in your case) is correct.
Now if you really need to delete this migration, ask yourself whether replacing it with an empty file could also do the job. If yes you could then follow the advice I gave here: https://stackoverflow.com/a/35491545/350428
Now if that's still not enough and you really really need to delete this migration, delete the file and patch up the flyway_schema_history table by hand to make it consistent again. This is risk-prone and should be an absolute last resort solution.
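If you do end up there, this is roughly what the manual patch looks like, assuming Flyway's default history table name flyway_schema_history (older Flyway versions used schema_version) and that you have backed the table up first:

-- Inspect what Flyway thinks has been applied.
SELECT installed_rank, version, description, checksum
FROM flyway_schema_history
ORDER BY installed_rank;

-- Remove the row for the deleted migration so validation no longer expects it.
DELETE FROM flyway_schema_history WHERE version = '1.44';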
You can also UNDO your migration: UNDO flyway migration
I'm using code migrations in code first. I modified one of my models (a domain class) and the changes were picked up when I added a migration. So far so good. Now here is the part where I may be doing something wrong:
I made additional changes that make the previous modification irrelevant.
I deleted the migration and tried to re-scaffold, but I got an empty migration. I understand now that the database tracks migration history, so I understand why this was stupid.
My next step was to delete the rows in the database; it also sounds like this is not a good idea. The appropriate way to "step back" is to do Update-Database -Target:{MigrationName}.
Trying both of these, I am still getting an empty migration. I also tried dropping the database and updating to the migration before mine.
My Google/SO fu has failed me. What I think is happening is that VS/Entity Framework is not detecting changes in my model, and I'm not sure what the trigger is. Can someone help me out?
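For context, code first records applied migrations in a history table in the target database. A quick way to see what the database believes has been applied (assuming the EF6 default table dbo.__MigrationHistory; the name can differ between EF versions):

-- Lists the migrations the database considers already applied.
SELECT MigrationId, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;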
I had a similar problem. My solution was to just go forward with a substance-less migration. The only change it makes is adding a record to the __MigrationHistory table, but I am living with that and just moving forward.
What are the practical advantages of Doctrine Migrations over just running a schema update?
Safety?
The orm:schema-tool:update command (doctrine:schema:update in Symfony) warns
This operation should not be executed in a production environment.
but why is this? Sure, it can delete data but so can a migration.
Flexibility?
I thought I could tailor my migrations to add things like column defaults, but this often doesn't work, because Doctrine will notice the discrepancy between the schema and the code on the next diff and stomp over your changes.
When you are using the schema-tool, no history of database modifications is kept, and in a production/staging environment that is a big downside.
Let's assume you have a complicated database structure in a live project, and in the next changeset you have to alter the database somehow. For example, your users' contact phone numbers need to be stored in a different format: not a VARCHAR, but three SMALLINT columns for the country code, area code and phone number.
Well, it's not so hard to figure out a query that would fetch the current data, separate it into three values and insert them back. That's where migrations come into play: you can create your new fields, then do the transforms, and finally drop the field that was holding the data before.
And even more! You can also describe the backwards process (the down migration) for when you need to undo the changes introduced in your migration. Let's assume that someone somewhere relied heavily on the format of the VARCHAR field, and now that you've changed the structure, their piece of code is not working as expected. So you run migration:down, and everything gets reverted. In this specific case you'd just bring back the old VARCHAR column, concatenate the values back into it, and then drop the new fields.
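As a hedged illustration, the statements inside such an up/down pair might look roughly like this (table and column names are hypothetical, the subscriber number is kept as INT rather than SMALLINT to avoid overflow, and the exact string-splitting functions depend on your database dialect):

-- Up: add the new columns, transform the data, drop the old field.
ALTER TABLE users ADD COLUMN phone_country_code SMALLINT;
ALTER TABLE users ADD COLUMN phone_area_code SMALLINT;
ALTER TABLE users ADD COLUMN phone_number INT;

UPDATE users
SET phone_country_code = CAST(SUBSTRING(contact_phone, 1, 2) AS SMALLINT),
    phone_area_code    = CAST(SUBSTRING(contact_phone, 3, 3) AS SMALLINT),
    phone_number       = CAST(SUBSTRING(contact_phone, 6, 7) AS INT);

ALTER TABLE users DROP COLUMN contact_phone;

-- Down: bring the VARCHAR back, concatenate the values, drop the new fields.
ALTER TABLE users ADD COLUMN contact_phone VARCHAR(20);

UPDATE users
SET contact_phone = CONCAT(phone_country_code, phone_area_code, phone_number);

ALTER TABLE users DROP COLUMN phone_country_code;
ALTER TABLE users DROP COLUMN phone_area_code;
ALTER TABLE users DROP COLUMN phone_number;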
Doctrine's migration tool basically does most of the work for you. When you diff your schema, it generates all the necessary ups and downs, so the only thing you'll have to do is handle the data that could be damaged when the migration is applied.
Also, migrations let the other developers on your team know when it's time to update their schema. With just the schema-tool, your teammates would have to run doctrine:schema:update every single time they pull, because they wouldn't know whether the schema has actually changed.
When using migrations, you can always see that there are new files in the migrations folder, which means you need to update your schema.
I think that you indeed nailed it with safety. With migrations you can go back to an earlier state of a table (just like you can in Git version control). With the schema update command you can only update the tables; there is no log kept for going back in case of a failure with data already saved in those tables. I don't know exactly, but doesn't a migration also save the data of the corresponding table that's being updated? That would be essential in my opinion; otherwise there is no big reason to use them.
So yes, I personally think that the main reason for using migrations in a production environment is safety, and maybe a bit of flexibility. Safety would be the winner here, I think :)
Hope this helps.
Edit: here is another answer with references to the Symfony docs: Is it safe to use doctrine2 migrations in production environment with symfony2 and php
You also can't perform large updates with a plain Doctrine migration. For example, try updating an index on a database with 30 million users: it will take a lot of time, during which your app will not be accessible.
My project contains a lot of objects like views and stored procedures which are changed quite frequently. Right now I have to create a new SQL script on every update, containing the complete source code of the changed objects, even though I've actually changed only a few lines. This leads to massive code duplication, and I also find it difficult to review these changes.
I'd like to have only one current version of the SQL script for each object like a view or procedure, and to recreate these objects every time I redeploy the database. As a result I could change the existing source file (like in Java or C programming) instead of creating a new update script every time I need to alter a view or procedure.
Is there a way to execute some scripts every time I migrate the database with Flyway?
I'm not sure why this got so many downvotes; it's a perfectly understandable and valid question. Perhaps it's because it closely resembles this open question:
Migrating Stored Procedures with Flyway
We are actually starting to push against this issue now. We've been using Flyway for development and testing (and love it). But we've come to a point where we're starting to have to use procs/triggers/views (p/t/v's), and the fundamental disconnect between how we did it before and how we must do it with Flyway is starting to be a strain.
Before, for a given database object (let's say it's a procedure), there'd be one source file. And if you needed to change the proc 'n' times, there would be 'n' versions of the same file in your VCS. Diff tools work great, IDEs all understand this, merges detect when two developers working in separate branches make changes to the proc, etc. You know, old school.
But with Flyway, any one proc with 'n' changes is now scattered across 'n' files. Instead of "one object in one file with 'n' versions", you have "one object in 'n' files with one change each". I now need to do a text search in my IDE for any instance of "proc_name" if I want to know the history of changes to the proc. The VCS knows nothing about it. Devs can each make a migration in their own branches that succeeds when deployed on its own, but leaves the proc with a missing update.
I'm not saying any of this to complain about Flyway, and I fully realize it's not a simple area. I'd almost say it's unsolvable (by Flyway).
We're scheming how to handle this problem, and I'd be very interested to know how others have handled it.
Repeatable migrations are supported as of Flyway 4.0.
Just add SQL files starting with "R" and without any version information to your migration folder:
R__Blue_cars.sql
You have to ensure that the script can be migrated repeatedly.
This is usually done with "CREATE OR REPLACE" clauses in your DDL statements.
https://flywaydb.org/documentation/migration/repeatable
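A minimal sketch of what such a repeatable migration file could contain, assuming a hypothetical blue_cars view over a cars table:

-- R__Blue_cars.sql: re-applied whenever its checksum changes.
-- CREATE OR REPLACE keeps the script safe to run repeatedly.
CREATE OR REPLACE VIEW blue_cars AS
    SELECT id, model, license_plate
    FROM cars
    WHERE color = 'blue';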