In Flyway, is there any way to mark certain scripts for inclusion or exclusion? What I am looking for is something similar to Liquibase's 'contexts' feature or DbMaintain's 'qualifiers'. My primary use case is the testing one outlined on the Liquibase site: a 'test' context where only scripts related to test data will run.
Yes. Put those test scripts in a second folder, and selectively configure flyway.locations to include it.
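For example, a minimal sketch of a flyway.conf setup (the folder names here are just placeholders):

# Always applied:
flyway.locations=filesystem:sql/migrations

# For test environments, include the test-data scripts as well:
# flyway.locations=filesystem:sql/migrations,filesystem:sql/testdata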
I currently have a Kotlin-Exposed project that I would like to add Flyway to. The problem I am having is that most documentation and answers online indicate that the best way to add Flyway to an existing schema is to have the first script be a data definition script. That usually works, but since I'm dynamically generating my SQL with an ORM, it doesn't really make sense here. Are there ways around this?
I really just want to use Flyway to add/delete persistent data that I will always need in certain tables. I don't want to insert it at the ORM level, because if the application is run multiple times it can insert the data each time it's run (as opposed to Flyway, which will just migrate the database to the newest constructed state).
I think another way to word this question is: "Can I use Flyway for static data only, and not schema?"
Yes, you can.
Some info:
You are not required to have a first script containing the data definition / "baseline" of the schema. You can simply skip that.
When running Flyway against a non-empty database for the first time, you will still need to run the baseline command. In your case this will simply indicate to Flyway that it can assume the baseline schema is present and that it's safe to run migrations. (Your baseline schema was deployed by the ORM instead of a baseline script -- that's totally fine, Flyway won't check/doesn't care.)
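With the command-line client that would look something like this (the version number and description are just examples):

# Tell Flyway the current schema is the baseline; no baseline script needed.
flyway -baselineVersion=1 -baselineDescription="ORM-generated schema" baseline
# From here on, only migrations with a version above the baseline are applied.
flyway migrate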
You could choose to write the scripts that insert static data in an idempotent way / with a guard clause, so that they don't insert the data twice. That way it would be safe to use them at the ORM level too, if you choose.
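For instance, a guard clause could look like this (table and column names are made up, and exact syntax varies by database):

-- Insert the static row only if it is not already there.
INSERT INTO ref_status (code, label)
SELECT 'ACTIVE', 'Active'
WHERE NOT EXISTS (SELECT 1 FROM ref_status WHERE code = 'ACTIVE');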
We have been using Flyway for years to maintain our DB scripts, and it does a wonderful job.
However, there is one situation where I am not really happy - possibly someone out there has a solution:
In order to reduce the number of scripts required (and also to keep an overview of "where" our procedures are defined), I'd like to implement our functions/procedures in one script. Every time a procedure changes (or a new one is developed), this script shall be updated - repeatable scripts sound perfect for this purpose, but unfortunately they are not.
The drawback is that a new procedure cannot be accessed by non-repeatable scripts: repeatable scripts are executed last, so the procedure does not exist yet when the non-repeatable script runs.
I hoped I could control this by specifying different locations (e.g. loc_first containing the repeatables I want executed first, loc_normal for the standard scripts and the repeatables to be executed last).
Unfortunately the order of locations has no impact on execution order ;-(
What's the proper way to deal with this situation? Right now I need to specify the corresponding procedures in non-repeatable scripts, but that's exactly what I'd like to avoid...
I found a workaround on my own: I'm using Flyway directly with Maven (the same would work if you use the API, of course). Each stage of my Maven setup has its own profile (specifying the URL etc.).
Now I create two profiles for every stage - so I have e.g. dev and devProcs.
The difference between these two Maven profiles is that the "[stage]Procs" profile operates on a different location (where only the repeatable scripts maintaining procedures are kept). Then I need to execute Flyway twice - first with [stage]Procs, then with [stage].
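On the command line the two runs then look roughly like this (profile names as above; a sketch, not the exact POM setup):

mvn flyway:migrate -PdevProcs   # repeatable scripts with the procedures
mvn flyway:migrate -Pdev        # the normal versioned migrations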
To me this looks a bit messy, but at least I can maintain my procedures in a repeatable script this way.
According to the Flyway docs, repeatable migrations ALWAYS execute after versioned migrations.
But I guess you can use Flyway callbacks. It looks like the beforeMigrate.sql callback is exactly what you need.
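A minimal sketch: place a file named beforeMigrate.sql in one of your configured locations and Flyway will execute it before every migrate run, so the procedures exist by the time the versioned scripts need them (PostgreSQL syntax and a placeholder body, just as an illustration):

-- beforeMigrate.sql: executed before each "migrate" operation.
CREATE OR REPLACE FUNCTION util_is_authorized(uid integer)
RETURNS boolean AS $$
  SELECT uid > 0;  -- placeholder logic
$$ LANGUAGE sql;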
I have two environments: one is development and the other is production. Let's say I have a folder in production which holds all my metadata, like ILs, joins, DS, analyses, scripts etc. In development I have the same folder, but with new enhancements done.
Now I want to compare what changes have been made, and from the result I will be able to understand the impact.
So, could you please tell me how to compare the two folders from the development and production environments?
For the requirements posted here, you can create an information link on top of the LIB_ITEMS table to fetch the details of library items from the Spotfire database. A different set of activities is performed at that link, but the approach can be used for your requirements as well.
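A rough sketch of the kind of query such an information link could sit on (column names vary between Spotfire versions, so verify them against your library schema):

-- One row per library item; run in both environments and diff the results.
SELECT TITLE, ITEM_TYPE, MODIFIED
FROM LIB_ITEMS
ORDER BY ITEM_TYPE, TITLE;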
I would like to exclude certain objects, for example all logins & users, from the extract or publish operation of sqlpackage.exe.
This is possible from within Visual Studio, so I hope it is also possible from sqlpackage.exe.
Or is it not possible?
The reason is that I would like to be able to auto-deploy to various environments/servers, where the logins & users are different.
NOTE: Logins & Users is only an example, the question is more general.
It is now. Please update the tools and look at this post.
http://blogs.msdn.com/b/ssdt/archive/2015/02/23/new-advanced-publish-options-to-specify-object-types-to-exclude-or-not-drop.aspx
I solved this problem by creating a DeploymentPlanModifier contributor (following their SchemaBasedFilter sample) that I pass in through an argument (/p:AdditionalDeploymentContributors) to SqlPackage.exe; it looks for any drop operations on security object types.
(Code on Prevent dropping of users when publishing a DACPAC using SqlPackage.exe)
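The resulting call looks roughly like this (the contributor name and file names are placeholders for whatever you built):

sqlpackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetConnectionString:"..." /p:AdditionalDeploymentContributors=MyCompany.SecurityObjectFilter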
Your best bet at this point is to look at doing this in post-deploy scripts and excluding all logins/users from your projects. We have similar issues where each environment has a different set of logins/users and SSDT just does not handle this well out of the box. I've written about the process we use on my blog (borrowed heavily from Jamie Thomson).
http://schottsql.blogspot.com/2013/05/ssdt-setting-different-permissions-per.html
I'll also note that the user "pavelz" left a comment briefly describing the process they use w/ composite projects - main project for objects and sub-projects for permissions. That could work as well.
The only issue we have run into with the post-deploy process is that if you enable publishing to drop permissions/logins not in the project, you could have some downtime until you re-add the permissions at the end. Once set, I highly recommend turning off those options.
Sadly, as of now the sqlpackage.exe utility does not have any option for excluding a specific object. However, it does have options to exclude an entire object type.
All of the same options available inside Visual Studio can be used with SqlPackage.exe. See "Publish Parameters, Properties and SQLCMD Variables" in the documentation for a full list of the options you can pass. They generally look like /p:IgnoreUserSettingsObjects=True and are passed alongside the regular arguments when calling SqlPackage.
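For example, to leave logins and users out of a publish entirely (server and database names are placeholders):

sqlpackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:myserver /TargetDatabaseName:MyDb /p:ExcludeObjectTypes="Logins;Users"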
I have an unusual environment in a project where I have many files, each of which is an independent, standalone script. All of the code required by a script must be in that one file, and I can't reference outside files with includes etc.
There is a common function in all of these files that does authorization; it is the last function in each file. If this function changes at all (as it does now and then), it has to be changed in all the files, and there are plenty of them.
Initially I was thinking of keeping the authorization function in a separate file and running a batch process that produced the final files by combining the auth file with each of the others. However, this is extremely cumbersome when debugging, because the auth function needs to be in the main file for that purpose. So I'd always be testing and debugging in the folder with the combined files and then have to copy changes back to the uncombined files.
Can anyone think of a way to solve this problem? i.e. maintain an identical fragment of code in multiple files.
I'm not sure what you mean by "the auth function needs to be in the main file for this purpose", but a typical Unix solution would be to use make(1) and cpp(1) here.
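A minimal sketch of that idea, using cpp purely as a text includer (the file names are invented):

# Each .src file contains a line like:  #include "auth.inc"
# The rule below expands it to produce the standalone script.
SRCS := $(wildcard *.src)
OUTS := $(SRCS:.src=.script)

all: $(OUTS)

%.script: %.src auth.inc
	cpp -P $< -o $@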
I'm not sure what environment/editor you're using, but one thing you can do is use prebuild events. Create a start tag/end tag that defines the import region, and then in the prebuild event copy the common code between the tags and compile...
//$start-tag-common-auth
..... code here .....
//$end-tag-common-auth
In your prebuild event, just find those tags, replace what is between them with the import code, and then finish compiling.
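A rough sketch of such a prebuild step in Python (the paths and tag strings follow the example above and are assumptions):

import glob

START = "//$start-tag-common-auth"
END = "//$end-tag-common-auth"

# The single authoritative copy of the shared auth code.
auth = open("common_auth.txt").read()

for path in glob.glob("scripts/*.js"):
    text = open(path).read()
    i = text.index(START) + len(START)
    j = text.index(END)
    # Swap whatever sits between the tags for the common code.
    text = text[:i] + "\n" + auth + "\n" + text[j:]
    open(path, "w").write(text)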
VS supports pre-/post-build events which can call external processes (like batch files or scripts), though those do not directly interact with the environment.
Instead of keeping the authentication code in a separate file, designate one of your existing scripts as the primary or master script. Use this one to edit/debug/work on the authentication code. Then add a build/batch process like you are talking about that copies the authentication code from the master script into all of the other scripts.
That way you can still debug and work with the master script at any time, you don't have to worry about one more file, and your build/deploy process keeps everything in sync.
You can use a technique like the one @Priyank Bolia suggested to make it easy to find/replace the required bit of code.
An ugly way I can think of:
Have the original code in all the files, and surround it with markers like:
///To be replaced automatically by the build process with the latest code
String str = "my code copy that can be old";
///Marker end.
This code block can be replaced automatically by the build process, from one common code file.