Flyway: two migration file locations - flyway

I have two locations for migration files:
1. SQL files: src/main/resources/db.migration.
2. Java files: src/main/java/com.xx.yy.db.migration
I use this code:
String location = "classpath:db/migration/dev,com.xx.yy.db.migration";
Flyway flyway = new Flyway();
flyway.setLocations(location);
flyway.setDataSource(dataSource());
flyway.setInitOnMigrate(true);
flyway.migrate();
It doesn't work.
It does work if I use only one of them (it doesn't matter which one).
I tried:
classpath:db/migration/dev,classpath:com.xx.yy.db.migration --> does not work.
classpath:db/migration/dev --> works
classpath:com.xx.yy.db.migration --> works
What am I doing wrong?
Regards, Id

Never mind.
I fixed it by giving it an array of strings instead of one string.
Thanks
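For illustration, a minimal sketch of that fix, assuming the same pre-5.x Flyway API and dataSource() helper as in the question. setLocations takes a String vararg, so each location goes in as its own argument instead of one comma-joined string:
import org.flywaydb.core.Flyway;

// Pass the SQL location and the Java-migration package as two separate arguments.
Flyway flyway = new Flyway();
flyway.setDataSource(dataSource());
flyway.setLocations("classpath:db/migration/dev", "com.xx.yy.db.migration");
flyway.setInitOnMigrate(true);
flyway.migrate();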

You should put the filesystem: prefix first, like:
flyway.setLocations( "filesystem:/home/../../db/migration/" );
so that Flyway will recognize your directory.
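For readers on Flyway 5 or newer, a hedged sketch of the same setup through the fluent configuration API (where initOnMigrate has been renamed baselineOnMigrate), again assuming the locations and dataSource() helper from the question:
import org.flywaydb.core.Flyway;

Flyway flyway = Flyway.configure()
        .dataSource(dataSource())
        .locations("classpath:db/migration/dev", "com.xx.yy.db.migration")
        .baselineOnMigrate(true)
        .load();
flyway.migrate();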

Related

How to use `migration.jar` in Corda Enterprise 4 to run tests?

While using Corda Enterprise 4.2, we have managed to create a biz-generated-migration....jar, which seems to be a prerequisite for everything, even when using an H2 database. We were not able to use this .jar file when executing our tests, even when referencing this brand-new .jar file in our test setup, like this:
val BIZ_COMPONENT_VERSION = "1.0.0"
val DEFAULT_MOCK_NETWORK = MockNetwork(
    cordappPackages = listOf(
        "package.subpkg-infra.cd.contract",
        "package.subpkg-infra.contract",
        "package.subpkg-infra.flow",
        "package.subpkg-infra.cd.flow",
        "package.subpkg-infra.cd.pend.flow",
        "package.subpkg-infra.schema",
        "package.subpkg-cordapp:biz-generated-migration:$BIZ_COMPONENT_VERSION",
        "package.subpkg-cordapp:biz-component-base:$BIZ_COMPONENT_VERSION",
        "package.subpkg-cordapp:biz-component-core:$BIZ_COMPONENT_VERSION",
        "package.subpkg-cordapp:biz-component-interact:$BIZ_COMPONENT_VERSION"
    ),
    notarySpecs = listOf(MockNetworkNotarySpec(DUMMY_NOTARY_NAME))
)
Every test fails, complaining that it cannot find the migration for the schema.
How can we use this generated migration .jar file to enable our testing? Or is this approach completely misguided?
The following options have been suggested to fix the issue:
Have you overridden migrationResource = "migration/filename" in your MappedSchema class? (See the sketch after this list.)
Make sure the changelog is proper XML, so that it is not H2-specific.
Also, could you try using network = MockNetwork(MockNetworkParameters(listOf(findCordapp("com.deqode.contracts"), findCordapp("com.deqode.flows")))) instead of cordappPackages?
Can you confirm whether you are using Windows?
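For illustration, here is a minimal Java sketch of what the first suggestion can look like. All class and resource names below are hypothetical, not taken from the question; the point is only that Corda 4 resolves a schema's Liquibase changelog through MappedSchema.getMigrationResource() (migrationResource in Kotlin).
import java.util.Collections;
import net.corda.core.schemas.MappedSchema;

// Hypothetical schema for illustration; a real schema would also list its persistent
// entity classes in mappedTypes instead of an empty list.
public final class BizSchemaV1 extends MappedSchema {
    public BizSchemaV1() {
        super(BizSchemaV1.class, 1, Collections.<Class<?>>emptyList());
    }

    @Override
    public String getMigrationResource() {
        // Expected to resolve to a Liquibase changelog under resources/migration/ in the CorDapp jar.
        return "biz.changelog-master";
    }
}
Keeping that changelog as portable Liquibase XML change sets, rather than raw H2 SQL, is what the second suggestion about "proper XML, not H2-specific" refers to.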

How to pass a database prefix to DDEV?

One of the DDEV sites I manage uses a database that includes a prefix. The default behavior for DDEV is to recreate the settings.ddev.php on every start. But that obviously overwrites anything added, purging any manual addition of the prefix.
Is the assumed solution to stop DDEV from overwriting the file? Or to create another settings file (like settings.local.php) to override what's been overridden? Or am I missing something?
This just seems like something that would exist as a simple variable in the config to generate a more accurate settings.ddev.php file. Thanks!
There are a few straightforward answers:
Don't let ddev fiddle with settings at all. Change the project type to 'php' and ddev won't mess with it.
Make the changes you want to db settings in settings.php after the inclusion of settings.ddev.php. That should work no matter what. And it should work on your prod site as well.
Do the work in settings.local.php, but include it after settings.ddev.php in your settings.php file
Take over settings.ddev.php and do whatever you want with it. This just means deleting the line that contains #ddev-generated in settings.ddev.php. After that, ddev won't muck with it at all.
I decided to use a version of the second suggestion:
// Automatically generated include for settings managed by ddev.
$ddev_settings = dirname(__FILE__) . '/settings.ddev.php';
if (getenv('IS_DDEV_PROJECT') == 'true' && is_readable($ddev_settings)) {
  require $ddev_settings;
  $databases['default']['default']['prefix'] = "drupal_";
}
I just added the $databases line. The rest was already there.

How to add a new column to an existing table in Symfony - orocommerce

I'm working on an OroCommerce project, and it uses vendor/oro/bundles/bundle_name.
One of those bundles has an entity named "oro_customer_user", and I want to add a new column to that table from my own bundle.
I've searched a lot but still have no luck.
Most solutions say I need to make changes in vendor/oro/bundles/bundle_name, which I don't want to do.
Some solutions say I should use DoctrineMigrationsBundle, but I'm not sure about that:
https://symfony.com/doc/master/bundles/DoctrineMigrationsBundle/index.html
Please give advice, thanks :)
Alright, so I've found the solution for OroCommerce.
Just use a migration and everything is going to be fine.
You can check this link:
https://forum.oroinc.com/orocrm/orocrm-programming-questions/topic/add-custom-field-into-orocrm-entity#post-24765
You have to create the migration manually, because I don't know how to generate one from the command line :( .
After the migration is created, you only need to run this command:
php app/console oro:migration:load --show-queries
Now go and check the database; the column is already there.
As for the entity:
I still don't know how to customize it in OroCommerce. I got a lot of errors.

Doctrine migration can't find a non-existent class

I use doc:generate-migrations-diff to generate migration classes located in lib/migrations/. As you might already know, the doc:generate-migrations-... tasks create some files in the tmp directory. I had some problems with them, so I deleted all the Doctrine helper files from the tmp dir.
Now, when I execute doc:generate-migrations-diff, it fails with this message: Couldn't find class ToPrfxProduct2Site. I have a Product2Site class, but there is no ToPrfxProduct2Site.
Any ideas?
OK, I figured this out. Migration generation is based on the existing models, so first look for models that don't exist in your schema. For me these were the Product2Site, Product2SiteTable and BaseProduct2Site models. Just delete those files and everything will be fine.

I'm having problems configuring a filter that replicates specific tables only

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
and got this exception in trepctl status after the master had not installed properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this for --property=replicator.filter.replicate.do,
the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or anything similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator. The missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-service.properties under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to you may have to omit/modify some of the lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know whether this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service using these parameters works for me.
I would like to point out that, as the name says, "--svc-extractor-filters" defines an extractor filter, meaning that the parameters will guide the extraction of data on the master server.
If you intend to use it on the slave service, you should use "--svc-applier-filters" instead.
The parameters
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
are supposed to create the following in the properties file. This is the filter setup:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
I hope this brief description helps!
