Is it possible to prevent the beforeMigrate callback script from running when there are no migrations to run because the schema is already up to date?
Here's the code (executed on application startup):
Flyway flyway = new Flyway();
flyway.setDataSource(url, user, password);
flyway.setLocations(scriptsLocations);
flyway.setPlaceholders(placeHolders);
flyway.setBaselineVersionAsString("7.0");
flyway.setBaselineOnMigrate(true);
flyway.migrate();
According to the log Flyway runs the beforeMigrate callback before deciding the schema is up to date and there are no migrations to run.
INFO: Flyway 4.0.3 by Boxfuse
INFO: Database: jdbc:oracle:thin:... (Oracle 11.2)
INFO: Successfully validated 8 migrations (execution time 00:00.023s)
INFO: Executing SQL callback: beforeMigrate
INFO: Current version of schema "...": 7.0.7
INFO: Schema "..." is up to date. No migration necessary.
I'd like the beforeMigrate callback to run only when migrations are necessary.
I found a simple solution: use info() to determine whether there are pending migrations, and make the call to migrate() conditional on the result:
boolean pending = flyway.info().pending().length > 0;
if (pending) {
    flyway.migrate();
}
I'm trying to set up Flyway for Google Cloud Spanner (beta) using the Flyway Gradle plugin, but it fails with the error below when executing ./gradlew flywayInfo.
> Error occured while executing flywayInfo
No database found to handle jdbc:cloudspanner:/projects/<my-project>/instances/<my-instance>/databases/<my-db>
build.gradle
plugins {
    id 'java'
    id 'org.flywaydb.flyway' version '7.13.0'
}
...
dependencies {
    implementation 'org.flywaydb:flyway-gcp-spanner:7.13.0-beta'
}
flyway {
    url = 'jdbc:cloudspanner:/projects/<my-project>/instances/<my-instance>/databases/<my-db>'
}
The values in the url correspond to my project and instance names.
I've also tried:
using a service account key at the end of the URL
adding the com.google.cloud:google-cloud-spanner-jdbc:2.3.2 JDBC driver dependency (implementation)
I'm behind a proxy but I have set it in my gradle.properties with systemProp.http.proxyHost and systemProp.http.proxyPort (also for https)
Using the Flyway CLI and the API programmatically both work.
It seems like the error comes from the Flyway implementation here. Your issue looks similar to https://github.com/flyway/flyway/issues/3028.
Consider opening a new issue here: https://github.com/flyway/flyway/issues
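Before filing, one thing that may be worth checking (an assumption on my part, not something confirmed in the linked issue): the Flyway Gradle plugin runs with its own classpath, so database support modules declared only in the project's implementation configuration may not be visible to it. A sketch of declaring the Spanner module on the buildscript classpath as well:

```groovy
// Sketch: make the Spanner module visible to the Flyway Gradle plugin itself.
// (Assumption: the plugin resolves database support from the buildscript
// classpath, not from the project's 'implementation' configuration.)
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.flywaydb:flyway-gcp-spanner:7.13.0-beta'
    }
}
```

If this makes flywayInfo find the database, the underlying problem is plugin classpath isolation rather than the URL.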
I am new to JanusGraph and am using Cassandra as the backend database. I have a query that finds all incoming edges to a node. For that I need to set the read consistency to ONE in the JanusGraph configuration. I have tried the following configuration but am not able to get the correct read consistency:
public static JanusGraph create() {
    JanusGraphFactory.Builder config = JanusGraphFactory.build();
    config.set("storage.backend", "cassandrathrift");
    config.set("storage.cassandra.keyspace", "cs_graph");
    config.set("storage.cassandra.read-consistency-level", "ONE");
    config.set("storage.cassandra.write-consistency-level", "ONE");
    config.set("storage.cassandra.frame-size-mb", "128");
    config.set("storage.cassandra.thrift.cpool.max-wait", 360000);
    config.set("storage.hostname", "10.XXX.1.XXX");
    config.set("connectionPool.keepAliveInterval", "360000");
    config.set("storage.cql.only-use-local-consistency-for-system-operations", "true");
    graph = config.open();
    System.out.println("Graph = " + graph);
    traversalSource = graph.traversal();
    System.out.println("traversalSource = " + traversalSource);
    getAllEdges();
    return graph;
}
However, the client still shows the CassandraTransaction at QUORUM consistency.
Here are the logs:
16:40:54.799 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#25e2a451[read=QUORUM,write=QUORUM]
16:40:54.800 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#1698ee84[read=QUORUM,write=QUORUM]
All edges = 100000
16:40:55.754 [main] DEBUG o.j.g.database.StandardJanusGraph - Shutting down graph standardjanusgraph[cassandrathrift:[10.70.1.167]] using shutdown hook Thread[Thread-5,5,main]
16:40:55.755 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#3e5499cc[read=QUORUM,write=QUORUM]
16:40:55.755 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#67ab1c47[read=QUORUM,write=QUORUM]
16:40:56.113 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#6821ea29[read=QUORUM,write=QUORUM]
16:40:56.542 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#338494fa[read=QUORUM,write=QUORUM]
16:40:56.909 [main] INFO o.j.d.c.t.CassandraThriftStoreManager - Closed Thrift connection pooler.
Any suggestions on how to change this to ONE or LOCAL consistency level?
For one, I would switch to connecting over CQL instead of Thrift. Thrift has been deprecated, so it no longer sees improvements or bug fixes. In other words, if it's inherently broken, it won't be fixed. So you're much better off using CQL.
config.set("storage.backend", "cql");
config.set("storage.cql.keyspace", "cs_graph");
config.set("storage.cql.read-consistency-level", "ONE");
config.set("storage.cql.write-consistency-level", "ONE");
Secondly, you need to make sure that you're consistently using the config properties for your storage backend. Unfortunately, with JanusGraph and Cassandra these are easy to mix up...
config.set("storage.cassandra.read-consistency-level","ONE");
config.set("storage.cassandra.write-consistency-level","ONE");
....
config.set("storage.cql.only-use-local-consistency-for-system-operations","true");
In the above example, you've set properties on both the storage.cassandra (Thrift) and storage.cql (CQL) configs.
If that still doesn't work, try adding this setting as well:
log.tx.key-consistent=true
Setting the transaction log to be key-consistent overrides its default QUORUM consistency, if that's what is showing up as QUORUM.
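Putting the CQL advice together, a properties-file equivalent might look like this (a sketch using the hostname and keyspace from the question; verify the key names against your JanusGraph version's configuration reference):

```properties
# Sketch: CQL backend with ONE consistency (values taken from the question)
storage.backend=cql
storage.hostname=10.XXX.1.XXX
storage.cql.keyspace=cs_graph
storage.cql.read-consistency-level=ONE
storage.cql.write-consistency-level=ONE
# optional, if the transaction log is what reports QUORUM
log.tx.key-consistent=true
```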
I have a setup in which migrations from previous scripts were removed.
The flyway configuration specifies that ignoreMissingMigrations is true.
However, Flyway fails with the following error
Validate failed: Detected applied migration not resolved locally: version_x
where version_x is the first version that was removed after baseline.
Why do I get this error although ignoreMissingMigrations is true?
Note: Flyway version: 4.2.0
The problem comes from a special setup that Flyway is unable to handle correctly.
We have no newer applied migration, so Flyway sees this migration as a future migration instead of a missing one. The solution is therefore to set ignoreFutureMigrations to true in addition to ignoreMissingMigrations.
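For completeness, the two flags side by side as they would appear in a flyway.conf file (a sketch; the same pair can also be set programmatically on the Flyway instance):

```properties
# Sketch: both flags are needed because the removed migration is treated
# as a "future" migration (no newer applied migration exists)
flyway.ignoreMissingMigrations=true
flyway.ignoreFutureMigrations=true
```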
We have deployed a Groovy script to the Artifactory home plugin folder.
Using the REST API, we have loaded it successfully.
From the logs we can see that the load is successful:
2017-11-14 10:00:54,815 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.GroovyRunnerImpl:244) - Loading script from 'purgeLibrary.groovy'.
2017-11-14 10:00:55,015 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.e.ExecutePluginImpl:187) - Groovy execution 'purgeLibrary' has been successfully registered.
2017-11-14 10:00:55,023 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.j.JobsPluginImpl:92) - Groovy job 'purgeOutdatedArtifacts' has been successfully scheduled to run.
2017-11-14 10:00:55,024 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.GroovyRunnerImpl:296) - Script 'purgeLibrary' loaded.
Using the REST API again, we have manually executed the script purgeLibrary (again verified through log messages).
The job purgeOutdatedArtifacts and the execution purgeLibrary are both wrappers around the same internal method, but the job has default params.
However, this 'job' never actually executes - again, we can tell because there is nothing in the logs.
The relevant 'hook points' are below:
executions {
    purgeLibrary() { params ->
        def dryRun = params["dryRun"] ? params["dryRun"][0] as boolean : false
        libraryPurge(dryRun)
    }
}
jobs {
    // Finds CI/CD-published artifacts that have reached max daysToLive and purges them.
    // Executes daily at 1am, server time.
    purgeOutdatedArtifacts(cron: "0 0 1 * * ?") {
        libraryPurge(true) // default dryRun flag to true
    }
}
Now: all of this works on our test server, which runs the same version of Artifactory. So my assumption is that some configuration on the production server is missing or not set correctly.
Any idea why the 'job' does not actually execute?
Thanks!
So it turns out the job is actually running, but at a different start time: the cron value that was originally in the script.
I am still researching why the start time was not updated when we reloaded the script.
I am not deleting the question, in case this adds value to someone else.
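If the stale schedule persists, forcing a plugin reload through Artifactory's plugins API may pick up the changed cron value (a sketch; the host, port, and credentials are placeholders, and the endpoint is the standard user-plugins reload call, which may behave differently on your version):

```shell
# Sketch: ask Artifactory to reload user plugins so edited scripts
# (including changed cron expressions) are re-registered.
# Replace host, port, and credentials with your own.
curl -u admin:password -X POST "http://localhost:8081/artifactory/api/plugins/reload"
```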
Can Flyway be used to mix creation and migration scripts so that:
new installations run a schema creation script
existing installations run migration scripts, and never see the creation scripts of subsequent versions?
E.g. given:
db/create/V1/V1__schema.sql
db/create/V2/V2__schema.sql
db/create/V3/V3__schema.sql
db/migration/V1/V1.1__migrateA.sql
db/migration/V2/V2.1__migrateB.sql
db/migration/V2/V2.2__migrateC.sql
An existing V1 installation would run the following to get to V3:
db/migration/V1/V1.1__migrateA.sql
db/migration/V2/V2.1__migrateB.sql
db/migration/V2/V2.2__migrateC.sql
It would never run the following, as these represent schema-only SQL produced by mysqldump:
db/create/V2/V2__schema.sql
db/create/V3/V3__schema.sql
A new V3 installation would run:
db/create/V3/V3__schema.sql
The above conflicts with the approach recommended in "Upgrade scenario when using Flyway", but is required because data is populated independently of the migration.
It looks like it should be possible to use flyway.locations to support this, but installations would always need to include the path to their creation script so that Flyway can see it.
The alternative appears to be to run the creation scripts outside of Flyway and set a baseline, but it would be nice if Flyway could manage everything.
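The flyway.locations idea could be sketched as two per-installation configurations (hypothetical paths matching the layout above; each installation would have to be told which set it belongs to):

```properties
# Sketch: existing installations only ever see the migration scripts
flyway.locations=classpath:db/migration

# Sketch: a new V3 installation also includes its own creation script,
# then baselines at V3 so the earlier migrations are skipped
flyway.locations=classpath:db/create/V3,classpath:db/migration
```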
In the end, I developed a tool to do this.
The tool has the latest schema in:
db/schema/schema.sql
and the migration scripts in:
db/migration/<version>/<version>.<sequence>__<JIRA issue>.sql
e.g.:
db/migration/V1/V1.1__JIRA-15.sql
db/migration/V2/V2.1__JIRA-12.sql
db/migration/V2/V2.2__JIRA-22.sql
db/migration/V3/V3.0__JIRA-34.sql
If the database has no tables, schema.sql is executed, and then Flyway is baselined with the most recent version, as reported by Flyway's
MigrationInfoService.pending() method.
i.e. the last MigrationInfo element returned by pending() determines the version to pass to Flyway.setBaselineVersion() before invoking Flyway.baseline().
e.g:
DbSupport support = DbSupportFactory.createDbSupport(connection, true);
Schema schema = support.getOriginalSchema();
if (schema.allTables().length == 0) {
    Resource resource = new ClassPathResource("db/schema/schema.sql", getClass().getClassLoader());
    SqlScript script = new SqlScript(resource.loadAsString("UTF-8"), support);
    script.execute(support.getJdbcTemplate());
    MigrationInfo[] pending = flyway.info().pending();
    MigrationInfo version = pending.length > 0 ? pending[pending.length - 1] : null;
    if (version != null) {
        flyway.setBaselineVersion(version.getVersion());
        flyway.setBaselineDescription(version.getDescription());
        flyway.baseline();
    }
}
This ensures that none of the migration scripts are invoked for newly created databases, but it does mean that schema.sql must already contain all of the changes.
If the database has tables, but no Flyway information, it is baselined according to the detected schema version.