Flyway baseline and outOfOrder together

I'm wondering, how is it possible to make Flyway baseline and outOfOrder work together?
The outOfOrder flag is meant for working with branches: if migrations appear in the "past" (for example, because they arrive with a branch), they are not marked as Ignored but are executed once they show up.
For example: a database has migrations 1.0, 1.1 and 1.2, and a cherry-picked 2.2 patch is applied to it. Later, 2.0 and 2.1 arrive with the regular release, but since they are < 2.2 you have to use outOfOrder to install them.
The problem comes when a new database is created on the branch that contains 1.0, 1.1, 1.2 and 2.2, and a baseline is added. Baseline tells Flyway to SKIP everything before it. So when 2.0 and 2.1 arrive they are skipped; they are not even tagged as Ignored, they automatically become part of the baseline.
So what I'm thinking is that maybe instead of baseline being an almighty step, it should instead record "fake successes" for the currently known migrations. Is there a better way, or how could such a thing be done?
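One way to approximate the "fake success" idea is to skip `baseline` entirely and instead insert rows for the already-present migrations directly into Flyway's schema history table, so anything not listed (such as 2.0 and 2.1) still shows up as pending and can be installed with outOfOrder. This is only a sketch against the default flyway_schema_history table (older Flyway versions call it schema_version); the file names, descriptions and NULL checksums here are assumptions, and you may need a `flyway repair` or real checksums afterwards for `validate` to pass.

```sql
-- Mark the migrations that already exist in this database as applied,
-- instead of running "flyway baseline". Versions 2.0 and 2.1 are NOT
-- listed, so Flyway still sees them as pending and, with
-- outOfOrder=true, applies them even though 2.2 is already recorded.
INSERT INTO flyway_schema_history
    (installed_rank, version, description, type, script,
     checksum, installed_by, execution_time, success)
VALUES
    (1, '1.0', 'init',        'SQL', 'V1_0__init.sql',        NULL, 'manual', 0, true),
    (2, '1.1', 'users',       'SQL', 'V1_1__users.sql',       NULL, 'manual', 0, true),
    (3, '1.2', 'indexes',     'SQL', 'V1_2__indexes.sql',     NULL, 'manual', 0, true),
    (4, '2.2', 'cherry_pick', 'SQL', 'V2_2__cherry_pick.sql', NULL, 'manual', 0, true);
```

After this, a plain `flyway migrate` with `outOfOrder=true` should pick up 2.0 and 2.1 rather than folding them into a baseline.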


How can I sort my results by property length?

I have these user vertices:
g.addV("user").property(single,"name", "bob")
g.addV("user").property(single,"name", "thomas")
g.addV("user").property(single,"name", "mike")
I'd like to return these sorted by the length of the name property.
bob
mike
thomas
Is this possible with Gremlin on AWS Neptune without storing a separate nameLength property to sort on?
Currently the Gremlin language does not have a step that can return the length of a string. This may be added to Gremlin in a future version, possibly in the 3.6 release. You can of course do it using closures (in-line code), but many hosted TinkerPop graph stores, including Amazon Neptune, do not allow arbitrary code blocks to be run as part of Gremlin queries. For now this will need to be handled application side when using Neptune, or, as you suggest, by using a nameLength property. This is an area where the TinkerPop community recognizes some additional steps are needed and plans to prioritize this work.
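Since the length has to be computed application side on Neptune, here is a minimal Python sketch of that part. Fetching the values is assumed to have already happened (for example via gremlinpython); only the client-side sort is shown.

```python
# Names as returned by a query such as:
#   g.V().hasLabel("user").values("name").toList()
# (fetching them from Neptune is assumed; this is the client-side part only)
names = ["bob", "thomas", "mike"]

# Sort by string length on the application side, since Neptune
# cannot run closures inside the Gremlin query itself.
by_length = sorted(names, key=len)
print(by_length)  # ['bob', 'mike', 'thomas']
```

If two names have the same length, `sorted` keeps their original order, so you can add a secondary key such as `key=lambda n: (len(n), n)` for a fully deterministic ordering.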

Report Builder TeeChart

I'm developing a report for one of our customers, using two linked tables, and grouping it on the report by one of the columns.
In the header of the group I need to insert a DPTeeChart that shows the cash flow for that group.
The grouping works well and gets all the right data, but the DPTeeChart gets the values of the whole pipeline, and prints the same overloaded chart over and over, for each group.
Is there any way I can print it correctly?
Or is there any way I can clear the graphic on each groupBreak?
I asked the Digital Metaphors staff about the problem; they said it's because this operation is not supported in the version I use.
Current Version: 15.04 Build 145

MarkLogic data copy from one forest to multiple forests

I need to copy the MarkLogic DB contents (50 million XML docs) from one DB host to another. We can do this with a backup/restore, but I need to copy the data in two forests (25 million each) to 20 forests (2.5 million each) and distribute it evenly. Can this be done using xqsync or any other utilities?
I'm in the process of doing the same migration this week: 14M documents from two forests on a single host to a cluster with six forests. We have done a couple of trial runs of the migration; we use backup/restore, followed by a forest rename, and then add the new forests to the cluster. We then use CORB to do the re-balance. It took a little fine tuning to optimize the number of threads, and we had to adjust a Linux TCP timeout to make sure the CORB process didn't fail part way through the re-balance. I think we ended up using CORB because of the very old version of ML we are currently running.
If you are lucky enough to be able to run under ML7, then this is all a lot easier, along with much-reduced forest storage needs.
As wst indicates, MarkLogic 7 will do that automatically for you by default for new databases. For databases that you upgrade from earlier versions, you need to enable rebalancing manually from the Admin interface. You can find that setting on the Database Configure tab, near the bottom.
After that, you just add new forests to your database as needed, and redistribution is automatically triggered after a slight delay (based on a throttle level, like the reindexer), also across a cluster. You can follow the rebalancing from the Database Status page in the Admin interface. It may take a while though; it is designed to run in the background with low interference.
The other way around is almost as easy. Go to Forests page under Database, and select 'retired' next to the forest you want to remove. This automatically triggers rebalancing documents away from that forest. Once that is done, you just detach it from the Database.
All data is fully searchable and accessible during all this, though response times can be relatively slow, as caches need to be refreshed as well.
HTH
With ML6 or earlier I would use backup and restore to move the forests, then https://github.com/mblakele/task-rebalancer to rebalance. Afterwards you'll probably want to force a merge, to get rid of the deleted fragments in the original forests.

Fossil SCM: Merge leaves back to trunk

I have been working with Fossil SCM for some time, but I still see something I don't quite get.
In the screenshot you can see that there are two leaves present in the repository, but sadly I can't find a way to merge them back into trunk (it is annoying to have the 'Leaf' mark on all my commits).
I had Leaves before and I normally merged them by doing
fossil update trunk
fossil merge <merged_changeset_id>
But now I just get the message:
fossil: cannot find a common ancestor between the current checkout and ...
Update: This repository is a complete import from a git repository, I'm gonna try to reproduce the exception.
ravenspoint is right: using --baseline BASELINE, especially with the initial empty commit of the branch you are trying to merge into, will link your independent branches into a single graph.
You can also hide the leaves you do not want to see from the timeline through the web ui, or mark them as closed.
Updated, 2017-01-12: this approach stopped working for me at some point. I now get "lack both primary and secondary files" errors when I try it. I suspect this is dependent on the schema, possibly the changes associated with Fossil 1.34.
Have you tried:
--baseline BASELINE Use BASELINE as the "pivot" of the merge instead
of the nearest common ancestor. This allows
a sequence of changes in a branch to be merged
without having to merge the entire branch.
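Putting the help text above together with the original commands, the merge sequence would look roughly like this. The artifact IDs are placeholders you would take from your own timeline; this is a sketch, not a verified recipe for the imported-from-git case:

```
fossil update trunk
fossil merge --baseline <initial-commit-of-leaf> <leaf-branch>
fossil commit -m "merge leaf back into trunk"
```

Using the leaf's initial (empty) commit as the pivot sidesteps the "cannot find a common ancestor" error, since Fossil no longer has to locate a shared ancestor between the two disconnected histories.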

Plone 4 smarter generic setup updates needed

The core of the problem is that GenericSetup profiles are additive. Sometimes products can't be upgraded by simply applying profiles, and the whole chain of profiles needs to be applied in a certain order.
For example, say we have a lot of independent site policies (only one of them applied to the Plone instance): S1, S2, ..., SN. And a couple of products, say A and B. Suppose the metadata dependencies for all of S1-SN, A and B go like this: Sn -> A -> B.
Suppose they deal with registry.xml and override something along the way. (The same can be the case for typeinfo and some other profile steps.) If something changes in product A, which may or may not be overridden in S1, we can't just perform an upgrade step for A, because when we press the button on the S1 site, its own policy overrides will be lost. However, it's a big deal to write upgrade steps for all of S1-SN just because of a change in A.
Is there any smart way to do upgrades, at least for the case depicted above, where the whole problem would be solved by applying the registry update in a certain order, namely B, A, Sn? (Even then, there could be some hard cases.)
Since package A has no idea of S1 (or S2, or whatever site policy happens to be applied), one solution is to make some "superpackage" which could have explicit knowledge about those chains of upgrades. But are there other solutions, apart from always putting the resulting profile into the policy?
(For simplicity, let's forget that some changes could be done through the web.)
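For reference, the dependency chain Sn -> A -> B described above is what each profile declares in its metadata.xml. A minimal sketch for a policy package (the package name product.a is hypothetical):

```xml
<?xml version="1.0"?>
<metadata>
  <version>1</version>
  <dependencies>
    <!-- S1's profile pulls in A; A's own metadata.xml would list B the same way -->
    <dependency>profile-product.a:default</dependency>
  </dependencies>
</metadata>
```

GenericSetup applies the dependencies before the profile itself, which is why a full reapply of S1 restores its overrides on top of A and B, but a lone upgrade of A does not.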
This is actually already in GS, although unfortunately it is not something all products use.
The solution lies in using GS's upgradeStep.
Say you change the profile's metadata from version 1 to 2. In your zcml you register an upgrade step like this:
<genericsetup:upgradeStep
    title="Upgrade myproduct from revision 1 to 2"
    description=""
    source="1"
    destination="2"
    handler="myproduct.one_to_two.upgrade"
    sortkey="1"
    profile="myproduct:default"
    />
In one_to_two.py you would then have
from plone.registry.interfaces import IRegistry
from zope.component import getUtility

def upgrade(context):
    registry = getUtility(IRegistry)
    if 'foo' in registry:
        do_stuff()
Inside there you can also rerun specific import steps, for instance:
context.runImportStepFromProfile('profile-myproduct:default', 'actions')
When the product is upgraded, only what is specified in the upgrade step handler will be applied, which is exactly what you need.
Documentation on all that here.
I decided to go with a custom solution, with helper functions to make common upgrade tasks easier. It doesn't solve all the problems, but it helps with deployment.
