Is this possible in the section backupConfiguration inside icCube.xml?
As an alternative, is it possible to create some sort of scheduler task to delete old backups? Or is there a way to trigger this via the XMLA interface?
You can configure icCube to keep only the last backup (see backupHistory in icCube.xml). Otherwise, you can configure a cron task to clean up the backup directory according to your own requirements.
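For example, a nightly cron entry along these lines would drop backups older than a week (the backup path and the 7-day retention are placeholders to adapt):

```
# Run every night at 02:00; delete backup files older than 7 days.
# /opt/icCube/backup is a hypothetical path; use your configured backup directory.
0 2 * * * find /opt/icCube/backup -type f -mtime +7 -delete
```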
I would like an automatic backup of a schema to a file on every load, and an automatic restore of the last backup when icCube restarts. And of course an automatic cleanup of those files. This way we would have a lot less downtime on a restart.
It looks like icCube supports this with backup and/or offline data, but I can't get it working as described above. Is what I want possible, and how?
You can activate the backup in the schema file (Advanced Properties).
Now every time you load the schema, it will create a backup.
And if you set "Load On Startup" as well, icCube is going to load the last available backup.
There is no automatic cleanup: for that purpose you can use the REST API available in the latest icCube. Otherwise, you can clean up the backup files created in the ~/icCube-data/backup folder yourself.
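If you script the cleanup yourself, a sketch like this keeps only the five most recent backup files (the count is an assumption, and it assumes plain file names without newlines):

```
# Keep the 5 newest files in ~/icCube-data/backup, delete the rest.
cd ~/icCube-data/backup || exit 1
ls -1t | tail -n +6 | while read -r f; do
  rm -- "$f"
done
```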
Hope that helps.
I have two containers in my Meteor app on Galaxy servers. I have some background jobs that I want to run on only a single container, to avoid duplication.
What's the easiest way to achieve this? Is there some procId or the like that I can retrieve at runtime?
If the two servers have their own settings files, you could use a setting to nominate one of the servers as the one that runs the background jobs.
There is a package called node-cron that can be used for setting up regular jobs: https://www.npmjs.com/package/node-cron
Use a Meteor.startup method to look at the settings file; if the current server is the designated one, it can schedule the jobs for itself, as sketched below.
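A minimal sketch of that approach using node-cron; the runBackgroundJobs setting name and the schedule are made up for illustration:

```js
import { Meteor } from 'meteor/meteor';
import cron from 'node-cron';

Meteor.startup(() => {
  // Only the nominated container ships a settings file containing
  // { "runBackgroundJobs": true }. The setting name is hypothetical.
  if (Meteor.settings.runBackgroundJobs) {
    // Standard cron expression: run every 15 minutes.
    cron.schedule('*/15 * * * *', () => {
      // ... the background work goes here
    });
  }
});
```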
Another technique is to work out the ID of each server and keep a database entry containing the ID of the nominated server.
On checking further, https://github.com/percolatestudio/meteor-synced-cron supports multiple servers, which should do what you need :)
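With percolate:synced-cron, a job definition looks roughly like this (the job name and schedule are placeholders); synced-cron records each run in MongoDB, so containers sharing the same database won't run the same job twice:

```js
import { Meteor } from 'meteor/meteor';

SyncedCron.add({
  name: 'Nightly cleanup',  // unique name, used as the lock key
  schedule(parser) {
    // parser is a later.js parse object
    return parser.text('every 24 hours');
  },
  job() {
    // ... the background work goes here
  },
});

Meteor.startup(() => SyncedCron.start());
```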
I was thinking about this more... one solution (which I'm kind of borrowing from meteor-migrations) is to make a simple database collection/entry that holds a startupCodeHasRun flag. Then in a Meteor.startup() block you can check whether the flag has been set; if not, set the flag and run the code. This causes the code to run only once, on only one of your containers that share the same database.
The gotcha is that you would have to reset this flag manually before redeploying, or else the code would never run again on a redeploy.
Not an ideal solution, but it could work. This is the same way the database migrations package mentioned above works in a multi-container environment. And since a migration is a one-time operation, you don't have to worry about redeploys there.
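A sketch of that flag idea; the collection name and runOneTimeJobs are hypothetical. Using a fixed _id makes the insert atomic: if two containers race, the second insert fails with a duplicate-key error, so only one container runs the code:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const StartupFlags = new Mongo.Collection('startupFlags');

Meteor.startup(() => {
  try {
    // Fixed _id: the second container's insert throws a duplicate-key
    // error, so exactly one container gets past this line.
    StartupFlags.insert({ _id: 'startupCodeHasRun', at: new Date() });
    runOneTimeJobs(); // hypothetical function holding the one-time work
  } catch (e) {
    // Flag already set: another container (or a previous deploy) ran it.
  }
});
```

Per the gotcha above, remove the startupFlags document before each redeploy if the code should run again.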
We use a process to migrate our ETL changes, but not the associated SQL code that often needs to be applied to both the source and target DBs. Is there a way for us to specify which SQL files get applied to which DB?
Use one Flyway instance per DB, and set a different location for each DB to load the SQL files from. You can also specify a common location for both.
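With the Flyway command line, for example, that could look like this; the file names, URLs and paths are made up, while flyway.locations and -configFiles are the relevant settings:

```
# flyway-source.conf: shared scripts plus source-only scripts
flyway.url=jdbc:postgresql://source-host/source_db
flyway.locations=filesystem:sql/common,filesystem:sql/source

# flyway-target.conf: shared scripts plus target-only scripts
flyway.url=jdbc:postgresql://target-host/target_db
flyway.locations=filesystem:sql/common,filesystem:sql/target
```

Then run one migration per database:

```
flyway -configFiles=flyway-source.conf migrate
flyway -configFiles=flyway-target.conf migrate
```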
As part of my custom promotion process I want to get rid of older builds of the SNAPSHOT versions. When I trigger a build delete using the Artifactory REST API with the 'artifacts=1' flag, the artifacts themselves are deleted successfully, but the corresponding directories where the artifacts were stored are still there. Is there a convenient way to delete those empty directories too during the build delete operation?
cheers,
René
The directories aren't deleted by default. You can write a user plugin that deletes empty folders as an afterDelete trigger. Here's an example: https://github.com/JFrogDev/artifactory-user-plugins/blob/master/cleanup/deleteEmptyDirs/deleteEmptyDirs.groovy
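For reference, the build delete call that removes the artifacts (but, as you saw, leaves the empty directories behind) is Artifactory's documented Delete Builds endpoint; the host, credentials, build name and numbers below are placeholders:

```
# Delete builds 10 and 11 of "my-build", including their artifacts.
curl -u user:password -X DELETE \
  "http://localhost:8081/artifactory/api/build/my-build?buildNumbers=10,11&artifacts=1"
```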
After reading the built-in help, it seems to me that both commands can be used to make the workspace match a certain revision. But I don't understand the differences between update and checkout. Please include some trivial workflows in your answer that show when update/checkout are appropriate.
The first major difference is that if you have a remote URL set, update will first pull the latest artifacts from the remote repository.
Another difference is that if you have uncommitted changes, checkout will not run (unless you force it), whereas update will retain your changes and reapply them. With update you can therefore integrate changes from other users before committing.
So:
Update is what you need when you collaborate on a project, in order to prevent forks.
Checkout lets you deploy a particular version.
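Assuming the tool in question is Fossil (whose update/checkout behave exactly as described above, including the auto-pull and the forced checkout), two trivial workflows might look like this; "release-1.2" is a placeholder tag:

```
# Collaborating: merge the latest upstream commits into your working
# files (with autosync on, this pulls from the remote first).
fossil update

# Deploying: make the working tree match one exact version; this
# refuses to clobber uncommitted edits unless forced.
fossil checkout release-1.2
fossil checkout --force release-1.2
```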