mariabackup, can I update a full backup? - mariadb

I have taken a full backup. Next week I will take an incremental backup. Can I use the --prepare command to sync the full backup with the incremental backup? That way, next time I can use the full backup as the base directory for the incremental backup, avoiding a new incremental folder every week.
Thanks!

Yes. You can use the incremental backup to prepare the full backup. This brings the full backup in sync with the to_lsn line in the xtrabackup_checkpoints file located in your incremental backup directory; after the --prepare command, that line should have the same value as the one in your full backup directory. Once you have confirmed that, you can delete the contents of the incremental directory and reuse it.
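A minimal sketch of that workflow (the directory names /backups/full and /backups/inc1 are placeholders):

mariabackup --prepare \
    --target-dir=/backups/full \
    --incremental-dir=/backups/inc1
# After the prepare, to_lsn in the full backup's xtrabackup_checkpoints should
# match to_lsn in the incremental's:
grep to_lsn /backups/full/xtrabackup_checkpoints /backups/inc1/xtrabackup_checkpoints
# Once confirmed, /backups/inc1 can be emptied and reused for next week's
# incremental, still taken against /backups/full as the base:
# mariabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full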

Related

Alfresco content store deletion

I have the content store configured in the location below:
D:\alfresco-content-services\alf_data\contentstore\2019
I want to delete the 2019 folder shown above under the content store. I don't need 2019 anymore; basically purging.
If I delete the files in the above folder, will it clean up the metadata and indexes also?
Or will it corrupt my repository? What's the best way to achieve mass deletion, which will also delete references in the database without corrupting the repo?
Thanks & Regards
Brijesh
If you delete any folder from the content store, it will not affect the database (and hence the indexes) in any way. You will end up with nodes referencing .bin files that do not exist anymore, though.
Note that if that folder is the first year in your content store, it also contains some files used by Alfresco to determine whether the content store matches the database. In that case, deleting the folder will break Alfresco (the repository will not start if it cannot find those files).
Mass deletion in general is tricky, I'd suggest using Bulk Import Tool's delete web script that does this as fast as possible (avoids audit logs, recycle bin, etc).

Recoverable file deletion in R

According to these questions:
Automatically Delete Files/Folders
how to delete a file with R?
the two ways to delete files in R are file.remove and unlink. These are both permanent and non-recoverable.
Is there an alternative method to delete files so they end up in the trash / recycle bin?
I don't know of a solution that is fully compatible with Windows' "recycle bin", but if you're looking for something that doesn't quite delete files yet prevents them from being stored indefinitely, a possible solution is to move them to the temporary folder for the current session.
The command tempdir() will give the location of the temporary folder, and you can just move files there - to move files, use file.rename().
They will remain available for as long as the current session is running, and will be deleted automatically afterwards. This is less persistent than the classic recycle bin; if that's what you're looking for, you probably just want to move files to a different folder and delete it completely when you're done.
For a slightly more consistent syntax, you can use the fs package (https://github.com/r-lib/fs), and its fs::path_temp() and fs::file_move().
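A small sketch of that idea in R (the file name is just a placeholder); note that file.rename() can fail across filesystems, in which case copying and then removing is the usual fallback:

# Move a file into the per-session temp directory instead of deleting it;
# it stays recoverable until the R session ends, then R cleans it up.
soft_delete <- function(path) {
  dest <- file.path(tempdir(), basename(path))
  if (!file.rename(path, dest)) {   # may fail across filesystems
    file.copy(path, dest)
    file.remove(path)
  }
  invisible(dest)
}

# soft_delete("old_report.csv")                    # base R version
# fs::file_move("old_report.csv", fs::path_temp()) # same idea with the fs package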

Automatic backup/restore at start server

I would like an automatic backup of a schema to a file on every load, and an automatic restore of the last backup when icCube restarts. And, of course, an automatic cleanup of those files. This way we would have a lot less downtime on a restart.
It looks like icCube has that with backup and/or offline data, but I can't get it working the way I described above. Is what I want possible, and if so, how?
You can activate the backup in the schema file (Advanced Properties).
Now every time you load the schema, it will create a backup.
And if you set "Load On Startup" as well, icCube is going to load the last backup available.
There is no automatic cleanup; for that you can use the REST API available in the latest icCube. Otherwise, you can clean up the backup files created in the ~/icCube-data/backup folder yourself.
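If you go the manual route, a cron-able sketch along these lines would do (the folder comes from the answer above; the 30-day retention is just an assumption):

# Remove icCube backup files older than 30 days (retention period is arbitrary).
find ~/icCube-data/backup -type f -mtime +30 -delete
# e.g. run it nightly from cron: 0 3 * * * find $HOME/icCube-data/backup -type f -mtime +30 -delete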
Hope that helps.

Rsync Time Machine Style Backup issues

I bought an external USB3 drive to back up a WD MyCloud NAS (it plugs directly into a USB3 port on the NAS), and started searching for an rsync script to simulate a Time Machine style backup.
I found one that I like, but it's not working the way I expected it to.
I'm hoping you can shed some light on the matter and suggest, first, how to make it work and, second, how this should be done to get a result similar to a Time Machine style snapshot backup.
Where I found the script I started with:
https://bipedu.wordpress.com/2014/02/25/easy-backup-system-with-rsync-like-time-machine/
He breaks down the process like this:
So here I first make a "date" variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.
Then use rsync with some parameters (see man rsync for more details):
-a = archive mode (to send only changed parts)
-P = to give progress info (optional)
--delete = to delete the deleted files from the backup in case they are removed from the source
--log-file = to save the log into a file (optional)
--exclude = to exclude some folders/files from the backup. These are relative to the source path! Do not use an absolute path here!
--link-dest = link to the latest backup snapshot
/mnt/HD/HD_a2/ = source path
/mnt/USB/USB2_c2/MyCloud/Backups/back-$date = destination folder; it will contain all the content from the source.
Then by using rm I remove the old link to the old backup (the "current" link) and then I replace it with a new soft link to the newly created snapshot.
So now whenever I click on "current" I go in fact to the latest backup. And because every time I make the backup the date is different, the old snapshots will be kept. So for every day I will have a snapshot.
Here is my script version based on his outline.
#!/bin/bash
date=$(date "+%Y%m%d-%H-%M")   # e.g. 20151220-09-19, used to name the snapshot folder
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude="lost+found" \
    --exclude="Anti-Virus Essentials" --exclude=Nas_Prog --exclude=SmartWare \
    --exclude=plex_conf --exclude=Backup --exclude=TimeMachineBackup \
    --exclude=groupings.db --link-dest=/mnt/USB/USB2_c2/MyCloud/Backups/Current \
    /mnt/HD/HD_a2/ "/mnt/USB/USB2_c2/MyCloud/Backups/back-$date"
# Re-point the "Current" symlink at the snapshot that was just created.
rm -f /mnt/USB/USB2_c2/MyCloud/Backups/Current
ln -s "/mnt/USB/USB2_c2/MyCloud/Backups/back-$date" /mnt/USB/USB2_c2/MyCloud/Backups/Current
So if I am understanding his idea, the first initial backup lives here: /mnt/USB/USB2_c2/MyCloud/Backups/Current.
Then on subsequent backups, the script creates a new directory in /mnt/USB/USB2_c2/MyCloud/Backups/Current/ named 'back-2015-12-20T09-19' or whatever date the backup took place.
This is where I am getting a bit lost on what's actually happening.
It writes a time-stamped folder to the /Backups/Current/ directory, and ALSO to the /Backups/ directory. So I now have two versions of those time-stamped folders in two different directories.
I'm confused as to where the most complete set of recent backup files actually resides now.
What I THOUGHT was going to happen is that the script would run and, for any file that wasn't changed, create a link from the 'Current' folder to the time-stamped folder.
I'm sure I have something wrong here, and I'm hoping someone can point out the error and/or suggest a better method.
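For what it's worth, one way to see where the data actually lives is to compare inode numbers across snapshots: with --link-dest, an unchanged file should be stored once and hard-linked into each snapshot (somefile.txt below is just a placeholder):

# Same inode number (first column) in both snapshots means the file is
# hard-linked, not duplicated:
ls -li /mnt/USB/USB2_c2/MyCloud/Backups/back-*/somefile.txt
# Total disk usage across all snapshots counts each hard-linked file only once:
du -shc /mnt/USB/USB2_c2/MyCloud/Backups/back-*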

RMAN duplicate Target

I would like to know: does the RMAN DUPLICATE DATABASE command restore a database on another host to the most recent time of the source?
Or does it freeze the source database's datafile headers when it starts duplicating, so that I would be missing a few of the transactions/DML taking place on the source database?
If no optional parameters are used, it restores all the data covered by archived redo: not only the redo that has been archived to disk, but also redo that has been backed up by the RMAN BACKUP command itself.
Consider the three stages of redo: active redo, archived redo (on disk), and backed-up (archived) redo (on disk, tape, or whatever channel).
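For context, a sketch of the two common forms (the connection strings, instance name dupdb, and the UNTIL clause are placeholders, not taken from the thread; rman will prompt for the SYS passwords):

# Backup-based duplication: the auxiliary instance is restored from backups and
# recovered up to the most recent archived/backed-up redo RMAN can find.
rman TARGET sys@srcdb AUXILIARY sys@dupdb <<'EOF'
DUPLICATE TARGET DATABASE TO dupdb;
EOF

# Point-in-time duplication: stop recovery at an explicit time instead.
rman TARGET sys@srcdb AUXILIARY sys@dupdb <<'EOF'
DUPLICATE TARGET DATABASE TO dupdb UNTIL TIME 'SYSDATE-1';
EOF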
