Rsync Time Machine Style Backup issues - rsync

I bought an external USB3 drive to back up a WD MyCloud NAS (it plugs directly into a USB3 port on the NAS) and started searching for an rsync script to simulate a Time Machine style backup.
I found one that I like, but it's not working the way I expected it to.
I'm hoping you can shed some light on the matter and suggest what could/should be done to, first of all, make it work and, second, how this should be done to get a result similar to a Time Machine style snapshot backup.
Where I found the script I started with:
https://bipedu.wordpress.com/2014/02/25/easy-backup-system-with-rsync-like-time-machine/
He breaks down the process like this:
So here I first make a "date" variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.
Then I use rsync with some parameters (see man rsync for more details):
-a = archive mode (recursive copy that preserves permissions, times, symlinks, etc.)
-P = show progress info (optional)
--delete = delete files from the backup if they have been removed from the source
--log-file = save the log to a file (optional)
--exclude = exclude some folders/files from the backup. These are relative to the source path; do not use absolute paths here!
--link-dest = hard-link unchanged files to the latest backup snapshot
/mnt/HD/HD_a2/ = source path
/mnt/USB/USB2_c2/MyCloud/Backups/back-$date = destination folder; it will contain all the content from the source.
Then, using rm, I remove the old link to the previous backup (the "current" link) and replace it with a new soft link to the newly created snapshot.
So now, whenever I click on "current", I in fact go to the latest backup. And because the date is different every time I make a backup, the old snapshots are kept. So for every day I will have a snapshot.
Here is my script version based on his outline.
#!/bin/bash
date=`date "+%Y%m%d-%H-%M"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude="lost+found" --exclude="Anti-Virus Essentials" --exclude=Nas_Prog --exclude=SmartWare --exclude=plex_conf --exclude=Backup --exclude=TimeMachineBackup --exclude=groupings.db --link-dest=/mnt/USB/USB2_c2/MyCloud/Backups/Current /mnt/HD/HD_a2/ /mnt/USB/USB2_c2/MyCloud/Backups/back-$date
rm -f /mnt/USB/USB2_c2/MyCloud/Backups/Current
ln -s /mnt/USB/USB2_c2/MyCloud/Backups/back-$date /mnt/USB/USB2_c2/MyCloud/Backups/Current
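In case it helps illustrate what I'm trying to do, here is a commented variant of the same script (an untested sketch: same source and destination paths as above are assumed, and the --exclude options are omitted for brevity). It skips --link-dest on the very first run and makes the Current symlink relative:
#!/bin/bash
# Sketch only; paths as in the script above, excludes omitted.
backup_root=/mnt/USB/USB2_c2/MyCloud/Backups
date=$(date "+%Y%m%d-%H-%M")

# Pass --link-dest only when a previous snapshot exists (on the very first run
# there is no "Current" yet, so rsync would otherwise warn about a missing path).
linkdest=""
if [ -e "$backup_root/Current" ]; then
    linkdest="--link-dest=$backup_root/Current"
fi

rsync -aP --delete --log-file=/tmp/log_backup.log $linkdest \
      /mnt/HD/HD_a2/ "$backup_root/back-$date"

# Repoint "Current" at the newest snapshot with a relative symlink;
# -n keeps ln from descending into "Current" if it is (or has become) a real directory.
rm -f "$backup_root/Current"
ln -sfn "back-$date" "$backup_root/Current"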
So if I am understanding his idea, the initial backup lives here: /mnt/USB/USB2_c2/MyCloud/Backups/Current.
Then on subsequent backups, the script creates a new directory in /mnt/USB/USB2_c2/MyCloud/Backups/Current/ named 'back-2015-12-20T09-19' or whatever date the backup took place.
This is where I am getting a bit lost about what's actually happening.
It writes a time-stamped folder to the /Backups/Current/ directory, and ALSO to the /Backups/ directory, so I now have two versions of those time-stamped folders in two different directories.
I'm confused as to where the most complete set of recent backup files actually resides now.
What I THOUGHT would happen is that the script would run and, for any file that wasn't changed, it would create a link from the 'Current' folder to the time-stamped folder.
I'm sure I have something wrong here, and I'm hoping someone can point out the error and/or suggest a better method.
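One way to check whether two snapshots actually share storage (via hard links) or are full duplicates is to compare inode numbers and disk usage; the file path below is just a placeholder:
# identical inode numbers in two snapshots mean the file is hard-linked, not copied twice
ls -li /mnt/USB/USB2_c2/MyCloud/Backups/back-*/some/unchanged/file

# du counts a hard-linked file only once per invocation, so snapshots created
# with --link-dest should show up much smaller here than they look when browsed
du -sh /mnt/USB/USB2_c2/MyCloud/Backups/back-*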

Related

Rsync - How to display only changed files

When my colleague and I upload a PHP web project to production, we use rsync for the file transfer with these arguments:
rsync -rltz --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX
When this runs, we see a long list of files that were changed.
Running this twice in a row will show only the files that changed between the two transfers.
However, when my colleague runs the same command after I did, he sees a very long list of all files being changed (even though the contents are identical), and this happens extremely fast.
If he uploads again, then again there will be only minimal output.
So it seems to me that we get the correct output, showing only changes, but if someone else uploads from another computer, rsync regards everything as changed.
I believe this may have something to do with the file permissions or times, but would like to know how to best solve this.
The idea is that we only see the changes, regardless who does the upload and in what order.
The huge file list is quite scary to see in a big project, because then we have no idea what was actually changed.
PS: We both deploy using the same user@server as target.
The -t in your command (part of -rltz) tells rsync to copy file timestamps, so if the timestamps don't match you'll see those files get updated. If you think the timestamps on your two machines should already match, then the problem is something else.
The easiest way to ensure that the timestamps match would be to rsync them down from the server before making your edits.
Incidentally, having two people use rsync to update a production server seems error prone and fragile. You should consider putting your files in Git and pushing them to the server that way (you'd need a server-side hook to update the working copy for the web server to use).
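If the goal is to see only content changes regardless of who uploads, one option (a sketch only; the source and destination below are placeholders, since they were trimmed from the command above) is to have rsync explain each transfer, or to compare by checksum instead of by time and size:
# -i (--itemize-changes) prints a short change code per file showing why it is transferred
rsync -rltzi --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX ./project/ user@server:/var/www/project/

# --checksum ignores timestamp differences entirely, at the cost of reading every file on both sides
rsync -rlz --checksum --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX ./project/ user@server:/var/www/project/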

Fossil: "not a valid repository" - deleted repository

I'm trying Fossil for the first time, and messed it up within minutes. I created a repository, then apparently ran commands in the wrong folders, etc., and eventually deleted the test repository in order to restart. (Somewhere I had read that Fossil was "self contained", so I thought deleting a repository file would be OK. What is the correct way to delete a Fossil repository?)
Now, with almost every command I try (incl. "all rebuild"), I get the error "not a valid repository" with the deleted repository name.
What now?
According to this post:
The "not a valid repository" error only arises
when Fossil tries to measure the size of the repository file and sees that
either the file does not exist or else that the size of the file is less
than 1024 bytes. It does this by calling stat() on the file and looking at
the stat.st_size field.
it seems likely that you have a missing or truncated Fossil file. Make sure you've actually deleted the repository file, and that your filesystem has actually released the file handles. Fossil stores some repository information in ~/.fossil, so you may need to remove that too.
rm ~/.fossil
In egregious cases, you may want to reboot after deleting this file, just to be sure you're working with a clean slate.
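Based on the size check quoted above, a quick way to see what state a given repository file is in (the file name here is just a placeholder):
# does the repository file exist, and is it at least 1024 bytes?
ls -l myrepo.fossil
stat -c %s myrepo.fossil    # GNU stat; on BSD/macOS use: stat -f %z myrepo.fossil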
If you're still having problems, try creating a new repository file in a different directory. For example:
cd /tmp
fossil init foo.fsl
fossil open foo.fsl
fossil close
If all that goes smoothly, you'll have to hunt down whatever remnants of the repository are lurking. As long as the file handles are closed, there's no reason you shouldn't be able to delete foo.fsl (or whatever) and call it good.
I have just experienced the exact same problem on Windows. I too seem to have found a solution. Here is what I did. I cannot guarantee that it is a universal solution or even a good one. In:
C:\Users\mywindowsusername\AppData\Local
There was a file named _fossil and a directory/folder named VirtualStore. I deleted both. This seems to have removed all traces of the repository. Note that the repository was still in the "open" state, as with your case.
Edit: After experimenting further, it would appear that VirtualStore is a temporary directory that will disappear after committing (a .fossil file will then appear inside the targeted directory).
My mistake was to create a repository at the root and clone: fossil proceeded to clone the entire C drive. Probably a common newbie mistake.

cleartool update error in Solaris Unix

I am working on a view created from the main code repository on a Solaris server. I have modified a part of the code on my view and now I wish to update the code in my view to have the latest code from the repository. However when I do
cleartool update .
from the current directory to update all the files in the current directory, some (not all) of the files do not get updated, and the message I get is
Keeping hijacked object <filePath> - base no longer known.
I am very sure that I have not modified the directory structure in my view, nor has it been modified in the server repository. One hack that I discovered is to move the files that could not be updated to a different filename (essentially meaning that files with the original filename no longer exist in my view) and then run the update command. But I do not want to work this out one by one for all the files. This also means I will have to perform the merge myself.
Has someone encountered this problem before? Any advice will be highly appreciated.
Thanks in advance.
You should try a "cleartool update -overwrite" (see cleartool update), as it should force the update of all files, hijacked or not.
But this message, according to the IBM technote swg1PK94061, is the result of:
When you rename a directory in a snapshot view, updating the view will cause files in the renamed directory to become hijacked.
Problem conclusion
Closing this APAR as No Plans To Fix (NPTF) because:
(a) there is a simple workaround (deleting the local copy of the renamed directories), which mitigates the snapshot view update problem, and
(b) this issue has a low relative priority compared with higher-impact defects.
So simply delete (or move) the directory you have renamed, relaunch your update, and said directory (and its updated content) will be restored.
Thanks for your comment, VonC. I did check out the link you mentioned, but I did not find it very useful, as I had not renamed any directory. After spending the whole day yesterday, I figured out that I had previously modified some of the files without checking them out first. Because they were not checked out they were read-only, so I had modified them forcefully, which caused those files to become hijacked. Hence, when I tried to update my view to pick up all the modifications in the repository, it was unable to merge my modified files with those on the server: since the files had been modified without being checked out, cleartool update was led to believe they were not modified (as they were not checked out) when in fact they were. That was the fuss!! :)
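For anyone hitting the same situation, a rough way to find the hijacked files and let the update replace them (run from the view root; -overwrite discards local edits, so copy anything you want to keep first; the file name below is a placeholder):
# list hijacked files anywhere under the current directory
cleartool ls -recurse | grep "hijacked"

# save your local edits, then let ClearCase restore the repository version
cp modified_file.c modified_file.c.mine
cleartool update -overwrite modified_file.c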

How to get jenkins to copy artifacts to a dynamic directory?

I'm trying to get Jenkins to copy the artifacts of a build to an archive directory on another server using the scp plugin.
Ideally, I'd like to be able to have the destination be dynamic based on the build version, so the result would look something like /builds/<build version>/
For a build version like 1.2.3.4 it would look like:
/builds/1.2.3.4/
From reading the scp plugin page, it doesn't look like this is possible but I figured someone here may have figured it out.
Is there a way to do this?
Is it better to just put the artifacts with the version number embedded in the file name in one directory?
Like you said, I don't think the scp plugin can do it directly. However, there may be a workaround.
In your build, you have access to the build number using $BUILD_NUMBER (or %BUILD_NUMBER%, as the case may be -> Linux vs Windows).
In any case, as part of your script, you could create a directory with $BUILD_NUMBER as the name, so:
mkdir -p $BUILD_NUMBER
-or-
md %BUILD_NUMBER%
So, for example, a new directory would be /path/to/workspace/1.2.3.4
Once your build is complete, at the end of your script, create the above directory, move your artifact into it, and tar/zip the directory up.
Use this tar/zip file as your job's artifact.
Use the scp plugin to transfer this artifact to your destination machine, and untar/unzip it there (let's say at /path/to/artifact/directory)
What you will have then is /path/to/artifact/directory/1.2.3.4.
For the next build, let's say 1.2.3.5, you will create a new directory (named 1.2.3.5), move your artifact into it at the end of the build, zip it up and transfer it.
When you unzip it at your destination, you'll have a new directory /path/to/artifact/directory/1.2.3.5 with the new build's artifact in it.
I know it sounds confusing, but it is actually quite easy to implement.
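For concreteness, a rough sketch of those end-of-build steps on a Linux node (the artifact name and destination path are placeholders):
# at the end of the build script, bundle the artifact under a per-build directory
mkdir -p "$BUILD_NUMBER"
mv myapp.war "$BUILD_NUMBER/"                       # hypothetical artifact name
tar czf "artifact-$BUILD_NUMBER.tar.gz" "$BUILD_NUMBER"

# after the scp plugin has copied the tarball to the destination machine, unpack it there
tar xzf "artifact-$BUILD_NUMBER.tar.gz" -C /path/to/artifact/directory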

Unix invoke script when file is moved

I have tons of files dumped into a few different folders. I've tried organizing them several times; unfortunately, there is no organization structure that consistently makes sense for all of them.
I finally decided to write myself an application that I can add tags to files with; then the organization can be customized to the actual organizational structure.
I want to prevent orphaned data. If I move/rename a file, my tag application should be told about it so it can update the name in the database. I don't want it tagging files that no longer exist, or having to re-add tags for files that used to exist.
Is there a way I can write a callback that will hook into the mv command so that if I rename or move my files, they will invoke the script, which will notify my app, which can update its database?
My app is written in Ruby, but I am willing to play with C if necessary.
If you use Linux, you can use inotify (see the manpage) to monitor directories for file events. It seems there is a Ruby interface for inotify.
From the Wikipedia article:
Some of the events that can be monitored for are:
IN_ACCESS - read of the file
IN_MODIFY - last modification
IN_ATTRIB - attributes of file change
IN_OPEN and IN_CLOSE - open or close of file
IN_MOVED_FROM and IN_MOVED_TO - when the file is moved or renamed
IN_DELETE - a file/directory deleted
IN_CREATE - a file in a watched directory is created
IN_DELETE_SELF - file monitored is deleted
This does not work on Windows (and, I think, not on other Unices besides Linux either), as inotify does not exist there.
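If you would rather drive this from the shell than from Ruby, the inotify-tools package (assuming it is available on your system) provides inotifywait, which can watch for move events; a rough sketch (the watched path and notify_tag_app are placeholders):
# print one line per move/rename event under /path/to/files (recursive, runs until killed)
inotifywait -m -r -e moved_from -e moved_to --format '%e %w%f' /path/to/files |
while read event path; do
    # notify_tag_app stands in for whatever interface your tagging app exposes
    notify_tag_app "$event" "$path"
done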
Can you control the PATH of your users? Place a script or executable in a directory that appears in their PATH before the directory of the standard mv command. Have this script do what you require and then call the standard mv to perform the move.
Alternatively, put an alias in each user's profile and have the alias call your replacement mv command.
Or rename the existing mv command, place a replacement in the same directory, call it mv, and have it call your newly renamed mv command after doing what you want.
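A minimal sketch of such a wrapper script (notify_tag_app is a placeholder for however your application is invoked; install the wrapper in a directory that appears in PATH before /bin):
#!/bin/sh
# wrapper named "mv", found before the real mv in PATH
# tell the tagging application about the move, then hand off to the real mv
notify_tag_app "$@"      # placeholder for your app's command-line interface
exec /bin/mv "$@"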
