rsync with --fake-super not preserving owner after restore - Monterey/Synology DS920+/rsync 3 - rsync

Working through debugging a backup/restore script on:
macStudio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew's rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID#$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error. Files are restored as expected, except that the files have their owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man page (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should set the options to?

It looks like I'm not getting any responses, so I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), rsync saves and restores files with extended attributes correctly (I verified they got set and that they matched on restore), so I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
(1) All regular files have the wrong owner; the groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused because I need to use sudo rsync on the restore. I'm guessing that the backed-up files have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to 'root'. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to add something to the /etc/sudoers file. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
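For anyone following along, here is a rough sketch of what I believe the fix looks like (unverified; it reuses the $src, $dest, and $restore variables defined above). With --fake-super the real owner/group is stored in xattrs on the NAS, so -M passes the option to the remote side in both directions, and on the restore the local rsync has to run as root to actually set the owner back to root:wheel:
# backup (push): the NAS is the receiver; -M--fake-super makes it store owner/group in xattrs
rsync -ahX --delete -M--fake-super "$src" "$dest"
# restore (pull): the NAS is the sender; -M--fake-super makes it report the stored owner/group,
# and sudo lets the local receiver chown files back to root:wheel
sudo rsync -ahX --delete -M--fake-super "$dest" "$restore"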
Overall, my backup script is working, but I'm now starting to question whether I know enough about what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to depend on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a secondary backup of just my user directories, which would require reinstalling any 3rd-party software in the event that I can't recover with Time Machine.

Related

Data loss under Linux

I work under Debian Buster Linux.
This morning I was working as usual and my PC crashed. I forced it to shut down, and when I restarted it dropped me to an initramfs prompt (if I'm not mistaken), inviting me to run fsck.
This is not the first time this has happened to me. I usually run fsck -y /dev/sda1 and then fsck -y /dev/sda3 for my root and home partitions.
But this morning, after the crash, when I did that, it quickly scrolled through a number of messages, and that worried me. At the end I restarted my PC and, voilà, I can no longer find my work folder. I have a folder that contained two other folders; now only one of them is visible. All of my shortcuts to the missing folder no longer work.
When I run df -h, the reported usage looks as if the data were still there, but I cannot see it. It is not in /lost+found.
I did a global search in my home, and found nothing.
I can no longer work; all my work was there. I have a one-month-old backup, but still.
If you really have a solution, please share it; I'm desperate.
My disk has 4 partitions: 3 for Linux and one NTFS.
Thank you
Too happy!
I found my data.
What gave me 0.5% of hope (I must admit I was at the edge of the window with the PC, looking for my data one last time) was the size of my partition. When I right-click on home and look at the size, I see 31 GB, yet with a
df -h
the result is:
/dev/sda3 192G 95G 87G 53% /home
That is 95 GB used, compared to the 31 GB above, so where are the 60 GB?
Before the problem, my usage was around 95 GB.
It is true that several messages containing the word inode or node (I don't remember which) along with numbers flashed by quickly during fsck -y.
Someone suggested I take a look at /home/lost+found, and when I did, I saw nothing. But when I logged in as root in a terminal, ran "cd /home" and "ls lost+found", I saw numbers like #13032, #13036, #1181667, and a folder with the number #4703. So I ran "chmod 777 -R lost+found" in order to be able to access it from my own (regular user) account. Once the command had executed, after a few minutes I opened /home/lost+found with the "nemo" file explorer and TADAM, all my data was there.
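(For reference, a condensed sketch of those steps, run as root; the paths are the ones from my setup:)
cd /home
ls lost+found              # recovered entries show up as #<inode number>
chmod 777 -R lost+found    # make them readable from my regular user account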
I've since made SEVERAL BACKUPS and vowed not to blindly trust fsck -y anymore; even though it's a great tool, I'll use it with caution.

Repository packages-microsoft-com-prod is listed more than once in the configuration

Whenever I run any yum command I get the error below:
Repository packages-microsoft-com-prod is listed more than once in the configuration
Any ideas on how to resolve the issue?
Repository packages-microsoft-com-prod is listed more than once in the configuration
HDP-2.6 | 2.9 kB 00:00:00
HDP-UTILS-1.1.0.21 | 2.9 kB 00:00:00
Updates-ambari-2.5.2.0 | 2.9 kB 00:00:00
https://packages.microsoft.com/rhel/7/mssql-server/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's certificate issuer has been marked as not trusted by the user."
Trying other mirror.
In the folder /etc/yum.repos.d you have two or more .repo files with the same repository name [packages-microsoft-com-prod]. I had the same problem and deleted one of the files, the one that was irrelevant to my OS, and only later understood that that was not a good idea.
Find the files that relate to one another (in this case, the files that relate to Microsoft) and open them in your favorite editor. The files are probably named differently, but the contents will be the same.
[packages-microsoft-com-prod]
name=packages-microsoft-com-prod
baseurl=https://packages.microsoft.com/rhel/7/prod/
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
If this is the case, then it is safe to delete one of them. But if they are different, I wouldn't touch them. You could probably rename one of them, but I'm not sure if that would break something important.
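A quick way to see which files declare the duplicate repository id (a sketch; both file names below are made up for illustration):
# list every .repo file that contains the repository id from the error message
grep -l '^\[packages-microsoft-com-prod\]' /etc/yum.repos.d/*.repo
# compare the candidates before deciding which one to delete
diff /etc/yum.repos.d/packages-microsoft-com-prod.repo /etc/yum.repos.d/mssql-server.repo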
First of all, this specific repository is not the problem; with any third-party repo the general message looks like this:
Repository XXX is listed more than once in the configuration
If that happens, the problem is solved simply by deleting the files related to that repository in /etc/yum.repos.d/.
You can delete such files by typing:
sudo rm -rf XXX.repo
in the terminal, in that location. You also need to run:
yum clean all
or
dnf clean all
depending on which command is triggering the problem.
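Putting it together, a sketch of the fix (the file name is a placeholder for whichever duplicate you decide to drop):
sudo rm /etc/yum.repos.d/<duplicate-file>.repo
sudo yum clean all     # or: sudo dnf clean all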
Also, the repository or the app itself is usually not the problem; according to Red Hat's official site, this is due to a bug.
Finally, if you want to uninstall the app, you need to run the same command to delete files in these locations:
/var/cache/dnf
/var/cache/log
Example:
sudo rm -rf XXX
rm is the command to remove files from the terminal, while rmdir deletes (empty) directories.
By using -rf you recurse into directories and force the deletion, even when the chosen files are write-protected or the directories are full of protected and/or unprotected files, so be careful with this command.
NOTE: Just for THIRD-PARTY repos.

Preserve files/directories for rpm upgrade in .spec file(rpmbuild)

I wrote a .spec file on RHEL and I am building the RPM using rpmbuild. I need ideas on how to handle the situation below.
My RPM creates an empty log directory inside the installation folder when it installs for the first time, like below:
/opt/MyInstallation-1.0.0-1/some executables
/opt/MyInstallation-1.0.0-1/lib/ carries shared objects (.so files)
/opt/MyInstallation-1.0.0-1/config/ carries XML and custom configuration files (.xml, etc.)
/opt/MyInstallation-1.0.0-1/log ---> this is where the application writes its logs
When my RPM upgrades MyInstallation-1.0.0-1 to, for example, MyInstallation-1.0.0-2, everything works as I wanted.
But my question is: how do I preserve the log files written in MyInstallation-1.0.0-1, or, more precisely, copy the log directory over to MyInstallation-1.0.0-2?
I believe that if you tag the directory as %config, RPM expects the user to have files in there, so it will leave it alone.
I found a solution (or workaround) to this by trial and error :)
I am using rpmbuild version 4.8.0 on RHEL 6.3 x86_64. I believe it will work on other distros as well.
If you install under one name only, like "MyInstallation" rather than "MyInstallation-<version>-<build number>", and create the log directory as a standard directory (no additional flags on it) [see the original question for the scenario], then on upgrade RPM normally won't touch the log directory and will leave its contents as they are. All you have to do is ensure that you keep the line below in the %install section.
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log
Here, _prefix and name are macros; they have nothing to do with the underlying concept.
Regarding config files, the following is a very precise table that will help you guard your config files. Again, this rule can't be applied to the logs our applications create:
http://www-uxsup.csx.cam.ac.uk/~jw35/docs/rpm_config.html
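To tie this together, a rough sketch of the relevant .spec fragments (the %files entries are my own illustration under the assumptions above, not taken from the original package): %dir packages the log directory without claiming its contents, and %config(noreplace) keeps locally edited config files across upgrades.
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log

%files
# own the log directory but not whatever the application writes into it
%dir %{_prefix}/%{name}/log
# keep locally modified config files on upgrade
%config(noreplace) %{_prefix}/%{name}/config/*.xml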
Thanks & Regards.

Rsync freezing mid transfer with no warning

Intermittently, I run into an issue where my rsync script simply freezes mid-transfer. The freeze may occur while downloading a file, or while listing up-to-date files.
I'm running this on my Mac; here's the command:
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" --link-dest="$BACKUP/current/" $CONNECT:$BASE $BACKUP/$DATE/
For example, the console will output the download progress of a file, and stop at an arbitrary percentage and speed. The log doesn't even list the file (probably because it's incomplete).
I've made numerous attempts and it freezes on different files or steps with no rhyme or reason. Terminal shows the loading icon while it's working, the output freezes, and after a few seconds the loading icon vanishes.
Any ideas what could be causing this? I'm using rsync 3.1.0 on Mavericks. Could it be a connectivity issue or a system max execution time issue?
I have had rsync freezes in the past, and I recall reading somewhere that it may have to do with rsync having to look for files to link, which gets increasingly expensive as you accumulate backup upon backup. I suggest you skip --link-dest in the next backup if your disk space allows it (to break the chain, so to speak).
As mentioned in https://serverfault.com/a/207693 you could use the hardlink command afterwards, I haven't tried it yet.
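Roughly, that would look like this (untested; it reuses the variables from the command above):
# same command as before, minus --link-dest, to break the hardlink chain
rsync -vvhrtplHP -e "ssh" --rsync-path="sudo rsync" --filter=". $FILTER" --delete --delete-excluded --log-file="$BACKUP/log" $CONNECT:$BASE $BACKUP/$DATE/
# afterwards, optionally re-create hardlinks between identical files in the two snapshots
# (the hardlink tool's name and options vary a bit between distros)
hardlink "$BACKUP/current" "$BACKUP/$DATE"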
I just had a similar problem while rsyncing from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not respond at all after that.
It turned out the problem was a combination of hardlinks on the hard disk and the FAT32 filesystem on the USB drive, which does not support hardlinks.
Formatting the USB drive with ext4 solved the problem for me.

Minimum permissions needed to allow PHP PDO / Apache access to SQLite database in /var/www

SOLVED: See my answer below
I'm experiencing the same issue that Austin Hyde experienced in this question. I have an SQLite database that I can read, but not write.
Specifically, I'm getting General error: 8 attempt to write a readonly database in /var/www/html/green/database.php on line 34
My issue diverges from his as follows:
- As recommended in the answers to his question, I've made the database world-writable, as well as the folder in which the database resides, with no luck. I've also set the owner of the database to "apache" as well as "nobody", without success.
- I've set the entire path to 777, beginning at /var (which I hate to do); no joy.
- I've messed about with SELinux (I'm running Fedora 12) to let httpd do whatever it wants; nothing.
I feel that I'm almost certainly missing something simple here, but I'm out of ideas.
What permissions need to be on an SQLite file in order to allow PHP / Apache to read and write to it via PDO?
Edit: Another related question, adding weight to the hypothesis that I've got a write permissions conflict somewhere.
For those who cannot afford to disable SELinux entirely, here is the way to go.
To make a directory (say rw_data) and all its contents writable by any process running in the httpd_t domain type (i.e. web-server processes), use the following command as root:
chcon -R -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data/"
You can check the SELinux context labels with the following command:
ls -Z /var/www/html/mysite | grep httpd_sys_content_rw_t
This works on Fedora 16 and should work on other SELinux-enabled distros too.
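Note that chcon sets the label immediately but a filesystem relabel can undo it. A sketch of making the rule persistent, assuming the same path and that the semanage tool (policycoreutils) is installed:
# register a persistent file-context rule, then re-apply labels from the policy
semanage fcontext -a -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data(/.*)?"
restorecon -R -v /var/www/html/mysite/rw_data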
