When I run cp -R on my system
cp -R \[Bark\]\ Java/ /Volumes/Seagate\ Backup\ Plus\ Drive/
it throws this message:
cp: /Volumes/Seagate Backup Plus Drive/Advanced: Read-only file system
cp: [Bark] Java//Advanced:unable to copy extended attributes to /Volumes/Seagate Backup Plus Drive/Advanced: Read-only file system.
I guess I need to make the backup disk writable.
How do I change the mode of the backup disk?
Mounting on many UNIX systems is controlled by the /etc/fstab file, which usually specifies mount options for each device.
You need to change the mount options for the device you're interested in.
If you examine the /etc/fstab file on your system, you should see something along the lines of ro in the options column for that device.
If you change that to rw and then remount the file system, it should be writable.
You may want to change it back when you're done, to protect the information on it.
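For example, a typical fstab entry might look like this (the device name and mount point are illustrative, not taken from the question):
/dev/sdb1  /mnt/backup  ntfs  ro,user,noauto  0  0
Change ro to rw in the options field and remount, e.g. on Linux:
sudo mount -o remount,rw /mnt/backup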
For Mac OS X, fstab is still used, but file systems may also be under the control of automount - look up the man pages for automount and autofs.conf for information on how to configure those.
Related
Is it possible to have rsync copy "unsafe" symlinks (that is, those that refer to files/dirs outside of the copied tree, as described in the docs) but not update the times on them?
I'm using rsync -a --delete --omit-dir-times to copy a bunch of files from /home/somebody/foo/bar to a destination machine, but I'm running into the following error: rsync: failed to set times on "/home/somebody/foo/bar/symlink": Operation not permitted (1). Here /home/somebody/foo/bar/symlink refers to something in /usr/lib/ that is owned by root at the destination, so the rsync user lacks permission to update it.
Essentially rsync tries to update the time on the symlink like all other files it copies, but gets blocked by permissions because it's not root at the destination.
What I'd like to do is copy the link, but not touch the symlink target at all during the copy. I just want the link. I could change permissions on the target file, but I'd like to avoid that.
Is this achievable? Is this a terrible idea and I'd be abusing rsync? Suggestions for alternative approaches in the latter case?
There is another rsync option, --omit-link-times, which will probably do what you are looking for. See the man page at:
http://manpages.ubuntu.com/manpages/bionic/man1/rsync.1.html
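Combined with the flags from the question, the full command would look something like this (the destination here is a placeholder, since the question doesn't show it):
rsync -a --delete --omit-dir-times --omit-link-times /home/somebody/foo/bar user@destination:/some/target/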
I need to load Hive partitions from staging folders. Currently we copy and delete. Can I use mv instead?
I am told that I cannot use mv if the folders are EAR (Encryption At Rest). How can I tell whether a folder is EAR'ed?
I'm assuming the feature you are using for encryption at rest is HDFS transparent encryption (see the Cloudera 5.14 docs).
There is a command to list all the zones configured for encryption, listZones, but that command requires admin privileges. However, if you just need to check one file at a time, you should be able to run getFileEncryptionInfo without those privileges.
For example
hdfs crypto -getFileEncryptionInfo -path /path/to/my/file
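On a file inside an encryption zone this should print the encryption details; on an unencrypted file it should report that no encryption info was found. For comparison, the admin-only command that lists all configured encryption zones is:
hdfs crypto -listZones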
As for whether you can move files, it looks like the answer to that is no. From the "Rename and Trash considerations" section of the transparent encryption documentation:
HDFS restricts file and directory renames across encryption zone boundaries. This includes renaming an encrypted file / directory into an unencrypted directory (e.g., hdfs dfs mv /zone/encryptedFile /home/bob), renaming an unencrypted file or directory into an encryption zone (e.g., hdfs dfs mv /home/bob/unEncryptedFile /zone), and renaming between two different encryption zones (e.g., hdfs dfs mv /home/alice/zone1/foo /home/alice/zone2).
and
A rename is only allowed if the source and destination paths are in the same encryption zone, or both paths are unencrypted (not in any encryption zone).
So it looks like using cp followed by rm is your best bet.
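For example, loading a partition by copy-and-delete might look like this (the paths are illustrative, not from the question):
hdfs dfs -cp /staging/mytable/part-00000 /warehouse/mytable/dt=2020-01-01/
hdfs dfs -rm /staging/mytable/part-00000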
The following command works great for me for a single file:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
What I want to do is make this recursive (i.e. cover all subdirectories and files of a given path on the server), merge folders, overwrite files that already exist locally, and finally download only those files on the server that are smaller than a certain size (e.g. 10 MB).
How could I do that?
Use rsync.
Your command is likely to look like this:
rsync -az --max-size=10m your_username@remotehost.edu:/path/on/server/ /some/local/directory
-a (archive mode - the sync is recursive and transfers ownership, attributes, and symlinks, among other things)
-z (compresses the transfer)
--max-size (only copies files up to a certain size)
There are many more flags that may be suitable. Check out the man page for more details - http://linux.die.net/man/1/rsync
First option: use rsync.
Second option - it's not going to be a one-liner, but it can be done in three or four commands:
Create a tar archive on the remote system using ssh.
Copy the tar from the remote system with scp.
Untar the archive locally.
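A minimal sketch of those three steps (user, host, and paths are placeholders):
ssh your_username@remotehost.edu 'tar czf /tmp/transfer.tar.gz -C /path/on/server .'
scp your_username@remotehost.edu:/tmp/transfer.tar.gz /some/local/directory/
tar xzf /some/local/directory/transfer.tar.gz -C /some/local/directory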
If the creation of the archive gets a bit complicated and involves find and/or tar with several options, it is quite practical to create a script locally that does the job, upload it to the server with scp, and only then execute it remotely with ssh.
I want to use rsync to synchronize two directories in both directions.
I mean synchronization in the classical sense (not as it is meant in the rsync manuals): I want to update the directories in both directions, depending on which of them is newer.
Can this be done with rsync (preferably in a Linux way)?
If not, what other solutions exist?
Just run it twice, in "newer" mode (the -u or --update flag), plus -t (to copy file modification times), -r (to recurse into directories), and -v (for verbose output, to see what it is doing):
rsync -rtuv /path/to/dir_a/* /path/to/dir_b
rsync -rtuv /path/to/dir_b/* /path/to/dir_a
This won't handle deletes, but I'm not sure there is a good solution to that problem with only periodic sync'ing.
Do you know Unison File Synchronizer?
Unison is a file-synchronization tool for Unix and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other. ...
Note also that it is resilient to failure:
Unison is resilient to failure. It is careful to leave the replicas and its own private structures in a sensible state at all times, even in case of abnormal termination or communication failures.
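A typical invocation, assuming Unison is installed on both machines (paths, user, and host are placeholders):
unison /path/to/dir_a ssh://user@otherhost//path/to/dir_b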
You need to run rsync twice, and I recommend running it with -au:
rsync -au /local/source/* /remote/destination
rsync -au /remote/destination/* /local/source
-a (a for archive) is a shortcut for -rlptgoD:
-r Recurse into sub directories
-l Also sync symbolic links
-p Also sync file permissions
-t Also sync file modification times
-g Also sync file groups
-o Also sync file owner
-D Also sync device files and other special files
Basically, whenever you want to create an identical one-to-one copy using rsync, you should use -a, as that's what most users expect to happen when they talk about "syncing". Other answers here seem to overlook that sometimes the content of a file stays unchanged while its owner or its access permissions have changed; without these options rsync would not sync those changes, which could be fatal.
But you also need -u, as it tells rsync to leave a file/folder completely alone if it already exists at the destination with a newer modification date. Without -u, rsync would sync regardless of whether a file/folder is newer or not.
Please note that this solution cannot handle deleted files. Handling deletes is not easily possible; consider the following situation: a file is absent at the source. How shall rsync know whether it once existed and has been deleted (in which case it must be deleted at the destination as well) or whether it never existed at the source (in which case it must be copied from the destination)? These two situations look identical to rsync, so it cannot know how to react correctly. It won't help to sync the other way round, as that leads to the same situation: a file exists at the source but not at the destination. Why? Has it never existed at the destination, or has it been deleted? Both cases look identical to rsync.
Sync tools that can reliably sync deleted files usually maintain a sync log of all past sync operations. If that log reveals that a file once existed and was synced but is now missing, it is clear that it has been deleted. If, according to the log, there never was such a file, it must be synced over. By storing all log entries with timestamps, such a tool can even cope with a deleted file coming back and being deleted again multiple times; it always knows what to do and the result is always correct. rsync has no such log; it only relies on the current file state of the two sides of the operation.
You can, however, build yourself a sync command using rsync and a bit of POSIX shell scripting that comes very close to a sync tool as described above. As I needed such a tool myself, here is an answer on Stack Overflow that guides you through the creation of such a script.
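As a starting point, here is a minimal "newest wins" wrapper as described above (no delete handling; the directory paths are placeholders):
#!/bin/sh
# Two-way sync: each direction only overwrites older files (-u),
# preserving attributes (-a). Run periodically, e.g. from cron.
DIR_A=/path/to/dir_a
DIR_B=/path/to/dir_b
rsync -au "$DIR_A"/ "$DIR_B"/
rsync -au "$DIR_B"/ "$DIR_A"/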
Thanks, jsight!
rsync -urv --progress dir_a/ dir_b/ && rsync -urv --progress dir_b/ dir_a/
This makes the second sync happen immediately after the first sync is over. If the directory structure is huge, this saves time, as one does not need to sit at the PC waiting to start the second run. If the structure is huge, remove the verbose and progress flags:
rsync -ur dir_a/ dir_b/ && rsync -ur dir_b/ dir_a/
Use rsync <OPTIONS> [hostname:]source-dir [hostname:]dest-dir
for example:
rsync -pogEtvr --progress --bwlimit=2000 xxx-files different-stuff
This will sync xxx-files to different-stuff/xxx-files. If different-stuff/xxx-files does not exist, it will create it - i.e. copy it.
-pogEt - a bunch of options to preserve file metadata (permissions, owner, group, executability, modification times), plus v - verbose and r - recursive
--progress - shows the progress of the sync in real time - super useful when copying big files
--bwlimit=2000 - caps the maximum speed of copying/syncing (bw = bandwidth) at about 2000 KiB per second
P.S. rsync is critically important when you work over a network; on a local machine you can just use commands like cp.
Good Luck!
What you need is Rclone. Rclone ("rsync for cloud storage") is a command-line Linux program to sync files and directories to and from different cloud storage providers (Box, Dropbox, FTP, etc.) and local filesystems. Rclone supports mirror syncing only.
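A basic invocation looks like this (the remote name and paths are placeholders that you set up beforehand with rclone config):
rclone sync /path/to/local/dir remote:path/to/dir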
Another, more graphical solution that includes real-time syncing would be FreeFileSync, which includes the companion program RealTimeSync. FreeFileSync supports two-way bidirectional syncing, including handling of deletes.
I had the same question and ended up using git. It might not fit your situation, but if anyone finds this topic with the same question, consider a version control system.
I'm using rsync with inotifywait.
When you change any file, rsync will be executed.
inotifywait -m --exclude "$_LOG_FILE" -r -e create,delete,delete_self,modify,moved_to --format "%w%f" "$folder"
You need to run inotifywait on both hosts. For details, check the inotifywait examples.
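A rough sketch of the watch-and-sync loop on one host (the destination user, host, and path are placeholders; the other host runs the mirror image of this):
inotifywait -m -r -e create,delete,delete_self,modify,moved_to --format "%w%f" "$folder" |
while read -r changed; do
    rsync -au "$folder"/ user@otherhost:/path/to/folder/
done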
I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will keep rsync from trying to set modification times on directories.
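For example (source and destination here are placeholders):
rsync -a -O /some/source/ /foo/bar/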
The issue is probably due to /foo/bar not being owned by the writing process on a remote Darwin (OS X) system.
A solution is to set an adequate owner on the remote side.
Since this answer has been upvoted, and has therefore hopefully been useful to someone, I'm extending it to make it clearer.
The reason this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
To do this, Darwin's utime() system function requires that the writing process's effective UID is either the same as the file's UID or the superuser's; see the Open Group's utime() page.
See this discussion on the rsync mailing list for reference.
As @racl101 commented on an answer, this problem might be related to the folder's owner. The rsync command should be run by the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me, the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the receiving mount point was mounted incorrectly: it was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options back to defaults, remounted the file system, and ran rsync again. All fine then.
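On Linux, after fixing fstab, remounting read-write can be done like this (the mount point is a placeholder):
sudo mount -o remount,rw /mnt/receiver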
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps it was SELinux-related?
I came across this problem as well. In my case it was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included in the rsync, I just care about what's in it. The error came from my command, where I needed to specify an additional / at the end: if you do not add that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up when the rsync process cannot set the time on recently modified files in the source or destination.
I ran into this error while trying to fix timestamps on a new macOS Monterey install, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of keeping each original file's timestamp.
anddam's answer did not help me, as the remote user used in the rsync command did match the owner of the directories and files.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av ". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.