I have been reading the rsync documentation for a few hours, but I can't figure out how to tell rsync to only rename destination folders (and not re-upload the folder and its contents) when they are renamed at the source.
I'm connecting to the destination with SSH; the local folder is the source and the remote server is the destination. If I rename a folder containing files, rsync re-uploads all the content of the source folder. I'm not using rsync's server (daemon) mode -- maybe it would work if I were to do that?
I have encountered the same behavior with lftp, and that tool doesn't seem to have such an option either. Even when matching is based on file dates, the files inside the renamed folder are removed and re-uploaded.
Thanks in advance if someone knows how to manage this :)
I've been looking for something similar.
So far, the best solution I have found is at:
http://serenadetoacuckooo.blogspot.com/2009/07/rsync-and-directory-renaming.html
It basically mentions including a meta-file in each folder that indicates the folder's name.
Essentially, you would compare that file's contents with the directory's name, and rsync only if they match (otherwise, issue a remote rename command first).
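A rough shell sketch of that idea, run from the source's parent directory; user@host, /remote/base, and the .dirname meta-file name are all assumptions, not anything rsync provides:

for d in */ ; do
  name=${d%/}
  recorded=$(cat "$d/.dirname" 2>/dev/null)
  # if the recorded name differs, the directory was renamed locally:
  # rename it remotely instead of letting rsync re-upload everything
  if [ -n "$recorded" ] && [ "$recorded" != "$name" ]; then
    ssh user@host "mv '/remote/base/$recorded' '/remote/base/$name'"
  fi
  printf '%s\n' "$name" > "$d/.dirname"
done
rsync -az ./ user@host:/remote/base/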
It depends on the scope of what you're using rsync for, but I hope that this information can help you.
How would rsync or any other program know what constitutes a rename? What if two directories are very similar candidates and rsync can only guess that either one might be a rename of what went before? It can't be done reliably. I think you're stuck with uploading everything again.
You know about the --delete option, right?
--delete delete files that don't exist on the sending side
Note also the --force option:
--force force deletion of directories even if not empty
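For instance, a mirroring run that combines both options might look like this (the paths and host here are placeholders):

rsync -az --delete --force /local/project/ user@host:/remote/project/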
When my colleague and I upload a PHP web project to production, we use rsync for the file transfer with these arguments:
rsync -rltz --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX
When this runs, we see a long list of files that were changed.
Running it twice in a row will only show the files that changed between the two transfers.
However, when my colleague runs the same command after I did, he sees a very long list of all files being "changed" (even though the contents are identical), and the transfer is still extremely fast.
If he uploads again, there will again be only minimal output.
So it seems to me that each of us gets the correct output, showing only the changes, but when someone else uploads from another computer, rsync regards everything as changed.
I believe this may have something to do with the file permissions or times, but would like to know how to best solve this.
The idea is that we only see the changes, regardless who does the upload and in what order.
In a huge project that file list is quite scary to see, and we end up with no idea what actually changed.
PS: We both deploy using the same user@server as the target.
The t in your command says to copy the timestamps of the files, so if they don't match you'll see them get updated. If you think the timestamps on your two machines should match then the problem is something else.
The easiest way to ensure that the timestamps match would be to rsync them down from the server before making your edits.
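For example, something along these lines before you start editing (the host and paths are placeholders), using the same flags as your upload so the timestamps come along:

rsync -rltz user@server:/var/www/project/ ./project/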
Incidentally, having two people use rsync to update a production server seems error-prone and fragile. You should consider putting your files in Git and pushing them to the server that way (you'd need a server-side hook to update the working copy for the web server to use).
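A minimal sketch of such a hook, assuming a bare repository on the server and /var/www/project as the web root (both are placeholders); it goes in hooks/post-receive and must be executable:

#!/bin/sh
# deploy the pushed master branch into the web root
GIT_WORK_TREE=/var/www/project git checkout -f master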
I'm a freshman, and I set up a server with my roommates in order to practice maintaining a server.
We installed CentOS 7, and I would like to ask how I can install a tool for everyone to use.
More particularly, we want to install Cromwell. But since there are no instructions on how to install it on Unix, I downloaded Linuxbrew and installed it that way.
The downside is that it's not visible to the other users connected to the server.
I know this is a noob question, but any response would be appreciated.
A standard Unix machine has programs (tools and so on) installed in predefined directories like /bin, /usr/bin, and perhaps /usr/local/bin. Which one to choose is another matter; probably you want /usr/bin. The environment variable PATH also plays a role.
In the chosen directory there should be a file representing the "tool". You can put a copy of the executable file in that directory and set (or check) its permissions. Execution permission can be granted to all users, or only to some; it depends. In other words,
/home/me/.linuxbrew/Cellar/cromwell
is not a good place for a "system" tool or app; you should copy that executable into /usr/bin, set its ownership (perhaps to root?) with chown, and set the correct permissions with chmod.
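For example (the Cellar path comes from the question; the target name and ownership are assumptions):

sudo cp /home/me/.linuxbrew/Cellar/cromwell /usr/bin/cromwell
sudo chown root:root /usr/bin/cromwell
sudo chmod 755 /usr/bin/cromwell   # rwxr-xr-x: every user can run it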
You can make a hard link of your executable into the directory; this saves space, but also means that there is only one copy of the executable. Having two different copies (the "stable" one, and the other one you can fiddle with) can be handy.
Once the executable is reachable and executable by the chosen users, it may need some support files. To find them, it can rely on fixed locations, some environment variable, or some configuration file. But all these things are outside the scope of the question.
Try this command:
you@machine$ sudo chmod [who][op][permissions] filename
"who" refers to the users that have a particular permission: the user ("u"), the group ("g"), or other users ("o", also known as "world"). "op" determines whether to add ("+"), remove ("-") or explicitly set ("=") the particular permissions. "permissions" are whether the file should be readable ("r"), writable ("w"), or executable ("x"). As an example:
you@machine$ chmod o+x file
will add executable permission for others to file.
How can I check whether a file is referenced by any symlink in the directory? I want to delete all the other files except the symlink and the referenced file. Is there a direct command to check this, or a workaround?
If the symbolic link is in the same directory, or in a well-known one, that would be easy: just check whether any other entry resolves to the same inode with ls -d1Li.
Otherwise, there is no direct way to know if a symbolic link to any given file exists. Even exploring all mounted file systems wouldn't be reliable, as a link might exist on a currently unmounted filesystem, or on a remote machine accessing the file remotely (NFS, CIFS and the like).
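That said, if you only need to search one known directory, GNU find can do the inode comparison for you (myfile is a placeholder name):

find -L . -maxdepth 1 -samefile myfile

With -L, find dereferences symbolic links, so this lists myfile itself plus any symlink in the current directory that resolves to it.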
I have a batch file with an SFTP instruction to download txt files (a cron job), basically: get *.txt
Wondering what the best method is to delete those files after the server has downloaded them. The only problem is that the directory is constantly being updated with new files, so running rm *.txt afterwards won't work.
I've thought of a couple complex ways of doing this, but no command line based methods. So, I thought I'd shoot a query out to you guys, see if there's something I haven't thought of yet.
I suggest making a list of all the files that were downloaded and then issuing ftp delete/mdelete commands with the exact file names.
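A rough sketch of that approach, assuming the local download directory starts out empty; user@host and the paths are placeholders:

cd /local/incoming
echo "get *.txt" | sftp -b - user@host:/remote/dir
# whatever landed locally is exactly what was fetched;
# delete only those names on the server, leaving newly arrived files alone
for f in *.txt; do printf 'rm %s\n' "$f"; done > delete.batch
sftp -b delete.batch user@host:/remote/dir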
I'm working on a web application where a user uploads a list of files, which should then be immediately rsynced to a remote server. I have a list of all the local files that need to be rsynced, but they will be mixed in with other files that I do not want rsynced every time. I know rsync will only send the changed files, but this directory structure and contents will grow very large over time and the delay would not be acceptable.
I know that when doing a remote rsync, I can specify a list of remote files, i.e...
rsync "host:/path/to/file1 /path/to/file2 /path/to/file3"
... but that does not work once I remove "host:" and try to specify the files locally.
I also know I can use --files-from, but that would require me to create a file ahead of time with a list of files that I want to rsync (and then delete it afterwards). I think it'd be cleaner to just effectively say "rsync these 4 specific files to this remote server", but I can't seem to get that to work.
Is there any way to do what I'm trying to accomplish, or do I have to resort to creating a tmp file with a list in it?
Thanks!
You should be able to list the files in a way similar to the example you gave. I did this on my machine to copy 2 specific files from a directory containing many other files.
rsync test.sql test2.cpp myUser@myHost:path/to/files/synced/
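That said, if you ever do prefer --files-from, it can read the list from standard input, which avoids the temporary file you mentioned (the file names and host here are placeholders):

printf '%s\n' file1 file2 file3 | rsync -az --files-from=- . myUser@myHost:path/to/files/synced/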