I can't find a reliable file-syncing program for my Mac, so I have been using the command-line rsync between two folders.
I have been running "rsync -r source destination".
- Does this sync files both ways, or only from the source to the destination?
- If a file was previously synced between the two folders but later deleted because it was no longer needed, does it get deleted on both the source and the destination, or will it always be copied back to wherever it is missing?
No, rsync will synchronise the contents of a remote directory to a local directory. In that respect it is one-way. Optionally you can force it to delete local files that no longer exist in the remote folder.
If you want to keep the most recent changes on both machines, you would have to supply a more complicated rsync incantation and set up both machines as rsync servers. I imagine doing so will eventually get you into trouble, especially if you want strict control over deletions.
In any case, you can use the -u (or --update) option which will skip any files that are newer on the destination end. You do have to worry about the timestamps, and this will not handle any conflicts or merges. Still... It may be as simple as:
rsync -u -r target1 target2
rsync -u -r target2 target1
That won't do anything about deletion. You have no way of knowing that a missing file on one target was deleted there instead of a new file having been created on the other target.
This is why version control was invented... And for people who are scared of version control, services like Dropbox exist.
Answering the original question:
1) It synchronizes files in one direction only, depending on whether you use a push or a pull mechanism.
For the push and pull mechanisms, see the manual page ("man rsync").
So, for the rest of your question, don't assume that it works both ways.
2) The file only gets deleted in the destination directory.
You can get more details on this from rsync --help; see the option
--delete, which deletes extraneous files from destination dirs,
and the other delete-related options.
3) The missing files will be copied only to the destination directory if you are pushing files to a remote machine/directory.
A sample example of the push mechanism:
rsync -avz /home/local_dir/abc.txt remoteuser@192.168.xx.xx:/home/remoteuser/
If a file named abc.txt is already present in the destination directory, it will be updated depending on whether the local copy is newer.
And if abc.txt is not present in the remote directory, a brand-new file named abc.txt will be created with the contents of the local version.
A sample example of the pull mechanism:
rsync -avz remoteuser@192.168.xx.xx:/home/remoteuser/abc.txt /home/local_dir/
If a file named abc.txt is already present in the local directory, it will be updated depending on whether the remote copy is newer.
And if abc.txt is not present in the local directory, a brand-new file named abc.txt will be created with the contents of the remote version.
I rsync between two systems using ssh. The command line that I use is something like:
rsync -avzh -e "ssh" ./builds/2023_02_02 akamaicdn:builds/2023_02_02
The akamaicdn is accessed using ssh and the corresponding identity, host name etc are specified in ~/.ssh/config file.
Most of the time the destination dir doesn't exist, which means it is a full upload, even after rsync's optimizations.
But the content uploaded the next day has a lot in common with the previous day's, since these are build dirs and there is a lot of shared content between them.
Is there any way to tell the remote rsync to scan a set of previous dirs when it is determining which parts of a file have to be uploaded?
I am open to other optimizations if you can think of any.
I'm a little confused about processes and open file tables.
I know that if 2 processes open the same file, there will be 2 entries in the open file table. I am trying to find out the reason for this.
Why are 2 entries created in the open file table when 2 different processes open the same file? Why can't it be done with 1 entry?
I'm not quite clear what you mean by "file tables". There are no common structures in the Linux kernel referred to as "file tables".
There is /etc/fstab, which stands for "filesystem table", which lists filesystems which are automatically mounted when the system is booted.
The "filetable" Stack Overflow tag that you included in this question is for SQL Server and not directly connected with Linux.
What it sounds like you are referring to when you talk about open files is links. See Hard and soft link mechanism. When a file is open in Linux, the kernel maintains what is basically another hard link to the file. That is why you can actually delete a file that is open and the system will continue running normally. Only when the application closes the file will the space on the disk actually be marked as free.
So for each inode on a filesystem (an inode is generally what we think of as a file), there are often multiple links--one for each entry in a directory, and one for each time an application opens the file.
Update: Here is a quote from the web page that inspired this question:
Each file table entry contains information about the current file. Foremost, is the status of the file, such as the file read or write status and other status information. Additionally, the file table entry maintains an offset which describes how many bytes have been read from (or written to) the file indicating where to read/write from next.
So, to directly answer the question, "Why there are 2 entries created in the open file table when 2 different processes try to reach the same file?", 2 entries are required because they may contain different information. One process may open the file read-only while the other read-write. And the file offset (position within the file) for each process will almost certainly be different.
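That per-open state can be observed even from the shell, where each redirection is a separate open and therefore gets its own offset; a small sketch (the file name and descriptor numbers are made up for illustration):

```shell
# Open the same file twice; each open gets its own file offset.
printf 'first\nsecond\n' > /tmp/offset_demo.txt
exec 3< /tmp/offset_demo.txt   # open #1
exec 4< /tmp/offset_demo.txt   # open #2
read a <&3    # advances only fd 3's offset
read b <&4    # fd 4 is still at the start of the file
echo "$a / $b"
exec 3<&- 4<&-
```

Both reads return "first", because the two opens maintain independent offsets even though they refer to the same file.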
I have an rsync client which pushes all changes to the server. Suppose I change an already-existing copy on the server and then run rsync from my client. The client does not update the changed copy on the server, i.e. it is unable to see the change I have made there.
I am using rsync with the following options:
-progu
How can I make the client see the changed copy and update it?
Let's use different terms. Source and Target make more sense for this. You have a server that is normally your Target. Now you've made changes to files on the server that you'd like reflected in Source.
What you're asking to do is reverse the roles of Source and Target in order to update this file.
The -u option already tells rsync to "skip files that are newer on the receiver". So you may be safe if you simply run an rsync in the other direction -- from your traditional target to your traditional source. Files that are newer on your "client" won't be updated (because of -u); only the file that is newer on the server should be updated.
Test this with -v -n options before running it "for real".
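Concretely, the reversed transfer with a dry run might look like this (the host and paths are hypothetical):

```shell
# Pull from the server back to the client. -u skips anything the
# client already has a newer copy of; -n (dry run) with -v shows what
# would change without touching any files.
rsync -progu -v -n user@server:/srv/project/ ~/project/
```

Once the dry-run output lists only the files you expect, drop the -n and run it for real.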
I have been reading the rsync documentation for a few hours, but I can't figure out how to tell rsync to only rename destination folders (rather than re-uploading the folder and its contents) when they are renamed at the source.
I'm connecting to the destination over SSH; the local folder is the source and the remote server is the destination. If I rename a folder containing files, rsync re-uploads all the content of the source folder. I'm not using rsync's server mode -- maybe it would work if I did?
I have encountered the same behavior with lftp, and that tool doesn't seem to have such an option either. Even when syncing is based on file dates, files inside the renamed folder are removed and re-uploaded.
Thanks in advance if someone knows how to manage this :)
I've been looking for something similar.
so far, the best solution I have found is at:
http://serenadetoacuckooo.blogspot.com/2009/07/rsync-and-directory-renaming.html
It basically mentions including a meta-file in each folder that indicates the folder's name.
Essentially, you would want to check that file with the directory name, and rsync only if they are the same (otherwise, issue a remote rename command.)
It depends on the scope of what you're using rsync for, but I hope that this information can help you.
How would rsync or any other program know what constitutes a rename? What if two directories are very similar and either one could plausibly be a rename of what went before? It can't be done reliably. I think you're stuck with uploading everything again.
You know about the --delete option, right:
--delete delete files that don't exist on the sending side
Note also the --force option:
--force force deletion of directories even if not empty
I'm working on a web application where a user uploads a list of files, which should then be immediately rsynced to a remote server. I have a list of all the local files that need to be rsynced, but they will be mixed in with other files that I do not want rsynced every time. I know rsync will only send the changed files, but this directory structure and contents will grow very large over time and the delay would not be acceptable.
I know that doing a remote rsync, I can specify a list of remote files, i.e...
rsync "host:/path/to/file1 /path/to/file2 /path/to/file3"
... but that does not work once I remove "host:" and try to specify the files locally.
I also know I can use --files-from, but that would require me to create a file ahead of time with a list of files that I want to rsync (and then delete it afterwards). I think it'd be cleaner to just effectively say "rsync these 4 specific files to this remote server", but I can't seem to get that to work.
Is there any way to do what I'm trying to accomplish, or do I have to resort to creating a tmp file with a list in it?
Thanks!
You should be able to list the files similar to the example you gave. I did this on my machine to copy 2 specific files from a directory with many other files present.
rsync test.sql test2.cpp myUser@myHost:path/to/files/synced/