Syncing directories with rsync

I have a directory on my computer containing many files, and sometimes I need to sync it with a clone on another device.
I use rsync.
rsync does not detect when a file has been renamed or moved to another directory, and it can be very slow at times.
Is there a smart syncing tool that can detect when a file has been renamed or moved in the main directory and simply rename it on the clone, without a physical remove and re-copy?
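For reference, a plain mirroring run looks like the sketch below (the paths are placeholders). rsync's --fuzzy option can soften the cost by reusing a similarly named file in the destination directory as a delta basis, but it is not true rename detection:

# basic mirror; a renamed file is treated as a delete plus a full re-copy
rsync -avh --delete /main/dir/ /clone/dir/
# --fuzzy (-y) looks for a similar basis file in the destination directory
rsync -avh --delete --fuzzy /main/dir/ /clone/dir/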

Related

How can I sync only the files I have locally?

I already synced the files from a remote server to my local machine with rsync.
When I ran rsync I excluded several folders, and after the sync I deleted some files from the local copy.
How can I resync so that I get the latest version of the files I still have locally, while ignoring both the files I deleted and the folders I excluded from the beginning?
Instead of just deleting the local files, you could additionally add them to an rsync exclude list. A wrapper around the remove command could do that automatically; see the sketch below.
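A minimal sketch of such a wrapper, assuming the exclude file lives at ~/.rsync-excludes and that you pass paths relative to the sync root (both are assumptions):

# remove a file and remember it so future rsync runs skip it
EXCLUDE_FILE="$HOME/.rsync-excludes"
rsync_rm() {
    for f in "$@"; do
        printf '%s\n' "$f" >> "$EXCLUDE_FILE"   # record the path for rsync
        rm -- "$f"                              # then actually delete it
    done
}
# resync later, honoring the accumulated exclude list:
# rsync -av --exclude-from="$EXCLUDE_FILE" remote:/src/ /local/dst/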

Creating a file in an unmounted folder

What happens if we try to create a file in an unmounted folder?
Is the file created on the local system?
I have a mounted folder that can sometimes become unmounted for various reasons.
I'm going to schedule an Oracle procedure that writes a file in there; what will happen if the folder is somehow unmounted?
Is the file created on the local system?
Yes. Or, better, the file is created in that directory (don't call Unix directories "folders"... :)), wherever that directory is. If nothing is mounted there, it is an ordinary directory on the local file system, so the file lands on the local disk; if it is part of a mounted remote file system, the file is not part of the local system at all.
Write permissions on the directory play a role, and the mount command can also alter those permissions, but that is another, complicated matter.
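If the job must never fall back to writing on the local disk, a guard along these lines can help (the path /mnt/export is hypothetical; mountpoint(1) comes with util-linux on Linux):

# refuse to write unless the directory is really a mount point
if mountpoint -q /mnt/export; then
    echo "report data" > /mnt/export/report.txt
else
    echo "/mnt/export is not mounted; not writing" >&2
    exit 1
fi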

Using rsync to deploy code updates for Symfony application

I have a couple of development machines that I code my changes on and one production server where I have deployed my Symfony application. Currently my deployment process is tedious and consists of the following workflow:
Determine the files changed in the last commit:
svn log -v -r HEAD
FTP those files to the server as the regular user
As root, manually copy those files to their destination and, if required because the file is new, change the owner to the apache user
The local user does not have access to the apache directories, which is why I must use root. I'm always worried that something will go wrong, either due to a forgotten file during the FTP or during the copy to the apache src directory.
I was thinking that instead I should FTP the entire Symfony app/ and src/ directories, along with composer.json, to the server as the regular user, and then come up with a script using rsync to sync all of the files.
New workflow would be:
FTP app/, src/, and composer.json to the server, into the local user's project directory
Run the sync script to sync the files
Clear the cache
Is this a good solution or is there something better for Symfony projects?
This question is similar and gives an example of the rsync command, but the pros and cons of this method are not discussed. Ideally I'd like the method that is the most reliable and easiest to set up, preferably without the need to install new software.
Basically any automated solution would be better than rsync or FTP. There are multiple things to do, as you mentioned: copy files, clear the cache, run migrations, generate assets; the list goes on.
Here you will find a list of potential solutions:
http://symfony.com/doc/current/cookbook/deployment/tools.html#using-build-scripts-and-other-tools
From my experience with Symfony I can recommend capifony; it takes a while to understand, but it pays off.
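For comparison, the manual rsync approach from the question might look like this sketch (the host, paths, and the Symfony 2-era cache:clear command are assumptions):

# push the code to the server, skipping generated files
rsync -az --delete --exclude 'app/cache/' --exclude 'app/logs/' \
    app src composer.json deploy@prod:/var/www/myapp/
# then clear the cache on the server
ssh deploy@prod 'cd /var/www/myapp && php app/console cache:clear --env=prod'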

Trouble designing a grunt workflow with rev and usemin for web development

I'm going to start using grunt-rev and grunt-usemin with grunt-watch for my web development needs (specifically, a RESTful web app).
I have a local development machine that will run grunt-watch to attach revision identifiers to my JS files. I git commit and git push my tree to a git repo, and then ask the production server to git pull the changes from the repo to show them to web visitors.
The problem is that I don't want my git repo to store different filenames (due to grunt-rev) on each commit. That would be bad: I wouldn't be able to git diff between commits without my screen being flooded with the contents of files that appear and disappear, and it could (sometimes) take up far more storage than storing only the small diffs of the files.
The only solution I see is to add the build directory containing the versioned filenames to my .gitignore, so those files (with their constantly changing names) are not stored in git. But wouldn't that mean I would have to run grunt-watch on my production server as well, in order to produce the build directory with the versioned filenames there too? That gets complicated: a new process has to run on the remote server, with its own small chance of error while processing the files. Not the solution I was hoping for.
Do you have another solution? What would you suggest I do?
What I do to solve this is remove the previous "build" files before committing and deploying a new version. There is no need to keep older generated files, because you can always rebuild them from the source files (which are in git); a sketch follows.
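A minimal sketch of that clean-then-rebuild step, assuming the generated files land in dist/ and the rebuild task is called grunt build (both names are assumptions):

rm -rf dist/            # drop the previously generated, rev'd files
grunt build             # regenerate dist/ with fresh revision hashes
git add -A              # stages the new files and records the old deletions
git commit -m 'rebuild assets'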

Unix: check if a file is referenced by any symlinks

How do I check whether a file is referenced by any symlinks in the directory? I want to delete all the other files except the symlink and the referenced file. Is there a direct command to check, or a workaround?
If the symbolic link is in the same directory, or in a well-known one, that is easy: just check which entries show the same inode number once links are dereferenced, e.g. with ls -d1Li *.
Otherwise, there is no direct way to know whether a symbolic link to any given file exists. Even exploring all mounted file systems would not be reliable, as the link might exist on a currently unmounted file system, or on a remote machine accessing the file remotely (NFS, CIFS and the like).
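Within a single directory, GNU find can do that inode comparison directly; a minimal sketch, with target.txt as a placeholder name:

# list every entry that resolves to the same file as target.txt
find -L . -maxdepth 1 -samefile target.txt
# inverted, this lists the deletion candidates (review before removing!)
find -L . -mindepth 1 -maxdepth 1 ! -samefile target.txt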
