SFTP - remove files - console

I have a batch file with an SFTP instruction to download txt files (cron job), basically: get *.txt
I'm wondering what the best method is to delete those files after the server has downloaded them. The only problem is that the directory is constantly being updated with new files, so running rm *.txt afterwards won't work.
I've thought of a couple of complex ways of doing this, but no command-line-based methods. So I thought I'd shoot a query out to you guys and see if there's something I haven't thought of yet.

I suggest making a list of all the files that were downloaded and then issuing ftp delete/mdelete commands with the exact file names.
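A rough sketch of that idea with OpenSSH's sftp (the host, directory, and file names below are illustrative, not from the original post):

# 1) record exactly which files exist right now, so later arrivals are untouched
sftp -b - user@host:/incoming <<'EOF' | grep -v '^sftp>' > downloaded.txt
ls -1 *.txt
EOF
# 2) turn that list into matching get/rm pairs and run them as one batch
awk 'NF {print "get " $0; print "rm " $0}' downloaded.txt > batch.sftp
sftp -b batch.sftp user@host:/incoming

Because the rm commands are generated from the captured list, files that arrive after the listing are left alone.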


Is there any command to know which script.sh produced a certain file in UNIX?

For example, I have in the same folder scripts like script1.sh and script2.sh, and then I have an output.vcf (bioinformatics stuff, but I guess it doesn't matter). I am sure one of those scripts created the output file, but I don't know which of them.
Is there any way to figure it out?
Thank you!
IMHO you can't get this information after the fact. But each UNIX has its own audit subsystem, and if you activate it you can see which file operation (in this case, file creation) was done by which program (shell script).
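On Linux, for example, the audit subsystem (auditd) can record this; a minimal sketch, assuming auditd is running and with the watched directory and key name as placeholders:

auditctl -w /data/results/ -p wa -k vcf-trace   # watch write/attribute events in the output directory
ausearch -k vcf-trace -i                        # afterwards: shows which command touched which file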
Actually there is a way. You can browse the scripts and search for the filename in question. There will be a problem if both scripts mention this filename.
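For example:

grep -n 'output.vcf' script1.sh script2.sh

This only works if the name appears literally in the scripts; if the filename is built at runtime, grep won't find it.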

Rsync - How to display only changed files

When my colleague and I upload a PHP web project to production, we use rsync for the file transfer with these arguments:
rsync -rltz --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX
When this runs, we see a long list of files that were changed.
Running it twice in a row will always show just the files that changed between the two transfers.
However, when my colleague runs the same command after I did it, he will see a very long list of all files being changed (though the contents are identical) and this is extremely fast.
If he uploads again, then again there will be only minimal output.
So it seems to me that we get the correct output, only showing changes, but if someone else uploads from another computer, rsync regards everything as changed.
I believe this may have something to do with the file permissions or times, but would like to know how to best solve this.
The idea is that we only see the changes, regardless who does the upload and in what order.
Such a huge file list is quite scary to see in a large project, because it means we have no idea what was actually changed.
PS: We both deploy using the same user@server as target.
The t in your command says to copy the timestamps of the files, so if they don't match you'll see them get updated. If you think the timestamps on your two machines should match then the problem is something else.
The easiest way to ensure that the timestamps match would be to rsync them down from the server before making your edits.
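For example, something along these lines before you start editing (the paths are illustrative):

rsync -rltz user@server:/var/www/project/ /local/working-copy/

That way both machines carry the server's timestamps, and the next upload only reports real changes.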
Incidentally, having two people use rsync to update a production server seems error-prone and fragile. You should consider putting your files in Git and pushing them to the server that way (you'd need a server-side hook to update the working copy for the web server to use).

How do I download efficiently with rsync?

A couple of questions related to one theme: downloading efficiently with Rsync.
Currently, I move files from an 'upload' folder onto a local server using rsync. Files to be moved are often dumped there, and I regularly run rsync so the files don't build up. I use '--remove-source-files' to remove files that have been transferred.
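Something like this, where the paths are just placeholders:

rsync -av --remove-source-files /srv/upload/ /srv/archive/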
1) The '--delete' options that remove destination files come in several variants that let you choose when the files are removed. Something similar would be handy for '--remove-source-files', since it seems that, by default, rsync only removes the source files after all files have been transferred, rather than after each file. Other than writing a script to make rsync transfer files one by one, is there a better way to do this?
2) On the same problem, if a large (single) file is transferred, it can only be deleted after the whole thing has been successfully moved. It strikes me that I might be able to use 'split' to break the file into smaller chunks, allowing each to be deleted as the file downloads; is there a better way to do this?
Thanks.

Rsync and SSH: Only rename folder when renamed at source

I have been reading the rsync documentation for a few hours, but I can't figure out how to tell rsync to only rename (and not re-upload the folder and its contents) destination folders when they are renamed at the source.
I'm connecting to the destination with SSH; the local folder is the source and the remote server is the destination. If I rename a folder containing files, rsync automatically re-uploads all the content of the source folder. I'm not using rsync's server part; maybe it would work if I were to do that?
I have encountered the same behavior with lftp, and that tool doesn't seem to have such options either. Even when it goes by the file-date rule, files inside the renamed folder are removed and re-uploaded.
Thanks in advance if someone knows how to manage this :)
I've been looking for something similar.
So far, the best solution I have found is at:
http://serenadetoacuckooo.blogspot.com/2009/07/rsync-and-directory-renaming.html
It basically mentions including a meta-file in each folder that indicates the folder's name.
Essentially, you would compare that file against the directory name, and rsync only if they match (otherwise, issue a remote rename command).
It depends on the scope of what you're using rsync for, but I hope that this information can help you.
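A very rough sketch of that idea, assuming a .dirname meta-file in each folder and ssh access to the destination (every name and path below is illustrative):

dir=/local/project/photos                   # folder being synced
old=$(cat "$dir/.dirname" 2>/dev/null)      # name recorded at the last sync
new=$(basename "$dir")
if [ -n "$old" ] && [ "$old" != "$new" ]; then
  # the folder was renamed locally: rename it remotely instead of re-uploading
  ssh user@host "mv '/remote/project/$old' '/remote/project/$new'"
fi
echo "$new" > "$dir/.dirname"
rsync -az "$dir/" "user@host:/remote/project/$new/"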
How would rsync or any other program know what constitutes a rename? What if two directories are very similar and either one could plausibly be a rename of what went before? It's not possible to tell. I think you're stuck with uploading everything again.
You know about the --delete option, right:
--delete delete files that don't exist on the sending side
Note also the --force option:
--force force deletion of directories even if not empty
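For instance (source and destination are illustrative):

rsync -az --delete --force /local/project/ user@host:/remote/project/

With those options the old directory name disappears from the destination after the rename; its contents are still uploaded again under the new name.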

rsync list of specific local files in 1 step

I'm working on a web application where a user uploads a list of files, which should then be immediately rsynced to a remote server. I have a list of all the local files that need to be rsynced, but they will be mixed in with other files that I do not want rsynced every time. I know rsync will only send the changed files, but this directory structure and contents will grow very large over time and the delay would not be acceptable.
I know that when doing a remote rsync, I can specify a list of remote files, e.g....
rsync "host:/path/to/file1 /path/to/file2 /path/to/file3"
... but that does not work once I remove "host:" and try to specify the files locally.
I also know I can use --files-from, but that would require me to create a file ahead of time with a list of files that I want to rsync (and then delete it afterwards). I think it'd be cleaner to just effectively say "rsync these 4 specific files to this remote server", but I can't seem to get that to work.
Is there any way to do what I'm trying to accomplish, or do I have to resort to creating a tmp file with a list in it?
Thanks!
You should be able to list the files in a way similar to the example you gave. I did this on my machine to copy two specific files out of a directory with many other files present.
rsync test.sql test2.cpp myUser@myHost:path/to/files/synced/
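If the temp file is the only objection to --files-from, note that rsync can also read the list from standard input ('--files-from=-'), so something along these lines should work (the paths are illustrative):

printf '%s\n' file1.txt sub/file2.txt | rsync -a --files-from=- /local/base/ myUser@myHost:path/to/files/synced/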
