I want to move files from a local folder to a remote URL on a schedule, using WebDAV.
I found the guide below useful, but it shows how to transfer only a single file. Instead, I want to transfer all files from a local folder to a remote URL through WinSCP using the WebDAV protocol:
http://winscp.net/eng/docs/guide_automation
Any pointers for this would be helpful.
Use the following WinSCP script:
open http://user:password@example.com/
put d:\path\* /home/user/
close
Read about file masks.
If you really need to move the files (as opposed to copying them), add the -delete switch to the put command:
put -delete d:\path\* /home/user/
For scheduling, see the WinSCP guide to scheduling file transfers.
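As a rough sketch of how the scheduled run might be wired up on Windows (the script file name, paths, and task name below are all assumptions, not part of the WinSCP guide):

rem upload.bat - runs the saved WinSCP script non-interactively with logging
"C:\Program Files (x86)\WinSCP\winscp.com" /ini=nul /script=c:\scripts\transfer.txt /log=c:\scripts\transfer.log

rem Register the batch file with Task Scheduler to run daily at 02:00
schtasks /create /tn "WebDAV upload" /sc daily /st 02:00 /tr c:\scripts\upload.bat

Saving the open/put/close commands from the answer above into c:\scripts\transfer.txt keeps the scheduled task itself trivial.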
When developing for ASP.NET using Visual Studio for Web, it is really convenient that when you deploy your website to the web server, it cleverly checks which files have changed and uploads only those. In addition, if you deleted some files from your source, it detects that too and deletes them from the web server, since they are no longer needed.
I started developing with the LAMP stack and am wondering how to deploy to a web server in a similar way.
I tried using FileZilla, and on copy/pasting the source files to the web server, you get these options if similar files already exist:
-Overwrite
-Overwrite if source is newer
-Overwrite if different size
-Overwrite if different size or source newer
"Overwrite if source is newer" works, kind of, but it only checks the date modified, not the content of the file. Also, the above method does not delete files from the web server that were deleted from the source.
Is there a better way to do this with Filezilla? (or maybe use some other program?)
Thanks.
You can use rsync to accomplish this.
When you want to push out changes, you would do something like this from your production server:
rsync -av user@<developmentIp>:/web/root/* /production/web/root/
The pattern is rsync --flags [user@host:]/source/dir [user@host:]/destination/dir
You only need the user@host part for remote hosts. The user must have SSH access to the host.
A couple of small suggestions:
The command can be run from either the source or the destination machine. I find it better to run it from the destination, for permissions reasons (i.e. you're reading from the remote and writing to the local).
Do some tests first; I always mix up the directory details: do I need the trailing slash, should I use the star, ...
Read the man page; there are a lot of available options that may be helpful (--delete, --exclude, -a). See the sketch below.
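As a rough sketch of a deploy run from the destination (web) server, assuming the code lives in /web/root on the development box (the host name, paths, and excluded directory are placeholders I've made up):

# Dry run first: -n only lists what would change, touching nothing
rsync -avn --delete --exclude='.git/' user@dev.example.com:/web/root/ /production/web/root/

# Same command without -n does the real sync; --delete removes files
# on the server that no longer exist in the source
rsync -av --delete --exclude='.git/' user@dev.example.com:/web/root/ /production/web/root/

Note the trailing slash on the source path: with it, rsync copies the directory's contents rather than the directory itself, which sidesteps the star-versus-slash confusion.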
I have a file on a remote server that I can connect to via ssh. I would like to copy the file from the remote server at the path: '/home/example.txt' to my computer's desktop.
Should I be using wget, sftp, ftp, or simply rm?
Bonus points if you know a good resource for UNIX documentation, since Google's results were not great.
Use scp(1):
~$ scp user@host:/home/example.txt .
None of them; you should use scp.
Use scp, or rcp if you don't want/need the transfer to be secured. Alternatively, rsync is nicer for anything more complicated, or for remote file transfers on a regular basis.
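For example, the same copy done with rsync (the host and paths are placeholders):

# -a preserves permissions and times, -v is verbose, -z compresses in transit
rsync -avz user@host:/home/example.txt ~/Desktop/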
For the documentation, look at the man pages, e.g. man scp, or online at:
http://linux.die.net/man/1/scp
http://linux.die.net/man/1/rcp
http://linux.die.net/man/1/rsync
Try the FileZilla utility.
More info at http://filezilla-project.org/
I have an rsync client which pushes all changes to the server. Suppose I change an already-existing copy on the server and then run rsync from my client: the client does not update the changed copy on the server, i.e. it is unable to see the change I have made on the server.
I am using rsync with the following options:
-progu
How can I make the client see the changed copy and update it?
Let's use different terms; Source and Target make more sense here. You have a server that is normally your Target. Now you've made changes to files on the server that you'd like reflected in the Source.
What you're asking to do is reverse the roles of Source and Target in order to update this file.
The -u option already tells rsync to "skip files that are newer on the receiver". So you may be safe if you simply run rsync in the other direction, from your traditional target to your traditional source. Files that are newer on your "client" won't be updated (because of -u); only the file that is newer on the server should be transferred.
Test this with -v -n options before running it "for real".
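A minimal sketch of that reversed run, keeping the options the client already uses (the host and paths are placeholders):

# Dry run of the reverse direction, server -> client:
# -n lists what would be transferred, -v prints it, and -u leaves
# alone any files that are newer on the receiving (client) side
rsync -progu -v -n user@server:/remote/dir/ /local/dir/

# If the output looks right, run it again without -n
rsync -progu -v user@server:/remote/dir/ /local/dir/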
Requirement
Several files on remote machines ought to be deleted via sh. The names of the files to be deleted are known.
Approach
1) A script was written with ftp (requires credentials) and a delete command. File names were passed as an array and iterated over with a for loop, with the ftp + delete commands enclosed in the loop. Files were not getting deleted by this approach.
2) Another approach attempted was to pass a temp.ftp file (which contains the delete commands) to the ftp command and then rm the temp.ftp file, e.g. ftp < temp.ftp
Request
I need pointers on deleting multiple files on a remote machine via a shell script.
I recommend using ssh instead of ftp for interfacing with the remote Unix machine.
SSH allows you to run remote commands easily and securely.
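As a minimal sketch of that approach, assuming key-based SSH login is already set up (the host, user, and file names below are placeholders):

#!/bin/sh
# Names of the files to delete on the remote machine (assumed)
FILES="/tmp/a.log /tmp/b.log /tmp/c.log"

# A single ssh call removes them all; -f stops rm from failing
# on names that are already gone
ssh user@remote.example.com "rm -f $FILES"

This avoids the quoting and auto-login headaches of scripting the ftp client, and the exit status of ssh tells you whether the remote rm succeeded.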
I just want to transfer a file from an FTP server to a Unix folder; this part is straightforward.
If the file doesn't exist on the FTP server, then the script needs to run recursively until it finds the file. Please let me know how I can get that file.
Please remember that the script has to run on the FTP server.
Thanks
CK
I'd mount the FTP server with curlftpfs (http://curlftpfs.sourceforge.net) and then use it as if it were a local file system; for example, run find(1).
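A rough sketch of that idea, assuming curlftpfs and FUSE are installed (the mount point, host, credentials, and file name are placeholders):

# Mount the FTP server as a local directory
mkdir -p /mnt/ftp
curlftpfs ftp://user:password@ftp.example.com /mnt/ftp

# Search the mounted tree for the file and copy it out when found
find /mnt/ftp -name 'example.txt' -exec cp {} /home/user/ \;

# Unmount when done
fusermount -u /mnt/ftp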
You need to write a program to automate your FTP session. You can either write your own custom FTP client, which is not that hard if you know a few things about network programming, or write a script to automate a session for an existing client. For the latter approach, I suggest using Expect if you are proficient with Tcl, or Pexpect if you prefer Python. Expect is a tool designed to automate interactive tasks like downloading a file with FTP.
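If a full Expect script feels like overkill, a plain shell loop that retries a scripted ftp session can also work; a minimal sketch, assuming the stock ftp client and placeholder host, credentials, directory, and file name:

#!/bin/sh
# Retry until the file has been downloaded (non-empty), sleeping
# a minute between attempts; -n suppresses auto-login so the
# credentials can be given explicitly
until [ -s /home/user/example.txt ]; do
    ftp -n ftp.example.com <<EOF
user myuser mypassword
cd /remote/dir
get example.txt /home/user/example.txt
quit
EOF
    [ -s /home/user/example.txt ] || sleep 60
done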