There is a directory being served over the net which I'm interested in monitoring. Its contents are various versions of software that I'm using, and I'd like to write a script that I could run which checks what's there and downloads anything that is newer than what I've already got.
Is there a way, say with wget or something, to get a directory listing? I've tried using wget on the directory, which gives me HTML. To avoid having to parse the HTML document, is there a way of retrieving a simple listing like ls would give?
I just figured out a way to do it:
$ wget --spider -r --no-parent http://some.served.dir.ca/
It's quite verbose, so you need to pipe it through grep a couple of times depending on what you're after, but the information is all there. It prints to stderr, so append 2>&1 to let grep at it. I grepped for "\.tar\.gz" to find all of the tarballs the site had to offer.
Note that wget writes temporary files in the working directory and doesn't clean up the directory tree it creates. If that's a problem, you can run it from a temporary directory:
$ (cd /tmp && wget --spider -r --no-parent http://some.served.dir.ca/)
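As a sketch of the grep step (the URL format shown is typical of wget's progress lines; the host is the example above, and the simulated lines below just stand in for real spider output):

```shell
# Simulate a fragment of wget --spider output (normally on stderr,
# hence the 2>&1 mentioned above) and pull out just the tarball URLs.
printf '%s\n' \
  'Spider mode enabled. Check if remote file exists.' \
  '--2024-01-01 12:00:00--  http://some.served.dir.ca/foo-1.0.tar.gz' \
  '--2024-01-01 12:00:01--  http://some.served.dir.ca/index.html' |
grep -o 'http://[^ ]*\.tar\.gz' | sort -u
```

This prints each distinct tarball URL once; feed the real spider output through the same pipe.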
What you are asking for is best served using FTP, not HTTP.
HTTP has no concept of directory listings, FTP does.
Most HTTP servers do not allow access to directory listings, and those that do offer it as a feature of the server, not of the HTTP protocol. Those servers choose to generate and send an HTML page meant for human consumption, not machine consumption. You have no control over that, and would have no choice but to parse the HTML.
FTP is designed for machine consumption, more so with the introduction of the MLST and MLSD commands that replace the ambiguous LIST command.
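To see why MLSD is friendlier to scripts, here is a sketch parsing a sample MLSD response (the facts shown are typical per RFC 3659; a real reply arrives over the FTP data connection):

```shell
# Each MLSD line is "fact=value;...;<space>name", so extracting the
# names of regular files is a one-liner with no HTML involved.
printf '%s\n' \
  'type=file;size=1024;modify=20240101120000; foo-1.0.tar.gz' \
  'type=dir;modify=20240101120000; old-releases' |
awk -F'; ' '$1 ~ /type=file/ { print $2 }'
```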
The following is not recursive, but it worked for me:
$ curl -s https://www.kernel.org/pub/software/scm/git/
The output is HTML and is written to stdout. Unlike with wget, there is nothing written to disk.
-s (--silent) is relevant when piping the output, especially within a script that must not be noisy.
Whenever possible, prefer https over ftp or http.
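If you do end up extracting filenames from that HTML, a rough sketch (assuming a typical autoindex page where each entry is wrapped in an href; real pages vary, so adjust the pattern):

```shell
# The two printf lines stand in for the output of:
#   curl -s https://www.kernel.org/pub/software/scm/git/
printf '%s\n' \
  '<a href="git-2.43.0.tar.gz">git-2.43.0.tar.gz</a>' \
  '<a href="git-2.43.0.tar.sign">git-2.43.0.tar.sign</a>' |
grep -o 'href="[^"]*\.tar\.gz"' |
sed -e 's/^href="//' -e 's/"$//'
```

This is fragile by nature (it is still HTML scraping), but for a simple autoindex page it is often all you need.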
If it's being served by http then there's no way to get a simple directory listing. The listing you see when you browse there, which is the one wget is retrieving, is generated by the web server as an HTML page. All you can do is parse that page and extract the information.
AFAIK, there is no way to get a directory listing like that, for security reasons. It is rather lucky that your target directory has an HTML listing at all, because that does allow you to parse it and discover new downloads.
You can use IDM (Internet Download Manager). It has a utility named "IDM Site Grabber": give it the http/https URLs and it will download all files and folders over http/https for you.
elinks does a halfway decent job of this. Just elinks <URL> to interact with a directory tree through the terminal.
You can also dump the content to the terminal. In that case, you may want flags like --no-references and --no-numbering.
Related
I've got a directory per environment on the receiver, with links to some files. This is because some files are shared between environments. What I would like is that, when I rsync from my remote host to my receiver, the retrieved files are placed following the links. Currently, rsync replaces my local links with the retrieved files. Is there a way to tell rsync to follow links on the receiver host?
If those links point to directories, you can use the -K option, which works flawlessly.
Things are different when the links on the receiver point to files (not dirs). I'm afraid there is currently no simple way to preserve the links in the destination while updating the files they point to. You might be interested in the -L option if you are sending links from the source but want to copy the contents they point to rather than the links themselves. However, this would also remove the corresponding links in the destination and, as mentioned earlier, just replace the files they are pointing to.
Check out https://serverfault.com/questions/245774/perform-rsync-while-following-sym-links for more information.
I am currently trying to automate our online store so that orders from our system get put onto our logistics company's server. At the moment, our orders automatically go into a folder called 'automated-orders' on our server through a WordPress plugin. I cannot get this plugin to interact directly with the logistics server.
The goal:
To get the .csv files in our 'automated-orders' folder automatically copied (every night) from a directory on our cPanel-hosted web server to an FTP location on our logistics company's server. Their server requires a login and password. Some days there may not be any order files, in which case it should just do nothing. Ideally, it will check whether there are any new files before doing the transfer.
I have been looking through these forums and others about cron jobs, wget and wput, but I don't think I have the syntax right, as nothing happens. This is what I have as our cron command line:
wget /home/rhinospo/public_html/automated-orders --ftp-user=RH1 --password='PASSWORD' ftp://RH1@182.50.154.233/RH1/Incoming
Could someone please see what I am doing wrong in this syntax? Alternatively, is there another/better way to achieve what I am trying to do?
Cheers
You can use curl for this case:
curl -T /home/rhinospo/public_html/automated-orders ftp://182.50.154.233/RH1/Incoming --user RH1:password
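One caveat: -T uploads a single file, and the source in the question is a directory. A sketch of a cron-able loop over the CSVs (host, user, and paths are the ones from the question; the trailing slash on the URL makes curl keep each file's original name):

```shell
# Upload every CSV in the folder; if the glob matches nothing,
# the guard skips the upload and the job exits quietly.
for f in /home/rhinospo/public_html/automated-orders/*.csv; do
  [ -e "$f" ] || continue
  curl -sS -T "$f" --user 'RH1:PASSWORD' \
    ftp://182.50.154.233/RH1/Incoming/
done
```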
I need to get a file from a remote server and I am using the ls -lA command to list the files inside the FTP block. However, I see the "." and ".." entries being listed as well. Is there any way to omit them and list only the files that are not hidden?
The FTP protocol has no way to control which files the server includes in the listing.
That said, many servers do support a non-standard -a switch to show hidden files. Indeed, by default most FTP servers show neither hidden files nor the . and .. entries; you have to ask for them explicitly with -a.
But if your server does show the hidden files, I'm afraid there's no way to force it not to from the client side. There may be a server-side configuration option for this, but we don't know which FTP server you are using.
Generally, if you need to do any kind of filtering, you have to do it locally after retrieving a complete directory listing.
For example, to drop the hidden entries (names starting with a dot) from a name-only listing:
grep -v '^\.' listing.txt
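For a long-format listing like the ls -lA output in the question, the same filtering can be applied to the last field (a sketch; it assumes filenames without spaces):

```shell
# Drop entries whose name (last field) starts with a dot, which
# covers hidden files as well as "." and "..". The printf lines
# stand in for a saved long-format listing.
printf '%s\n' \
  '-rw-r--r-- 1 user group 10 Jan 01 12:00 notes.txt' \
  'drwxr-xr-x 2 user group  4 Jan 01 12:00 .' \
  '-rw-r--r-- 1 user group  5 Jan 01 12:00 .hidden' |
awk '$NF !~ /^\./'
```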
Presumably by the files that are not hidden you mean the entries not starting with .; to list only those, just omit the A and try ls -l.
Wondering if there is a way to download the root folder plus a bunch of sub folders (and sub folders of those folders) with all the files and keep them in their respective folders.
I've tried some Firefox plugins like FlashGot and DownThemAll, but they grab the actual web pages in addition to the files in the repository, and only if those files are visible. For example, if I don't expand all the folders and expose the files in the repository, the plugins won't detect them.
I would just expand all the folders and expose the files, but these plugins won't recognize the folders: they just download as "foldername".html, and all the files end up mixed together in one folder.
I've also tried VisualWget with recursive downloads allowed, but again, this only grabs the actual website files, not the files in the repository.
If anyone could help it'd be greatly appreciated. I've been copying them manually but there are literally thousands of files and folders so I'm looking for a quicker solution.
As a client you can only download what's accessible. You either need to know the list of files or crawl the pages for the links, which is what the Firefox plugins do.
There's no way to get a list of files on the server without access beyond HTTP (unless the server has WebDAV or exposes some other API).
I ended up getting it to work. I used the following command in Terminal.
scp -r username@hostaddress:/file/path/to/directory /path/to/my/computer/directory
-r is for recursive, so it downloads all files, directories, and subdirectories.
If you try this, be sure to run the command from your local terminal. I made the mistake of running it from the SSH connection to the server (no negative effects, just frustrating).
When writing a Nautilus script, $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS gives the path to the file whose context menu has been clicked, for instance /home/nico/test.txt.
But when the file is within a WebDAV share, the variable is empty.
Is it a bug?
How to get the path for a WebDAV file?
My script is intended to be used for files on WebDAV shares.
I have just found this list of variables:
https://help.ubuntu.com/community/NautilusScriptsHowto
The one I was looking for is $NAUTILUS_SCRIPT_SELECTED_URIS; it works on WebDAV too, returning for instance dav://admin@localhost:8080/alfresco/webdav/User%20Homes/leo/test.txt
Nautilus' $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS is only for LOCAL (mounted) files, and by design is blank for remote files, just like the positional arguments $1, $2, ...
For REMOTE files on WebDAV, Samba network shares, FTP servers (or any other location where $NAUTILUS_SCRIPT_CURRENT_URI is not of the form file://...), use $NAUTILUS_SCRIPT_SELECTED_URIS.
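A minimal script skeleton using that variable (the variable is newline-delimited; the echo here is just a stand-in for whatever the real script does with each selection):

```shell
#!/bin/sh
# Iterate over the selected items by URI rather than local path,
# so dav://, smb://, and ftp:// selections work as well as file://.
printf '%s\n' "$NAUTILUS_SCRIPT_SELECTED_URIS" |
while IFS= read -r uri; do
  [ -n "$uri" ] || continue
  echo "selected: $uri"
done
```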