curl - download multiple files from the command line only if they exist - unix

How can I download only existing files with curl from the command line? I have a command like this:
curl http://host.com/photos/IMG_4[200-950].jpg -u user:pass -o IMG_4#1.jpg
This command downloads all images from IMG_4200.jpg to IMG_4950.jpg, even if they do not exist.

Use -f (--fail):
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to enable scripts etc. to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why, and more). This flag will prevent curl from outputting that and return error 22.
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
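Applied to the command from the question, that would be:
curl -f http://host.com/photos/IMG_4[200-950].jpg -u user:pass -o IMG_4#1.jpg
If your curl version still leaves zero-byte files for some of the missing images, those can simply be deleted afterwards.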

Related

Should I always expect the IPFS objects to arrive in .tar.gz format?

As per the documentation, it is possible to specify archive and compress input variables, but regardless of what I put there, when I try downloading something:
# Some sample file:
curl -X POST "http://localhost:5001/api/v0/get?arg=/ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/security-notes&compress=false" --output ~/Desktop/security-notes.tar.gz -vv
# Some sample directory:
curl -X POST "http://localhost:5001/api/v0/get?arg=/ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc&compress=false" --output ~/Desktop/whole-folder.tar.gz -vv
I always get the same result: a tar.gz file (a different one in each case).
Actually, this is what I would expect: getting those files compressed so the workflow is always the same, and I can decompress them on arrival. However, the arguments (compress, archive) seem to be ignored entirely. My question is: should I always expect this behaviour (the contents always arriving as .tar.gz), and not take the args in the documentation into account?
(Note: it is documented that the Content-Type always arrives as text/plain, and this is true in the implementation, so the presence of that header tells me nothing I can work with.)
IPFS version: 0.12.0-rc1
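In case it is useful, a minimal sketch of the decompress-on-arrival workflow described above, assuming the endpoint really does always answer with a tar stream (GNU tar auto-detects gzip when extracting, so this should work whether or not the stream is actually compressed):
curl -X POST "http://localhost:5001/api/v0/get?arg=/ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc" | tar -x -C ~/Desktop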

GET command not found

I am in a student job where I am required to do work with a DB, but it really isn't my domain.
The documentation says to enter the line
GET /_cat/health?v
This returns the error
-bash: GET: command not found
It also offers to copy the request as curl. The command that works is
curl -XGET 'localhost:9200/_cat/health?v&pretty'
How can I make the command "GET /_cat/health?v" work?
GET is a request method of the HTTP protocol. Unless you are writing HTTP server or client software yourself, you don't have to deal with it explicitly.
The command line
curl -XGET 'localhost:9200/_cat/health?v&pretty'
tells curl to request the URL http://localhost:9200/_cat/health?v&pretty using the GET request method.
GET is the default method, you don't need to specify it explicitly.
Also, the second argument you provide to curl is not a URL. curl is nice and completes it to a correct URL, but other programs that expect URLs might not behave the same way (for various reasons). It's better to always specify complete URLs to get the behaviour you expect.
Your command line should be:
curl 'http://localhost:9200/_cat/health?v&pretty'
The apostrophes around the URL are required because it contains characters that are special to the shell (&). A string enclosed in apostrophes tells the shell to not interpret any special characters inside it.
Without the apostrophes, the shell thinks the curl command ends at the &, treats pretty as a separate command, and the result is not what you expect.
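For illustration, without the quotes the shell effectively runs something like the following two commands (curl is sent to the background at the &, and pretty is run as a separate command, which normally does not exist):
curl -XGET localhost:9200/_cat/health?v &
pretty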
Behind the scenes, curl uses HTTP to connect to the server localhost on port 9200 and sends it an HTTP request of this form (along with a few default headers):
GET /_cat/health?v&pretty HTTP/1.1
Host: localhost:9200
When you start working with Elasticsearch, one of the first things the documentation asks you to do to test your install is a GET /_cat/health?v.
They fail to tell you that this will not work in a terminal, as Ravi Sharma has explained above. Maybe the Elasticsearch team should clarify this a bit. At least they supply a Copy as cURL link. It is just frustrating for someone new at this.
The GET command is in the package libwww-perl:
sudo apt install libwww-perl
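Once that package is installed, the GET program (provided by libwww-perl as an alias of lwp-request) takes a full URL, so something like the following should work from the terminal; note it is still an HTTP client command, not something you type into Elasticsearch itself:
GET 'http://localhost:9200/_cat/health?v&pretty'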

CURL Command To Create A File On Server

I have a mini program/server built on one of my computers (Machine1) and I am trying to create or overwrite a file through cURL on another computer (Machine2). So Machine2 is connected to Machine1. I've been looking through cURL's documentation for a command that will do this but have had no luck, and the same goes for Stack Overflow.
https://curl.haxx.se/docs/manpage.html
I have also tried the examples on this SO post:
HTTP POST and GET using cURL in Linux
Any idea what the command might be from the command prompt (the equivalent of a POST)? So far I have tried -O, -K, -C and a multitude of others, which have not worked.
On the command line, all you need to do is use curl --form to send a multipart/form-data POST request:
curl --form "testfile=@thefilename.jpg" http://<Machine2>/<Path>
testfile is the field name used for the form; if you don't care about it, just use any English word.
The @ is what makes the file thefilename.jpg get attached to the POST as a file upload. Refer to the curl man page.
On the server side, the URL http://<Machine2>/<Path> needs to be listened on. When curl sends the POST request above, the server-side program should receive it, extract the attached file (thefilename.jpg), and save it to disk.
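If the server-side program also handles HTTP PUT (an assumption, this is plain curl functionality rather than part of the answer above), another option is -T/--upload-file, which sends the raw file body to the given URL:
curl -T thefilename.jpg http://<Machine2>/<Path>/thefilename.jpg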

TCP network communication security risks

I am developing an application that can establish a server-client connection using QTcp*
The client sends the server a number.
The received string is checked for its length and content (is it really a number?).
If everything is OK, then the server replies back with a file path (which depends on the sent number).
The client checks if the file exists and if it is a valid image. If the file complies with the rules, it executes a command on the file.
What security concerns exist on this type of connection?
The program is designed for Linux systems and the external command on the image file is executed using QProcess. If the string sent contained something like (do not run the following command):
; rm -rf /
then it would be blocked by the file-not-found check (because it isn't a file path). If there were no check of the validity of the sent string, then the following command would be executed:
command_to_run_on_image ; rm -rf /
which would cause panic! But this cannot happen.
So, is there anything I should take into consideration?
If you open a console and type command ; rm -rf /*, something bad would likely happen, because the command line is processed by the shell. The shell parses the text input, e.g. splits commands at the ; delimiter and splits arguments on spaces, then executes the parsed commands with the parsed arguments using the system API.
However, when you use process->start("command", QStringList() << "; rm -rf /*");, there is no such danger. QProcess will not invoke the shell; it will execute command directly using the system API. The result will be similar to running command "; rm -rf /*" (quotes included) in the shell.
So, you can be sure that only your command will be executed and the parameter will be passed to it as-is. The only danger is the possibility for an attacker to call the command with any file path they could construct. The consequences depend on what the command does.
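A rough shell analogy of the difference, using harmless commands instead of rm:
sh -c 'touch image.jpg ; echo injected'    # the shell splits at ';' and runs both commands
touch 'image.jpg ; echo injected'          # a single literal argument: one oddly named file is created, nothing else runs
The second line is what QProcess effectively does: the whole string reaches the program as one argument, with no shell in between to interpret the ';'.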

Why would HTTP transfer via wget be faster than lftp/pget?

I'm building software that needs to do massive amounts of file transfer via both HTTP and FTP. Oftentimes, I get faster HTTP downloads with a multi-connection download accelerator like axel or lftp with pget. In some cases, I've seen 2x-3x faster file transfers using something like:
axel http://example.com/somefile
or
lftp -e 'pget -n 5 http://example.com/somefile;quit'
vs. just using wget:
wget http://example.com/somefile
But other times, wget is significantly faster than lftp. Strangely, this is true even when I use lftp's pget with a single connection, like so:
lftp -e 'pget -n 1 http://example.com/somefile;quit'
I understand that downloading a file via multiple connections won't always result in a speedup, depending on how bandwidth is constrained. But: why would it be slower? Especially when calling lftp/pget with -n 1?
Is it possible that the HTTP server is compressing the stream using gzip? I can't remember whether wget handles gzip Content-Encoding or not. If it does, then this might explain the performance boost. Another possibility is that there is an HTTP cache somewhere in the pipeline. You can try something like
wget --no-cache --header="Accept-Encoding: identity" http://example.com/somefile
and compare this to your FTP-based transfer times.
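To check whether compression is actually involved, one option (a sketch using the example URL from the question; some servers answer HEAD requests differently from GET, so treat it as a rough check only) is:
curl -sI -H "Accept-Encoding: gzip" http://example.com/somefile | grep -i content-encoding
If that prints Content-Encoding: gzip, a client that supports gzip transfers less data over the wire, which could account for part of the difference.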

Resources