I am working with the InvokeHTTP processor and need to make a POST request. In curl terms there are three parameters: -H, -d, -F.
-H name/value pairs are passed as flowfile attributes (attribute name and value).
-d is passed through the flowfile content in the required form.
How do I pass the -F parameter? I want to use the Rocket.Chat REST API from NiFi.
It sounds like you are discussing curl flags, not HTTP-specific request values. For the record, the -F flag overrides -d in curl commands.
If you are attempting multipart/form-data uploads, you may be interested in the work done in NIFI-7394 to improve that handling in the InvokeHTTP processor.
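For reference, this is what the equivalent multipart request looks like in plain curl. This is only a sketch assuming Rocket.Chat's rooms.upload endpoint and its X-Auth-Token / X-User-Id authentication headers; the host, room id, token variables, and file name are placeholders:

curl -X POST \
  -H "X-Auth-Token: $AUTH_TOKEN" \
  -H "X-User-Id: $USER_ID" \
  -F "file=@./report.pdf" \
  -F "msg=Daily report" \
  https://chat.example.com/api/v1/rooms.upload/$ROOM_ID

Each -F option becomes one part of the multipart/form-data body, which is exactly the kind of request InvokeHTTP could not build before that work.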
I'm struggling with this problem and could not find information on Google, which is why I can't understand what they are talking about.
Here is the situation:
I was given an HTTPS address and a port. This address runs (as they called it, and I copy here) "TCP/IP using SSL. It's a 'real time' protocol that will give you real time data of the channels you subscribe".
So, I was given a document where they specify how to connect to some channels. It reads:
This protocol is based on JSON format. The default port for this application is XXXXX, and the connection will be established using a SSL TCP-IP connection.
Commands sent and received have the following format:
command-id:[message-id[+]]:[channel]:[data]
command-id = Valid commands are LOGIN, JOIN, LEAVE, PING, ACK, ERROR, REPLY, CMD, JSON (mandatory)
[message-id] = Identification for message (optional on client)
[channel] = channel name for command (optional on client)
[data] = JSON formatted data (an example is copied below) (mandatory)
All the commands use a \r\n (CR + LF) at the end of each line
Example of [data] = {"user":"XX","password":"YYY","app":"ZZZ","app_ver":"zzz","protocol":"xxx","protocol_ver":"xxxx"}
Also, I provide an example of a complete command:
LOGIN:::{"user":"myname","password":"mypassword","app":"Manual Test", "app_ver":"1.0.0" ,
"protocol":"CustomProtocolABCD", "protocol_ver":"1.0.0"}
Here is what I tried:
Postman: I tried forming the commands, message, and data with parameters, with the body, with URL encoding, everything. I only get "Error: Parse Error: Expected HTTP/" from Postman.
curl: I tried this as well, and it prompted me with the odd message "Expected HTTP 1.1 but response is HTTP 0.9". OK, I forced --http0.9 and finally got a response of a similar shape:
ERROR:1::{"reason":"Wrong number of parameters"}
Here is the question: how should I test this so that I send the right number of parameters the server is expecting? Below is my curl command, with credentials erased of course.
curl https://eu.xxxxxxxxxx.com:11001 -H 'Content-Type: application/json' -k --verbose -d 'command-id:LOGIN' -d 'message-id:100' -d 'channel:' --http0.9 -d 'data:{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}' -i
NOTE: The password contains a "%" symbol. I don't know if this is causing a problem in the encoding, but I'm very lost here.
Can someone help me by pointing to any documentation about this kind of communication? I'm supposed to build an app consuming this information on an embedded device (Qt Creator), but I can't test the endpoint to receive the initial JSON data and then program the app.
Any help is welcome. Thanks, and excuse my English if I made any mistakes.
Your data is not a valid JSON string.
This is the post data you are posting:
command-id:LOGIN&message-id:100&channel:&data:{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}
This part of the post data is valid JSON:
'{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}
But your header says the whole body of the packet is Content-Type: application/json.
Your actual content is mixed data, where only the data field is JSON.
This is the non-JSON part:
command-id:LOGIN&message-id:100&channel:&data:
That looks like a mishmash of post data and JSON combined.
I can only guess, but I would think the data should look like this:
-d '{"command-id":"LOGIN","message-id":100,"channel":{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}}'
Which translates to this:
obj(
    'command-id' => 'LOGIN',
    'message-id' => 100,
    'channel' => obj(
        'user' => 'XXXXXXX',
        'password' => 'xxxxxxx',
        'app' => 'ManualTest',
        'app_ver' => '1.0.0',
        'protocol' => 'CustomProtocolABCD',
        'protocol_ver' => '1.0.0',
    ),
)
But I am thinking curl is not the right client for this raw TCP/IP protocol.
You might be able to use curl if you could send a body with no HTTP headers at all, where the body is the raw packet over the SSL connection.
What programming language(s) do you use? I think I could do this in PHP with sockets and a little better documentation.
The solution is to use
openssl s_client -connect host.com:11111
Additionally, I had to use \" instead of only " to get past the JSON encoding on my server.
It was indeed an SSL socket over TCP/IP.
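For anyone testing by hand, here is a minimal sketch of the LOGIN exchange (host, port, and credentials are placeholders):

printf 'LOGIN:::{"user":"myname","password":"mypassword","app":"Manual Test","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}\r\n' \
| openssl s_client -quiet -connect eu.example.com:11001

The -quiet flag suppresses the certificate chatter and keeps s_client reading after stdin closes, so the server's reply is printed.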
In Qt I'm using the QSslSocket class.
Additional information for whoever needs it: this kind of socket protocol usually has a ping routine, where you need to ping the server with a custom command to keep the connection alive.
In Qt I'm using threads, and one of them pings the server once the connection is established.
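The same keep-alive idea can be tested from the shell. Note this is only a sketch: the exact PING command format below is hypothetical and must be checked against the protocol document:

( printf 'LOGIN:::{"user":"myname","password":"mypassword","app":"Manual Test","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}\r\n'
  # hypothetical keep-alive: send a PING command every 30 seconds
  while sleep 30; do printf 'PING:::{}\r\n'; done ) \
| openssl s_client -quiet -connect eu.example.com:11001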
I have been struggling to replicate an issue we are facing in production. The clients are sending multiple headers with the same name via a cookie, and we are trying to troubleshoot this via curl.
The intent is to send TWO header values for the same header name so that the application (myhost below) can intercept them via this curl attempt. However, when I attempt something like the following, the "x-targetted-group" value doesn't resolve on the server. If I send TWO headers using -H "x-targetted-group:Group1" -H "x-targetted-group:Group2", the server only gets the first one. How can I send both?
curl -i -H "Accept: application/json" -H "x-targetted-group:Group1,Group2" https://myhost:8990/"
curl won't let you, so the answer is: you can't. Later versions of wget won't either.
If you want to experiment with odd possibly malformed HTTP requests, you can just craft your own - it's all just plain text. Example using netcat:
> cat request.txt # I.e. the contents of the file request.txt are:
GET /
Accept: application/json
X-targetted-group: Group1
X-targetted-group: Group2
> nc myhost 8990 <request.txt
The HTTP spec says lines have to end in CRLF (\r\n), so the above might not be accepted by your server unless the text file request.txt uses CRLF line termination (most text editors have an option to save with those line endings).
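One way to guarantee the CRLF endings is to generate the file with printf instead of a text editor, e.g.:

printf 'GET / HTTP/1.1\r\nHost: myhost\r\nAccept: application/json\r\nX-targetted-group: Group1\r\nX-targetted-group: Group2\r\nConnection: close\r\n\r\n' > request.txt
nc myhost 8990 <request.txt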
Aside: What HTTP spec says about multiple headers with the same name (they are allowed):
Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list [i.e., #(values)]. It MUST be possible to combine the multiple header fields into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field-value to the first, each separated by a comma. The order in which header fields with the same field-name are received is therefore significant to the interpretation of the combined field value, and thus a proxy MUST NOT change the order of these field values when a message is forwarded.
I used to perform a lot of bad-syntax attacks on HTTP servers. By design, curl and wget won't let you do much bad-syntax work.
You should try the low-level combination of netcat + printf.
With printf you write your HTTP query, and netcat manages the socket connection (for SSL connections you can replace netcat with openssl s_client).
That would look like (for a basic query):
printf 'GET /my/url?foo=bar HTTP/1.1\r\n'\
'Host: www.example.com\r\n'\
'\r\n'\
| nc -q 2 127.0.0.1 80
And for a more complex one (repeated headers and the obsolete obs-fold header continuation syntax; note also how to write a % character in printf):
printf 'GET /my/url?foo=bar&percent_char=%% HTTP/1.1\r\n'\
'Host: www.example.com\r\n'\
'x-foo-header: value1\r\n'\
'x-foo-header: value2\r\n'\
'x-foo-header: value3, value4\r\n'\
'x-foo-header:\t\tval5\r\n'\
' val6\r\n'\
'User-agent: tests\r\n'\
'\r\n'\
| nc -q 2 127.0.0.1 80
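For TLS endpoints the same pattern works with openssl s_client in place of netcat (a sketch; the host and URL are placeholders):

printf 'GET /my/url?foo=bar HTTP/1.1\r\n'\
'Host: www.example.com\r\n'\
'Connection: close\r\n'\
'\r\n'\
| openssl s_client -quiet -connect www.example.com:443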
Once you get used to it, it's a pleasure: no limitations.
This is a limitation of the HTTP protocol itself. You are not allowed to send multiple headers with the same name unless that header's value is defined as a comma-separated list of values. Take a look at this answer.
How can I send different HTTP request methods from my browser? I am using Chrome, but any other would do.
For example, I would like to try out TRACE or OPTIONS methods for educational purposes. Any idea how I can do that?
Example:
request message:
OPTIONS * HTTP/1.1
Host: www.joes-hardware.com
Accept: *
response message:
HTTP/1.1 200 OK
Allow: GET, POST, PUT, OPTIONS
Content-Length: 0
Browsers themselves do not issue any requests with verbs (read: methods) other than GET, POST, and HEAD. By the powers of ajax though, they can be made to use a wealth of other methods through the XmlHttpRequest object. However, you will be out of luck with the TRACE verb:
If method is a case-sensitive match for CONNECT, TRACE, or TRACK, throw a "SecurityError" exception and terminate these steps.
If you do not want or do not need to be bound to a browser, there are quite a few options. For starters, Perl's libwww library comes with the GET, HEAD, and POST commandline utilities that are quite neat to use.
A more versatile tool is cURL, a fairly complete solution for a multitude of protocols. Its original purpose was simply to catch a file from a URL (catch URL = cURL), which does not necessarily mean from an HTTP server. With a well-formed URL, cURL can download an attachment from an e-mail on an IMAP server. You will be most interested in the -X option of cURL's command-line interface, which allows you to specify arbitrary verbs for an HTTP request. But as mighty as it may be, there will probably be no way to issue that OPTIONS * HTTP/1.1 request with it.
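For example, a plain OPTIONS request against a path (as opposed to the bare * form) is simply:

curl -i -X OPTIONS http://www.joes-hardware.com/

The -i flag prints the response headers, so an Allow: line like the one in the example above shows up directly.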
As a last-ditch effort, I can wholeheartedly recommend netcat, which accepts piped input and (in the ncat variant) can even handle encryption, which is way more comfortable than openssl's s_client. You may already know that you can emulate HTTP requests over telnet (if you type fast enough), but I believe you will find netcat with a heredoc way more comfortable:
$ nc -v localhost 80 <<EOD
GET / HTTP/1.1
Host: localhost
Connection: close
EOD
netcat speaks no HTTP itself, so you alone are responsible for the syntactical correctness of your requests. On the other hand, this allows you total freedom to experiment around.
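For instance, the OPTIONS * request from the question, which curl cannot produce, is straightforward this way (the blank line terminating the header block is required, and a lenient server will tolerate the heredoc's bare LF line endings):

$ nc -v www.joes-hardware.com 80 <<EOD
OPTIONS * HTTP/1.1
Host: www.joes-hardware.com
Connection: close

EOD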
How can I port this wget incantation to Scala:
wget --keep-session-cookies --save-cookies cookies.txt --post-data 'password=xxxx&username=zzzzz' http://server.com/login.jsp
wget --load-cookies cookies.txt http://server.com/download.something
I want to write a tiny, portable script, no external libraries etc.
Can that be done easily ?
Your two main requirements appear to be:
Auth with some body text
Maintain the session cookies between requests.
Since Scala itself doesn't have much support for HTTP in the core library besides scala.io.Source, you're pretty much stuck with HttpURLConnection from Java itself. It looks like this site already has some examples of using HttpURLConnection in ways like this:
Reusing HttpURLConnection so as to keep session alive
I have a program already written in gawk that downloads a lot of small bits of info from the internet. (A media scanner and indexer)
At present it launches wget to get the information. This is fine, but I'd like to simply reuse the connection between invocations. It's possible a run of the program might make between 200 and 2000 calls to the same API service.
I've just discovered that gawk can do networking and found geturl
However, the advice at the bottom of that page is well heeded: I can't find an easy way to read the last line and keep the connection open.
As I'm mostly reading JSON data, I can set RS="}" and exit when the body length reaches the expected Content-Length. This might break on trailing whitespace, though. I'd like a more robust approach. Does anyone have a nicer way to implement sporadic HTTP requests in awk that keep the connection open? Currently I have the following structure...
con="/inet/tcp/0/host/80";
send_http_request(con);
RS="\r\n";
read_headers();
# now read the body - but do not close the connection...
RS="}"; # for JSON
while ( con |& getline bytes ) {
body = body bytes RS;
if (length(body) >= content_length) break;
print length(body);
}
# Do not close con here - keep open
It's a shame this one little thing seems to be spoiling all the potential here. Also, in case anyone asks:
awk was originally chosen for historical reasons - there were not many other language options on this embedded platform at the time.
Gathering up all of the URLs in advance and passing them to wget will not be easy.
Re-implementing in Perl/Python etc. is not a quick solution.
I've looked at piping URLs into a named pipe and on to wget -i -; that doesn't work. Data gets buffered and unbuffer is not available - also, I think wget gathers up all the URLs until EOF before processing.
The data is small so lack of compression is not an issue.
The problem with connection reuse comes from the HTTP 1.0 standard, not from gawk. To reuse the connection you must either use HTTP 1.1 or try other non-standard solutions for HTTP 1.0. Don't forget to add the Host: header to your HTTP/1.1 requests, as it is mandatory.
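Concretely, each request the script sends should look like the following, terminated by an empty line (path and host are placeholders; keep-alive is the default in HTTP/1.1, so the explicit header is optional):

GET /api/item/42 HTTP/1.1
Host: api.example.com
Connection: keep-alive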
You're right about the lack of robustness when reading the response body. For line-oriented protocols this is not an issue. Moreover, even when using HTTP 1.1, if your script blocks waiting for more data when it shouldn't, the server will, again, close the connection due to inactivity.
As a last resort, you could write your own HTTP retriever in whatever language you like, one that reuses connections (all to the same remote host, I presume) and also inserts a special record separator for you. Then you could control it from the awk script.