I'm struggling with this question and couldn't find any information on Google, which is why I don't understand what they are talking about. Please bear with me.
I was given an https address and a port. This address runs (as they call it, and I copy here) "TCP/IP using SSL. It's a 'real time' protocol that will give you real time data of the channels you subscribe to."
So, I was given a document where they specify how to connect to some channels. It reads:
This protocol is based on the JSON format. The default port for this application is XXXXX, and the connection will be established using an SSL TCP/IP connection.
Commands sent and received have the following format:
command-id:[message-id[+]]:[channel]:[data]
command-id = Valid commands are LOGIN, JOIN, LEAVE, PING, ACK, ERROR, REPLY, CMD, JSON (mandatory)
[message-id] = Identification for message (optional on client)
[channel] = channel name for command (optional on client)
[data] = JSON formatted data <-- (an example of this data is copied below) (mandatory)
All the commands use a \r\n (CR + LF) at the end of each line
Example of [data] = {"user":"XX", "password":"YYY", "app":"ZZZ", "app_ver":"zzz",
"protocol":"xxx", "protocol_ver":"xxxx"}
Also, I provide an example of a complete command:
LOGIN:::{"user":"myname","password":"mypassword","app":"Manual Test", "app_ver":"1.0.0" ,
"protocol":"CustomProtocolABCD", "protocol_ver":"1.0.0"}
Here is what I tried:
Postman: I tried forming the commands, message, and data with parameters, with a body, URL encoding, everything. I only get "Error: Parse Error: Expected HTTP/" from Postman.
curl: I tried this as well, and it prompted me with the odd message "Expected HTTP 1.1 but response is HTTP 0.9". OK, I forced this with --http0.9 and I finally got a response with a similar shape:
ERROR:1::{"reason":"Wrong number of parameters"}
Here is the question: how should I test this so that I send the number of parameters the server is expecting? I provide my curl command below, with credentials erased, of course.
curl https://eu.xxxxxxxxxx.com:11001 -H 'Content-Type: application/json' -k --verbose -d 'command-id:LOGIN' -d 'message-id:100' -d 'channel:' --http0.9 -d 'data:{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}' -i
NOTE: The password contains a "%" symbol. I don't know if this causes a problem with the encoding, but I'm very lost here.
Can someone point me to any documentation about this kind of communication? I'm supposed to build an app that consumes this information on an embedded device (Qt Creator), but I can't test the endpoint to receive the initial JSON data and then program the app.
Any help is welcome. Thanks, and excuse my English if I made any mistakes.
Your data is not a valid JSON string.
This is the post data you are posting:
command-id:LOGIN&message-id:100&channel:&data:{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}
This part of the post data is valid JSON:
'{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}
But your header claims the whole body of the request is Content-Type: application/json.
Your actual content is mixed data where only the data field is JSON.
This is the non-JSON part:
command-id:LOGIN&message-id:100&channel:&data:
That looks like a mishmash of post data and JSON combined.
I can only guess, but going by the format in your document, I would think the data should look like this:
-d '{"command-id":"LOGIN","message-id":100,"channel":"","data":{"user":"XXXXXXX","password":"xxxxxxx","app":"ManualTest","app_ver":"1.0.0","protocol":"CustomProtocolABCD","protocol_ver":"1.0.0"}}'
Which translates to this:
obj(
    'command-id' => 'LOGIN',
    'message-id' => 100,
    'channel' => '',
    'data' =>
        obj(
            'user' => 'XXXXXXX',
            'password' => 'xxxxxxx',
            'app' => 'ManualTest',
            'app_ver' => '1.0.0',
            'protocol' => 'CustomProtocolABCD',
            'protocol_ver' => '1.0.0',
        ),
)
But I am thinking curl is not a valid client for this TCP/IP protocol, since curl always speaks HTTP and this server does not.
You might be able to use curl only if you could send a body with no HTTP header at all, where the body is the raw payload over the SSL connection.
What programming language(s) do you use? I think I could do this in PHP with sockets and a little better documentation.
The solution is to use
openssl s_client -connect host.com:11111
Additionally, I had to use \" instead of plain " to get the JSON past the encoding on my server.
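Putting those two together, the manual test looks roughly like this (host, port, and credentials are the placeholders from the question; the -crlf option makes s_client send CR+LF line endings, which this protocol requires):

openssl s_client -crlf -connect eu.xxxxxxxxxx.com:11001
LOGIN:::{\"user\":\"XXXXXXX\",\"password\":\"xxxxxxx\",\"app\":\"ManualTest\",\"app_ver\":\"1.0.0\",\"protocol\":\"CustomProtocolABCD\",\"protocol_ver\":\"1.0.0\"}

You type the LOGIN line after the certificate chatter finishes, and the server's reply comes back on the same connection.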
It was indeed an SSL socket over TCP/IP.
In Qt I'm using the QSslSocket class.
Additional information for whoever needs it: sockets like this usually have a ping routine where you need to ping the server with a custom command to keep the connection alive.
In Qt I'm programming with threads, and one of them will ping the server once the connection is established; a single-threaded sketch of the whole flow is below.
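As a rough illustration (not the exact production code): the host, port, credentials, PING payload, and 30-second interval below are placeholders invented around the question, and certificate checking is disabled purely for testing, like curl's -k.

#include <QCoreApplication>
#include <QSslSocket>
#include <QTimer>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QSslSocket socket;
    QTimer pingTimer;

    // Testing only: skip certificate verification (the equivalent of curl -k).
    socket.setPeerVerifyMode(QSslSocket::VerifyNone);

    // Once the TLS handshake completes, send the LOGIN command.
    // One command per line, terminated with CR+LF as the protocol requires.
    QObject::connect(&socket, &QSslSocket::encrypted, [&]() {
        socket.write("LOGIN:::{\"user\":\"XXXXXXX\",\"password\":\"xxxxxxx\","
                     "\"app\":\"ManualTest\",\"app_ver\":\"1.0.0\","
                     "\"protocol\":\"CustomProtocolABCD\","
                     "\"protocol_ver\":\"1.0.0\"}\r\n");
        pingTimer.start(30000);   // keep-alive interval: an assumption
    });

    // The keep-alive ping; the exact payload depends on the server's spec.
    QObject::connect(&pingTimer, &QTimer::timeout, [&]() {
        socket.write("PING:::\r\n");
    });

    // Responses arrive as CR+LF terminated lines.
    QObject::connect(&socket, &QSslSocket::readyRead, [&]() {
        while (socket.canReadLine())
            qDebug() << socket.readLine().trimmed();
    });

    socket.connectToHostEncrypted("eu.xxxxxxxxxx.com", 11001);
    return app.exec();
}

A QTimer on the socket's thread is enough for the keep-alive; a dedicated thread, as described above, is only needed if the rest of the app blocks the event loop.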
I am working with the InvokeHTTP processor and need to make a POST request. In curl terms it has three parameters: -H, -d, and -F.
-H values are passed as attribute name/value pairs.
-d is passed through the flowfile content in the required form.
How do I pass the -F parameter? I want to use the Rocket.Chat REST API from NiFi.
It sounds like you are discussing curl flags, not HTTP-specific request values. For the record, the -F flag overrides -d in curl commands.
If you are attempting multipart/form-data uploads, you may be interested in the work done in NIFI-7394 to improve that handling in the InvokeHTTP processor.
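For context, this is what a multipart -F request looks like in curl terms; the URL, headers, and field name below are placeholders, so check the Rocket.Chat REST API documentation for the exact endpoint and auth headers:

curl -H 'X-Auth-Token: YOUR_TOKEN' -H 'X-User-Id: YOUR_ID' \
     -F 'file=@photo.png' \
     https://chat.example.com/api/v1/rooms.upload/ROOM_ID

Each -F part becomes one section of the multipart/form-data body, which is exactly the kind of body the NIFI-7394 work addresses.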
I'm implementing a minimal HTTPS layer for my embedded project, where I'm using mbedTLS for TLS and hard-coding HTTP headers to talk to HTTPS servers.
It works fine with normal websites. But so far my implementation detects the end of an HTTPS response by checking whether the last byte read is \n.
// After each TLS read, assume the response is complete when the
// last byte received is a newline.
if( ret > 0 && output[len-1] == '\n' )
{
    ret = 0;          // report success
    output[len] = 0;  // NUL-terminate what we have
    break;            // stop reading
}
This, however, does not always work, for obvious reasons. I tried openssl s_client, and it behaves the same way: if an HTTP response terminates with \n, then s_client returns immediately after fetching all the data. Otherwise it blocks forever, waiting for more data.
A real browser seems to handle this properly. Is there anything I can do beyond setting a timeout?
How to tell if an HTTP response terminates in C...
But so far my implementation detects the end of HTTPS response by checking if the last byte read is \n...
This, however, is not always working for obvious reason...
HTTP calls out \r\n, and not \n. See RFC 2616, Hypertext Transfer Protocol - HTTP/1.1 and page 15:
HTTP/1.1 defines the sequence CR LF as the end-of-line marker for all
protocol elements except the entity-body (see appendix 19.3 for
tolerant applications). The end-of-line marker within an entity-body
is defined by its associated media type, as described in section 3.7.
CRLF = CR LF
Now, what various servers send is a whole different ballgame. There will be duplicate end-of-line markers, missing end-of-line markers, and incorrect end-of-line markers. It's the wild, wild west.
You might want to look at a reference implementation of an HTTP parser. If so, check out libevent's or cURL's parsers and how they maintain their state machine.
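To give a feel for the minimum such a state machine has to track, here is a rough C++ sketch (an illustration, not libevent's or cURL's actual code) that decides whether a buffered response is complete from its Content-Length header; chunked transfer encoding and case-insensitive header matching are deliberately glossed over:

#include <string>
#include <cstdlib>

// Returns true once `resp` (everything received so far) holds a complete
// response. Assumes a Content-Length-delimited body; chunked and
// connection-close-delimited bodies would need additional states.
bool response_complete(const std::string &resp)
{
    // Headers end at the first blank line (CRLF CRLF per RFC 2616).
    std::size_t hdr_end = resp.find("\r\n\r\n");
    if (hdr_end == std::string::npos)
        return false;                       // still inside the headers

    std::size_t body_start = hdr_end + 4;

    // Real parsers match header names case-insensitively; simplified here.
    std::size_t cl = resp.find("Content-Length:");
    if (cl != std::string::npos && cl < hdr_end) {
        long expected = std::strtol(resp.c_str() + cl + 15, nullptr, 10);
        return resp.size() - body_start >= static_cast<std::size_t>(expected);
    }

    // No Content-Length: the body may be chunked (watch for the final
    // "0\r\n\r\n") or delimited by the server closing the connection.
    return false;
}

The point is that "end of response" is a property of the headers and framing, never of the last byte happening to be \n.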
I'm pulling data from a server but need to know the type of data before I pull it. I know I can look at content-type in the response header, and I've looked into using
curl --head http://x.com/y/z
however, some servers do not support the HEAD method (I get a 501 Not Implemented response).
Is it possible to somehow do a GET with curl, and immediately disconnect after all headers have been received?
Check out the following answer:
https://stackoverflow.com/a/5787827
Streaming. UNIX philosophy and pipes: they are data streams. Since curl and GET are Unix filters, ending the receiving pipe (dd) will terminate curl or GET early (SIGPIPE). There is no telling whether the server will be smart enough to stop transmission, but at the TCP level I suppose it would stop retrying packets once there is no more response. #sehe
Using this method you should be able to download as many bytes as you want and then cancel the request. You could also work some magic to terminate after receiving a blank line, which marks the end of the headers, as sketched below.
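For instance (placeholder URL; the server may still push a few body bytes before curl notices the SIGPIPE):

curl -siN http://x.com/y/z | sed '/^\r\{0,1\}$/q'

Here -i includes the response headers in the output, -N turns off curl's output buffering, and sed quits at the first blank line, which closes the pipe and terminates curl right after the headers.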
I have a program already written in gawk that downloads a lot of small bits of info from the internet. (A media scanner and indexer)
At present it launches wget to get the information. This is fine, but I'd like to simply reuse the connection between invocations. It's possible a run of the program might make between 200 and 2000 calls to the same API service.
I've just discovered that gawk can do networking and found geturl
However, the advice at the bottom of that page is well heeded: I can't find an easy way to read the last line and keep the connection open.
As I'm mostly reading JSON data, I can set RS="}" and exit when the body length reaches the expected Content-Length. This might break on trailing whitespace, though; I'd like a more robust approach. Does anyone have a nicer way to implement sporadic HTTP requests in awk that keep the connection open? Currently I have the following structure...
con="/inet/tcp/0/host/80";
send_http_request(con);
RS="\r\n";
read_headers();
# now read the body - but do not close the connection...
RS="}"; # for JSON
while ( con |& getline bytes ) {
body = body bytes RS;
if (length(body) >= content_length) break;
print length(body);
}
# Do not close con here - keep open
It's a shame this one little thing seems to be spoiling all the potential here. Also, in case anyone asks :) ..
awk was originally chosen for historical reasons - there were not many other language options on this embedded platform at the time.
Gathering up all of the URLs in advance and passing to wget will not be easy.
re-implementing in perl/python etc is not a quick solution.
I've looked at piping URLs to a named pipe and into wget -i -, but that doesn't work: the data gets buffered, and unbuffer is not available. Also, I think wget gathers up all the URLs until EOF before processing them.
The data is small so lack of compression is not an issue.
The problem with connection reuse comes from the HTTP/1.0 standard, not gawk. To reuse the connection you must either use HTTP/1.1 or try some other non-standard solution for HTTP/1.0. Don't forget to add the Host: header to your HTTP/1.1 request, as it is mandatory; a minimal keep-alive request is shown below.
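For illustration (placeholder host and path), the request would look like this, followed by the blank line that ends the headers:

GET /api/item/42 HTTP/1.1
Host: api.example.com
Connection: keep-alive

In HTTP/1.1 persistent connections are the default, so the Connection: keep-alive line is merely explicit.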
You're right about the lack of robustness when reading the response body. For line-oriented protocols this is not an issue. Moreover, even when using HTTP/1.1, if your script blocks waiting for more data when it shouldn't, the server will, again, close the connection due to inactivity.
As a last resort, you could write your own HTTP retriever in whatever language you like which reuses connections (all to the same remote host, I presume) and also inserts a special record separator for you. Then you could control it from the awk script, as in the sketch below.
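To make that concrete, here is a rough sketch of such a helper in C++ (plain POSIX sockets and unencrypted HTTP; the host name, port, and the 0x1E record separator are illustrative assumptions, and chunked encoding plus all serious error handling are omitted). It reads one path per line on stdin, fetches each over a single persistent HTTP/1.1 connection, and writes the body followed by the separator:

#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <string>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

// Resolve the host and open a connected TCP socket, or return -1.
static int dial(const char *host, const char *port)
{
    struct addrinfo hints = {}, *res = nullptr;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main()
{
    const char *host = "api.example.com";   // placeholder host
    int fd = dial(host, "80");
    if (fd < 0)
        return 1;

    char path[1024];
    while (fgets(path, sizeof path, stdin)) {
        path[strcspn(path, "\r\n")] = '\0';

        // HTTP/1.1 with an explicit Host header; keep-alive is the default.
        std::string req = std::string("GET ") + path + " HTTP/1.1\r\n"
                          "Host: " + host + "\r\n\r\n";
        write(fd, req.data(), req.size());

        // Read headers byte by byte until the blank line; grab Content-Length.
        std::string hdrs;
        char c;
        while (hdrs.find("\r\n\r\n") == std::string::npos && read(fd, &c, 1) == 1)
            hdrs += c;
        long len = 0;
        std::size_t p = hdrs.find("Content-Length:");   // case-sensitivity glossed over
        if (p != std::string::npos)
            len = std::strtol(hdrs.c_str() + p + 15, nullptr, 10);

        // Read exactly the advertised body length, then emit body + separator.
        std::string body(len, '\0');
        for (long got = 0; got < len; ) {
            ssize_t n = read(fd, &body[got], (size_t)(len - got));
            if (n <= 0)
                break;
            got += n;
        }
        fwrite(body.data(), 1, body.size(), stdout);
        fputc(0x1E, stdout);   // record separator the awk caller can use as RS
        fflush(stdout);
    }
    close(fd);
    return 0;
}

The awk script would then start this as a coprocess, print paths to it, and read whole bodies back with RS = "\036".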