Empty payload after GET request to a server page in ESP8266 Lua - tcp

I'm getting a nil payload while trying to get data from a page using an ESP8266 WiFi module with Lua.
Here's my code:
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "password")
wifi.sta.connect()

tmr.alarm(1, 10000, 1, function()
    if wifi.sta.getip() == nil then
        print("IP unavailable, waiting...")
    else
        foo()
    end
end)

function foo()
    print("Inside foo function " .. node.heap())
    conn = nil
    conn = net.createConnection(net.TCP, 0) -- 30 second timeout on the server side
    conn:on("receive", function(conn, payload)
        -- local buf = ""
        startRead = false
        gpioData = ""
        print("payload : " .. #payload)
        for i = 1, #payload do
            print(i)
        end
    end)
    conn:connect(80, "server.co.in")
    conn:on("connection", function(conn, payload)
        print("Server Connected, sending event")
        conn:send("GET /mypage?id=deadbeef HTTP/1.1 200 OK\r\nHost: server.co.in\r\nConnection: keep-alive\r\nAccept: */*\r\n\r\n")
    end)
    conn:on("sent", function(conn)
        print("Closing server connection")
        conn:close()
    end)
end
I'm using NodeMCU Lua, and I guess the behavior would be the same even if I used the Arduino framework.
NodeMCU custom build by frightanic.com
branch: master
commit: 22e1adc4b06c931797539b986c85e229e5942a5f
SSL: false
modules: adc,bit,cjson,file,gpio,http,i2c,mdns,mqtt,net,node,ow,struct,tmr,uart,websocket,wifi
build built on: 2017-05-03 11:24
powered by Lua 5.1.4 on SDK 2.0.0(656edbf)
I'm able to see all the requests on my server, which means the request code is OK, but the payload/response comes out blank.
The output is completely blank.
Please help.

conn:send("GET /mypage?id=deadbeef HTTP/1.1 200 OK\r\nHost: server.co.in\r\nConnection: keep-alive\r\nAccept: */*\r\n\r\n") end)
This is not a valid HTTP request. It looks like a mix of an HTTP request and an HTTP response. The server might simply close the connection because it does not understand it. A valid HTTP request would look like this:
GET /mypage?id=deadbeef HTTP/1.1\r\n
Host: ...\r\n
\r\n
Apart from that, you are using HTTP/1.1 and even explicitly request HTTP persistent connections (Connection: keep-alive), although this behavior is implicit with HTTP/1.1 anyway. Because of this, you cannot expect the response to be followed immediately by a connection close, as your code currently assumes. Also, because of HTTP/1.1, you need to deal with HTTP chunked transfer encoding.
The easiest way to avoid this complexity is to use HTTP/1.0 instead, and either omit the Connection header or explicitly set it to close. If you really do want to handle the complexity, please study the standard carefully.
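If you do stay on HTTP/1.1, the chunked framing mentioned above has to be decoded by hand when reading from a raw socket. A minimal sketch in Python (the sample body bytes are made up for illustration, and trailers are ignored):

```python
def decode_chunked(data: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-encoded body.

    Each chunk is "<hex length>\r\n<bytes>\r\n"; a zero-length
    chunk terminates the body.
    """
    body = b""
    pos = 0
    while True:
        # Chunk-size line: hex length, optionally followed by extensions.
        line_end = data.index(b"\r\n", pos)
        size = int(data[pos:line_end].split(b";")[0], 16)
        if size == 0:
            break  # last chunk reached
        start = line_end + 2
        body += data[start:start + size]
        pos = start + size + 2  # skip chunk data plus its trailing CRLF
    return body

# Example: two chunks, "Hello " and "World", then the terminator.
raw = b"6\r\nHello \r\n5\r\nWorld\r\n0\r\n\r\n"
print(decode_chunked(raw).decode())  # Hello World
```

This is only the happy path; a real client also has to buffer partial reads until a complete chunk has arrived.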

Related

HTTP Raw request default is throwing Response message:java.net.SocketTimeoutException: Timeout exceeded while reading from socket

I sent an HTTP Raw Request (default) to a host and port that I'm certain I can connect to from my Windows 10 device (tested with Test-NetConnection, and sent data via the Ubuntu app: echo "DEMO Message" | nc mYHOST 6021).
However, when I sent the request in JMeter, I got the errors below.
The sampler result is as follows:
Thread Name:Thread Group 1-1
Sample Start:2021-09-17 15:55:08 IST
Load time:9300
Connect Time:0
Latency:0
Size in bytes:921
Sent bytes:0
Headers size in bytes:0
Body size in bytes:921
Sample Count:1
Error Count:1
Data type ("text"|"bin"|""):text
Response code:500
Response message:java.net.SocketTimeoutException: Timeout exceeded while reading from socket
SampleResult fields:
ContentType:
DataEncoding: null
The response is as follows:
java.net.SocketTimeoutException: Timeout exceeded while reading from socket
java.net.SocketTimeoutException: Timeout exceeded while reading from socket
at kg.apc.io.SocketChannelWithTimeouts.read(SocketChannelWithTimeouts.java:133)
at kg.apc.jmeter.samplers.HTTPRawSampler.readResponse(HTTPRawSampler.java:64)
at kg.apc.jmeter.samplers.HTTPRawSampler.processIO(HTTPRawSampler.java:163)
at kg.apc.jmeter.samplers.AbstractIPSampler.sample(AbstractIPSampler.java:112)
at kg.apc.jmeter.samplers.HTTPRawSampler.sample(HTTPRawSampler.java:42)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:630)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.base/java.lang.Thread.run(Thread.java:830)
This demo host and port are working, though, and I used exactly the same ones in my real request.
Similar issues occur when I use a JSR223 Sampler with the code below:
def client = new Socket('DemoHostname', 9000);
client.setSoTimeout(2000);
client << "Hello ";
client.withStreams { input, output ->
    def reader = input.newReader()
    def response = reader.readLine()
    log.info('Response = ' + response)
}
client.close()
The same happens with JMeter's TCP Sampler.
Based on a suggestion, I then sent it as an HTTP request to the WebSocket input of Logstash.
The good news is that the record does get added, but the JMeter HTTP request runs endlessly. It seems that since Logstash communicates via raw TCP, it is unable to provide an HTTP response.
What are you trying to achieve by sending HELLO TEST to echo.websocket.org?
If you want to send an HTTP request there, you need to do something like:
GET / HTTP/1.1
Host: echo.websocket.org
Connection: close
If you need to load test a WebSocket server, you're using the wrong plugin; you should go for JMeter WebSocket Samplers instead, and consider using a more "alive" endpoint, as this echo.websocket.org doesn't seem to be working anymore.
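For completeness, the well-formed raw request shown above can be assembled programmatically. A hedged sketch in Python (the hostname is just the one from the question; Connection: close makes a simple read-until-EOF client loop work):

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """Assemble a minimal, well-formed raw HTTP/1.1 GET request.

    The Host header is mandatory in HTTP/1.1, and the blank line
    after the headers terminates the request.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"  # blank line ends the header block
    ).encode("ascii")

req = build_get_request("echo.websocket.org")
print(req.decode())
```

Whatever bytes a raw TCP sampler sends, they must follow exactly this shape before an HTTP server will answer.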

Why am I always getting a content length of 0 from an HTTP GET request?

I have a Telit HE910 cellular modem communicating over UART with an AVR. I am issuing the AT command for HTTP GET to a server. According to the modem datasheet, I should get three carets (<<<) and then the data stream from the server. I get the three carets but no data. The HTTP response code is 200 (OK) with a content length of 0. On the server side, I am logging the request, so I can verify the GET request is hitting the server. So on both the server and the modem I get OKs, but no data. I can perform a GET request to the server from another web page and it works fine. Does anyone have any ideas about what could cause this?
The PHP page just echoes strings in response to the request. For example:
<?php
header('Content-type: text/plain');
echo 'Test';
?>
AVR / Modem code
sprintf(buf, "#HTTPCFG=0,\"%s\",80,0,,,0,120,1", host);
AT_ASSERT(AT_OK == AT_SendCommand(buf, response));

sprintf(buf, "#HTTPQRY=%d,%d,%s%c",
        get_profile(),
        0,
        page,
        CR);
AT_ASSERT(AT_OK == AT_SendCommand(buf, response));

sprintf(buf, "AT#HTTPRCV=0%c", CR);
UART0_TxString(buf);

c = UART0_RxChar();
response[0] = c;
i = 0;
while ((c = UART0_RxChar()) != CR)
{
    response[i++] = c;
}
UART1_Flush();
UART1_Printf("received: %s\n", response);
Response in the terminal:
received:
<<<
HTTP POST Complete
There should be data after the <<< according to the datasheet.
Okay, it turns out that nothing was wrong with the C code on the AVR. I found a test server for HTTP GET requests and tested the module successfully against that site. So the underlying issue is with the PHP file's handling of the HTTP GET request.

Netty http server responses

This is probably simple, but I couldn't figure it out. My Netty 4 based HTTP server is causing HTTP clients to hang on its response. It manages to send through its response payload (as observed using curl as a client), but the clients seem not to realize that the response has finished, and they wait indefinitely for it to complete. Observed using curl, as well as Firefox and Chrome.
Only if I modify the code to close the channel (channel.close, as seen inline below) do the clients acknowledge that the response is done. Otherwise, they just continue waiting for it to complete. I want the channel to stay open so that the next client request will not require opening a new connection (I want keep-alive behavior), so closing the channel isn't an option. I'm not sure how the server should mark the response as finished without closing the connection.
The server code:
val response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK)
val buf = new StringBuilder
buf.append("hello")
response.data.writeBytes(Unpooled.copiedBuffer(buf, CharsetUtil.UTF_8))
ctx.write(response).addListener(new ChannelFutureListener() {
  def operationComplete(channelFuture: ChannelFuture) {
    if (channelFuture.isSuccess) {
      println("server write finished successfully")
      //channelFuture.channel.close <===== if uncommented, clients receive the response, otherwise they just keep waiting forever
    }
    else
      println("server write failed: " + channelFuture.cause + "\n" + channelFuture.cause.getStackTraceString)
  }
})
What am I missing?
You need a Content-Length header, or else the client won't know when to stop reading, and will continually poll for more data.
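The rule is framework-independent: a response that carries Content-Length tells the client exactly where the body ends, so the connection can stay open for the next request. A sketch in Python rather than Netty, purely for illustration:

```python
def build_response(body: bytes) -> bytes:
    """Build a minimal HTTP/1.1 response with a Content-Length header.

    Without Content-Length (or chunked encoding), a keep-alive client
    has no way to know the body is complete and will wait for more data.
    """
    headers = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

print(build_response(b"hello").decode())
```

In Netty itself, something along the lines of HttpUtil.setContentLength(response, response.content().readableBytes()) sets this header in 4.1-era APIs; the exact helper name depends on the Netty version (the question's response.data API is from an early 4.0 snapshot), so treat that call as an assumption to verify against your version.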

Keeping socket open after HTTP request/response to Node.js server

To support a protocol (Icecast Source Protocol) based on HTTP, I need to be able to use a socket from Node.js's http.Server once the HTTP request is finished. A sample request looks like this:
Client->Server: GET / HTTP/1.0
Client->Server: Some-Headers:header_value
Client->Server:
Server->Client: HTTP/1.0 200 OK
Server->Client:
Client->Server: <insert stream of binary data here>
This is to support the source of an internet radio stream, the source of the stream data being the client in this case.
Is there any way I can use Node.js's built in http.Server? I have tried this:
this.server = http.createServer(function (req, res) {
    console.log('connection!');
    res.writeHead(200, {test: 'woot!'});
    res.write('test');
    res.write('test2');
    req.connection.on('data', function (data) {
        console.log(data);
    });
}).listen(1337, '127.0.0.1');
If I telnet into port 1337 and make a request, I am able to see the first couple characters of what I type on the server console window, but then the server closes the connection. Ideally, I'd keep that socket open indefinitely, and take the HTTP part out of the loop once the initial request is made.
Is this possible with the stock http.Server class?
Since the client is reporting HTTP/1.0 as the protocol version, the server is probably closing the connection. If your client is something you have control over, you might want to try setting the keep-alive header (Connection: Keep-Alive is the right one, I think).
My solution to this problem was to reinvent the wheel and write my own HTTP-ish server. Not perfect, but it works. Hopefully the innards of some of these stock Node.js classes will be exposed some day.
I was in a similar situation, here's how I got it to work:
http.createServer(function (req, res) {
    // Prepare the response headers
    res.writeHead(200);
    // Flush the headers to the socket
    res._send('');
    // Inform the http.ServerResponse instance that we've sent the headers
    res._headerSent = true;
}).listen(1234);
The socket will now remain open, as no http.serverResponse.end() has been called, but the headers have been flushed.
If you want to send response data (not that you'll need to for an Icecast source connection), simply:
res.write(buffer_or_string);
res._send('');
When closing the connection just call res.end().
I have successfully streamed MP3 data using this method, but haven't tested it under stress.

Why does recv() return 0 bytes on every for-loop iteration except the first one?

I'm writing a small networking program in C++. Among other things, it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and I think my next step is to create a for loop that sends GET messages through the socket and saves the pictures to different PNG files. The problem is that when I send the very first message, receive the answer segments, and save the PNG data, everything seems fine. But right on the next iteration, the same message, sent through the same socket, produces 0 received bytes from the recv() function. I solved the problem by moving the socket creation code into the loop body, but I'm a bit confused about the socket concepts. It looks like after I send a message, the socket has to be closed and recreated to send the next message to the same server (in order to get the next image). Is this the right way of doing socket network programming, or is it possible to receive several HTTP response messages through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in a loop.
int i = 0;
for (i = 0; i < imageLinks.size(); i++)
{
    // A new socket is returned from serverConnect. Why do we need to create a new one at each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);

    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";

    // Send the message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);

    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();

    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link Server Attention Span
Wiki HTTP
The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times.
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
See also the HTTP 1.1 RFC.
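Reusing a connection only works if the client can tell where each response ends; with HTTP/1.1 that usually means honoring the Content-Length header (or chunked encoding). A sketch of the framing logic against canned bytes (assuming Content-Length framing only, no chunked bodies):

```python
def split_response(data: bytes):
    """Split one HTTP response off the front of a byte stream.

    Returns (headers, body, remainder) so the remainder can be parsed
    as the next response on a persistent connection.
    """
    head, rest = data.split(b"\r\n\r\n", 1)
    headers = {}
    for line in head.split(b"\r\n")[1:]:  # skip the status line
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    length = int(headers[b"content-length"])
    return headers, rest[:length], rest[length:]

# Two back-to-back responses on one "connection".
canned = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
          b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nbye")
hdrs, body, remainder = split_response(canned)
print(body)  # b'hello'
```

This is why a naive read-until-recv()-returns-0 loop only works for the one-connection-per-request style: on a persistent connection, recv() returning 0 means the peer closed, which under HTTP/1.1 may never happen between responses.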
