Node-RED TCP bytes being dropped

I'm trying to send 20480 bytes (5120 int32s) every 100 ms via a TCP request node to my Python socket server running on localhost.
Somehow my server is only receiving bytes at roughly 1/10th of this rate. I haven't worked with binary data in JS before and have never used Node-RED, so I'm likely doing something wrong.
I added a debug node to confirm that msg.payload.length is 20480 and that approximately ten messages per second are generated.
From my function node:
var abuff = new ArrayBuffer(5120 * 4);
var tmpArr = new Int32Array(abuff);
for (var i = 0; i < 5120; i++) {
    // fill tmpArr
}
msg.payload = Buffer.from(abuff);
return msg;
I've inspected some of the bytes on the Python side and they appear correct; there just aren't enough of them.
Why is my data rate incorrect?

Related

When does a broken pipe occur in a TCP stream?

I am trying to write an echo server in Rust.
use std::net::{TcpStream, TcpListener};
use std::io::prelude::*;

fn main() {
    let listener = TcpListener::bind("0.0.0.0:8000").unwrap();
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("A connection established");
        handle_connection(stream);
    }
}

fn handle_connection(mut stream: TcpStream) {
    let mut buffer = [0; 512];
    stream.read(&mut buffer).unwrap();
    println!("Request: {}", String::from_utf8_lossy(&buffer[..]));
    stream.write(&buffer[..]).unwrap();
    stream.flush().unwrap();
}
The first request with nc localhost 8000 works as expected, but subsequent requests don't. What am I doing wrong? Is the problem in how the server reads requests from clients? There is no error on the server side, though.
I am sending data by typing it in the terminal:
$ nc localhost 8000
hi
hi
hello
# no response
# on pressing enter
Ncat: Broken pipe.
A 'Broken pipe' message happens when you write to a stream where the other end has been closed. In your example, your handle_connection routine reads a single buffer from the client, copies that back to the client, and then returns, which will close the stream. When you run netcat from the terminal like that, the terminal defaults to line buffering, so each line you type will be sent to the server as a single write.
The first line is sent, read by the server, echoed back, and then the server closes the connection. Netcat gets a second line, writes that to the socket, and gets a 'Broken pipe' because the server has closed the connection.
If you want your server to read multiple messages, you need to have your handle_connection routine loop, reading from the stream until it gets an EOF.
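For illustration, here is the shape of that loop, sketched in C with POSIX sockets (the structure carries over directly to the Rust handle_connection above; read() returning 0 is the EOF condition):

#include <stdio.h>
#include <unistd.h>

/* Sketch: echo until the client closes its end of the connection. */
static void handle_connection(int fd) {
    char buffer[512];
    ssize_t n;
    /* read() returns 0 on EOF, i.e. when the client has closed the stream. */
    while ((n = read(fd, buffer, sizeof(buffer))) > 0) {
        /* A robust version would also loop here to handle short writes. */
        if (write(fd, buffer, (size_t)n) < 0) {
            perror("write");
            break;
        }
    }
    close(fd);
}

With that change, each line typed into nc is echoed back until nc itself closes the connection.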

Servlet.doPost() only called after the complete request has been received by the server

Why isn't doPost() called immediately?
The client opens an HttpUrlConnection and starts posting data, regularly flushing the output buffer. It uses 10 seconds to complete the post.
I need my servlet to receive the post and start reading from the InputStream as soon as the first bytes are received. However, doPost() is only called after the post has completed.
How can this be fixed?
Does this mean that the web container is buffering the request? How can I stop it from doing that?
Client code:
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setDoOutput(true);
con.setRequestMethod("POST");
OutputStream output = con.getOutputStream();
for (int i = 0; i < 10; i++) {
    byte[] buffer = new byte[10000];
    output.write(buffer, 0, buffer.length);
    output.flush();
    System.out.println("partial POST done." + DateTime.now());
    Thread.sleep(1000);
}
output.close();
output.close();
Servlet code:
protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException {
    System.out.println("doPost() " + DateTime.now());
    // ...
}
How does the Internet work? Before a request is sent from a browser to the server, the request is broken up into small pieces of data called packets.
Quoting from http://computer.howstuffworks.com/question525.htm:
Each packet carries the information that will help it get to its destination -- the sender's IP address, the intended receiver's IP address, something that tells the network how many packets this e-mail message has been broken into and the number of this particular packet. The packets carry the data in the protocols that the Internet uses: Transmission Control Protocol/Internet Protocol (TCP/IP). Each packet contains part of the body of your message. A typical packet contains perhaps 1,000 or 1,500 bytes.
Each packet is then sent off to its destination by the best available route -- a route that might be taken by all the other packets in the message or by none of the other packets in the message. This makes the network more efficient. First, the network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis. Second, if there is a problem with one piece of equipment in the network while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message.
So, for the server to be able to understand what the client is asking for, all the packets must be received and assembled first. What you are asking for is therefore practically impossible.

How to make a TCP receiver process an asynchronous (or no-echo) client?

I have written a simple HTTP server based on Webdis. Now I have a problem: when a client sends HTTP requests without ever receiving the responses (i.e. it only sends, and never reads the server's replies), the server ends up receiving multiple HTTP requests in a single read, and this makes the parse module fail (maybe that is a bug in the parse module). In case that is unclear, here is some of my code:
/* client... */
int fd = connect_server();          /* helper that connects to the server */
while (1) {
    send(fd, buf, sz, 0);           /* send only, never recv() the response */
}

/* server... */
/* some event triggers the following code */
char buffer[4096];                  /* a stack-based receive buffer, buggy */
ssize_t ret = recv(fd, buffer, sizeof(buffer), 0);
When the client sends 10 HTTP requests (together less than 4096 bytes) during the server's sleep (added for debugging), the next recv() receives all 10 requests at once, but the parser cannot parse multiple requests, so all of them fail. If the requests together are larger than 4096 bytes, one of them gets cut off and parsing still fails.
I have browsed the Nginx source code; it seems to be callback-driven (no blame intended), but I haven't found its solution there.
Is there any way to do the following: can I control the recv() call so that it receives only one request at a time? Or is there some TCP-related mechanism that makes it possible to receive exactly one send() per recv()?
This has nothing to do with synchrony or events, but with the streaming nature of such sockets. You will have to buffer previously received data, and you will have to implement certain parts of HTTP in their entirety in order to be able to mark an incoming request as complete, after which you can release it from your buffer and start parsing it.
TCP is by definition stream-based, so you can never be guaranteed that message borders will be respected. Strictly speaking, such a thing does not exist in TCP. However, using blocking sockets and making sure that Nagle's algorithm is disabled reduces the chance of each recv() containing more than one segment. Just for testing, you could also insert a sleep after each send(). You could also play around with TCP_CORK.
However, instead of hacking something together, I would recommend you implement receiving "properly", as you will have to do it at some point. For each recv() call, check whether the buffer contains the end of an HTTP request (\r\n\r\n), and then process it.
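A minimal sketch of that buffering approach in C, assuming requests with no body (handle_request is a hypothetical hook into your parser, and memmem() is a GNU/BSD extension):

#define _GNU_SOURCE             /* for memmem(), a GNU/BSD extension */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define BUF_SZ (64 * 1024)

struct conn {
    char buf[BUF_SZ];
    size_t used;
};

void handle_request(int fd, const char *req, size_t len);  /* hypothetical parser hook */

/* Call whenever the socket is readable; returns -1 on error or peer close. */
int on_readable(int fd, struct conn *c) {
    ssize_t n = recv(fd, c->buf + c->used, sizeof(c->buf) - c->used, 0);
    if (n <= 0)
        return -1;
    c->used += (size_t)n;

    for (;;) {
        /* One request ends at the first blank line (no body assumed). */
        char *end = memmem(c->buf, c->used, "\r\n\r\n", 4);
        if (end == NULL)
            break;                              /* incomplete: wait for more data */
        size_t req_len = (size_t)(end - c->buf) + 4;
        handle_request(fd, c->buf, req_len);    /* exactly one request at a time */
        memmove(c->buf, c->buf + req_len, c->used - req_len);
        c->used -= req_len;
    }
    return 0;
}

A full implementation would also have to honor Content-Length and chunked bodies, but this is the basic shape: buffer, find a complete request, consume it, repeat.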

Why recv() returns '0' bytes at all for-loop iterations except the first one?

I'm writing a small networking program in C++. Among other things it has to download Twitter profile pictures. I have a list (std::vector) of URLs, and I think my next step is to create a for loop that sends GET messages through the socket and saves the pictures to different PNG files.
The problem is that when I send the very first message, receive the answer segments, and save the PNG data, everything seems to be fine. But on the very next iteration the same message, sent through the same socket, produces 0 received bytes from the recv() function. I solved the problem by moving the socket creation code into the loop body, but I'm a bit confused about the socket concepts. It looks like after I send the message, the socket has to be closed and recreated to send the next message to the same server (in order to get the next image). Is this the right way to do socket programming, or is it possible to receive several HTTP response messages through the same socket?
Thanks in advance.
UPD: Here is the code with the loop where I create a socket.
// Get links from xml.
...
// Load images in cycle.
int i = 0;
for (i = 0; i < imageLinks.size(); i++)
{
    // New socket is returned from serverConnect. Why do we need to create a new one at each iteration?
    string srvAddr = "207.123.60.126";
    int sockImg = serverConnect(srvAddr);

    // Create a message.
    ...
    string message = "GET " + relativePart;
    message += " HTTP/1.1\r\n";
    message += "Host: " + hostPart + "\r\n";
    message += "\r\n";

    // Send a message.
    BufferArray tempImgBuffer = sendMessage(sockImg, message, false);

    fstream pFile;
    string name;
    // Form the name.
    ...
    pFile.open(name.c_str(), ios::app | ios::out | ios::in | ios::binary);
    // Write the file contents.
    ...
    pFile.close();

    // Close the socket.
    close(sockImg);
}
The other side is closing the connection. That's how HTTP/1.0 works. You can:
Make a different connection for each HTTP GET
Use HTTP/1.0 with the unofficial Connection: Keep-Alive
Use HTTP/1.1. In HTTP 1.1 all connections are considered persistent unless declared otherwise.
Obligatory xkcd link Server Attention Span
Wiki HTTP:
The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times.
HTTP in its original form (HTTP 1.0) is indeed a "one request per connection" protocol. Once you get the response back, the other side has probably closed the connection. There were unofficial mechanisms added to some implementations to support multiple requests per connection, but they were not standardized.
HTTP 1.1 turns this around. All connections are by default "persistent".
To use this, you need to add "HTTP/1.1" to the end of your request line. Instead of GET http://someurl/, do GET http://someurl/ HTTP/1.1. You'll also need to make sure you provide the "Host:" header when you do this.
Note well, however, that even some otherwise-compliant HTTP servers may not support persistent connections. Note also that the connection may in fact be dropped after very little delay, a certain number of requests, or just randomly. You must be prepared for this, and ready to re-connect and resume issuing your requests where you left off.
See also the HTTP 1.1 RFC.
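For illustration, a sketch in C of issuing an HTTP/1.1 request with the required Host header over an already-connected socket (the host and path are placeholders; a real client must also use Content-Length or chunked transfer encoding to find where each response ends before reusing the connection):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* 'fd' is an already-connected TCP socket; host and path are placeholders. */
static void send_get(int fd, const char *host, const char *path) {
    char req[512];
    int len = snprintf(req, sizeof(req),
                       "GET %s HTTP/1.1\r\n"
                       "Host: %s\r\n"
                       "Connection: keep-alive\r\n"
                       "\r\n", path, host);
    send(fd, req, (size_t)len, 0);
    /* Read and fully consume the response here before sending the next GET
       on the same socket. */
}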

Strange behavior using SO_SNDBUF on non-blocking TCP socket under windows

I'm trying to lower the send buffer size on my non-blocking TCP socket so that I can properly display an upload progress bar but I'm seeing some strange behavior.
I am creating a non-blocking TCP socket, setting SO_SNDBUF to 1024, verifying that it is set properly, and then connecting (I tried this before and after the call to connect with no difference).
The problem is that when my app actually comes around and calls send (sending about 2 MB), rather than returning that around 1024 bytes were sent, the send call apparently accepts all the data and returns a sent value of 2 MB (exactly what I passed in). Everything operates properly (this is an HTTP PUT and I get a response, etc.), but what I end up displaying in my progress bar is the upload sitting at 100% for about 30 seconds before the response comes in.
I have verified that if I stop before getting the response, the upload does not complete, so it's not like it just uploaded really fast and the server stalled... Any ideas? Does Windows even look at this setting?
Windows does look at this setting, but it does not work the way you expect it to.
When you set the size of those buffers, you're actually setting the size of the buffers on the actual NIC you're communicating with, thus determining the size of the packets that are going out.
What you need to know about Windows is that there is a buffer between your calling code and the actual NIC, and I'm not sure that you can control the size of that one. What happens is that when you call the send operation on your socket, you're dumping the data into that buffer, and the Windows kernel then performs small, step-by-step sends on the NIC using the data in that buffer.
This means that the code will report 2 MB as 'sent', but this just means that your 2 MB of data has been successfully written into the internal buffer; it does not mean/guarantee that the data has already been sent.
I've been working on similar projects with video streaming and TCP communications; this information is available somewhere on the MSDN forums and TechNet, but it requires some really detailed searching on how it all actually works.
I observed the same thing on Windows, using a Java non-blocking channel.
According to http://support.microsoft.com/kb/214397
If necessary, Winsock can buffer significantly more than the SO_SNDBUF buffer size.
This makes sense; the send is initiated by a program on the local machine, which is presumed to be cooperative and not hostile. If the kernel has enough memory, there's no point in rejecting the send data; someone has to buffer it anyway. (The receive buffer is for the remote program, which may be hostile.)
The kernel does have limits on this buffering of send data. I'm writing a server socket, and the kernel accepts at most 128K per send; not the 2 MB in your example, which is for a client socket.
Also according to the same article, the kernel only buffers 2 sends; the next non-blocking send should return immediately, reporting 0 bytes written. So if we only send a small amount of data each time, the program will be throttled by the receiving end, and your progress indicator would work nicely.
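For reference, this is roughly what setting and reading back SO_SNDBUF looks like, sketched with the BSD-style sockets API (on Winsock the option value is passed as a char pointer, hence the casts); as described above, Winsock may still accept far more data per send() than this value:

#include <stdio.h>
#include <sys/socket.h>

/* Sketch: shrink the send buffer on an existing TCP socket and read it back. */
static void set_small_sndbuf(int fd) {
    int sndbuf = 1024;
    socklen_t len = sizeof(sndbuf);

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const char *)&sndbuf, sizeof(sndbuf)) != 0)
        perror("setsockopt(SO_SNDBUF)");

    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, (char *)&sndbuf, &len) == 0)
        printf("SO_SNDBUF is now %d bytes\n", sndbuf);
    /* Even so, a single non-blocking send() may report far more bytes
       accepted than this, as the answers here describe. */
}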
The setting does not affect anything on the NIC; it is the Kernel buffer that is affected. It defaults to 8k for both Send and Receive.
The reason for the behavior you are seeing is this: the send buffer size is NOT the limit of the amount you can send at one time, it is the "nominal" buffer size. It really only affects subsequent sends when there is still data in the buffer waiting to be sent.
For example:
Set the send buffer to 101 bytes
Send 10 bytes, it will be buffered
Send 10 more bytes, it will be buffered
...continue until the buffer has 100 bytes in it
Send 10 more bytes
At this point WinSock uses some logic to determine whether to accept the new 10 bytes (and make the buffer 110 bytes) or block. I don't recall the behavior exactly but it is on MSDN.
Send 10 more bytes
This last one will definitely block until some buffer space is available.
So, in essence, the send buffer size is flexible and:
WinSock will always accept a send of almost any size if the buffer is empty
If the buffer has data and a write will overflow, there is some logic to determine whether to accept/reject
If the buffer is full or overflowed, it will not accept the new send
Sorry for the vagueness and lack of links; I'm in a bit of a hurry but happened to remember these details from a network product I wrote a while back.
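To make the last point concrete, here is a sketch (POSIX-style; on Winsock the retry error is WSAEWOULDBLOCK rather than EWOULDBLOCK/EAGAIN, and the 4096-byte chunk size is arbitrary) of sending in small chunks and counting only the bytes each send() call actually accepts; even then, the count reflects bytes handed to the kernel, not bytes acknowledged by the peer:

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: push 'len' bytes out of a non-blocking socket in small chunks,
   updating a progress count from what the kernel actually accepted. */
static int send_with_progress(int fd, const char *data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        size_t chunk = (len - sent > 4096) ? 4096 : (len - sent);
        ssize_t n = send(fd, data + sent, chunk, 0);
        if (n > 0) {
            sent += (size_t)n;
            printf("progress: %zu / %zu bytes handed to the kernel\n", sent, len);
        } else if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            /* Kernel buffer is full: wait for writability with select()/poll()
               before retrying instead of spinning here. */
            continue;
        } else {
            return -1;   /* real error, or the connection was closed */
        }
    }
    return 0;
}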
