Write bytes to a stream - what happens to the data? - TCP

If an application writes bytes to a port, and no one reads the data from that port, what happens to the data?
Suppose one application writes a char message to a port every minute (the message has an end tag), and the application that should read from that port is down. Will the messages be lost?

If the port is closed (over a TCP/IP or UDP/IP network), the packets will be dropped and the data will be lost. If it's open but the receiving application simply never reads it, the data will sit in the receive buffer until the app terminates, at which point it is lost. With local IPC, say a pipe, the data just sits in the pipe's buffer until the read end of the pipe is closed, at which point it is lost.
TL;DR: It's lost.
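
As a minimal illustration of the UDP case: sendto() reports success even though nothing ever receives the datagram. A sketch in C, assuming 192.0.2.1:9999 is a destination with no listener (both address and port are placeholders):

    /* UDP send to a port nobody listens on: the call "succeeds" locally,
     * but the datagram is silently discarded at the destination
     * (possibly answered by an ICMP port-unreachable that we ignore). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);            /* placeholder: no listener */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

        const char msg[] = "hello";
        ssize_t n = sendto(fd, msg, sizeof msg, 0,
                           (struct sockaddr *)&dst, sizeof dst);
        printf("sendto returned %zd - the data is lost either way\n", n);
        close(fd);
        return 0;
    }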

Related

How to read stream data from a TCP socket in Swift 2?

Let's suppose I have a custom server that listens for connections on some port and, once it has accepted a connection, starts sending data (a sort of logger). Here's the first question:
Can it be just binary data? Actually, I need just two non-zero 8-bit values, and I was thinking of a 0-value byte to separate each new portion of data.
These three bytes will be sent once or maybe twice a second.
So now I am looking for a code snippet in Swift 2 to properly read this data. Normally, I would expect to call
connectSocket(IP,port)
which would connect to the socket, and once it receives the first chunk of data,
socketCallBack()
is called, or something like that.
Intuitively, I don't like the idea of checking data in a while (true) loop. Or is this the proper way?
I've seen an example where the client first sends a 'get' request to the server and immediately starts waiting for a response. Probably I could call that on a timer, once a second? Would that be correct?
What I am concerned about is traffic. Right now I have implemented this through a web server, but I don't like that it wastes so much traffic on the HTTP overhead.
Probably, with TCP connections on a timer, that would be much less, and it would save even more traffic if I establish just one connection at the beginning and transmit all the data within this connection. Am I right?
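
The question asks for Swift 2, but the framing logic is language-independent; here is a minimal sketch in C of reading the zero-separated two-byte records described above, assuming fd is an already-connected TCP socket. Note that a blocking read loop does not busy-wait the way a naive while (true) poll would: read() sleeps in the kernel until data arrives, so no timer is needed, and a single long-lived connection avoids the per-request HTTP overhead.

    /* Collect bytes into two-byte records terminated by a 0x00 separator. */
    #include <stdio.h>
    #include <unistd.h>

    void read_records(int fd) {
        unsigned char buf[256];
        unsigned char rec[2];
        int have = 0;                  /* bytes collected for current record */
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            for (ssize_t i = 0; i < n; i++) {
                if (buf[i] == 0) {     /* separator: record complete */
                    if (have == 2)
                        printf("record: %u %u\n", rec[0], rec[1]);
                    have = 0;
                } else if (have < 2) {
                    rec[have++] = buf[i];
                }
            }
        }
        /* n == 0: server closed the connection; n < 0: read error. */
    }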

TCP-Connection - keep up or reconnect?

I have a general question regarding TCP/IP communication...
For the time being I am trying to set up a small communication link between an ATmega and a Raspberry Pi. I will transmit some data, for example every 5 minutes (e.g. 100 bytes), via the TCP/IP protocol.
Does it make sense to keep the connection open or shall I create a new connection for each dataset?
Thanks for your help...
webbolle
I would lean towards keeping the TCP connection open rather than opening a new one every time.
Here are a few reasons. First, by using the same connection you save on not having to exchange TCP handshake messages (SYN-based) and teardown messages (FIN-based); if you are going to transmit 100 bytes every 5 minutes, the overhead of the SYN/FIN exchanges might exceed the payload itself. Second, if you already have the connection open, you save time, since there is no need to reconnect. Third, TCP may go through slow-start every time you start a connection -- not a problem with 100 bytes, but if you ever need to send more, then with every new connection TCP would start its send window at 1 MSS, whereas on an existing connection TCP would (probably) use the current window.
Also:
An open connection doesn't consume any resources (bandwidth etc.) except for the ports it holds on both devices. Basically, every TCP connection that has been opened and not closed is still open, barring unintended disconnections etc.
For detecting those it also makes no difference whether you keep the connection open or reopen it:
if the connection dropped out in the meantime, you'll receive more or less the same error.
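
As a rough sketch of the "keep it open" approach, the sender below holds one long-lived connection and reconnects only when a send fails. The address 192.0.2.10 and port 5000 are hypothetical, and error handling is trimmed for brevity:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int dial(const char *ip, int port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void) {
        char payload[100] = {0};                 /* the ~100-byte dataset */
        int fd = -1;
        for (;;) {
            if (fd < 0)
                fd = dial("192.0.2.10", 5000);   /* hypothetical peer */
            if (fd >= 0 && send(fd, payload, sizeof payload, 0) < 0) {
                close(fd);                       /* dropped in the meantime: */
                fd = -1;                         /* reopen on the next round */
            }
            sleep(300);                          /* every 5 minutes */
        }
    }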

Continuously write to TCP socket without reading

I have a TCP client/server application written with Boost 1.53. Based on a client command, I have to start a server thread that writes some data to a socket. But I have no guarantee that the client application will ever start reading from this socket.
Is there any trouble with writing data to a socket that nobody reads from? Won't there be some socket overflow or data corruption?
Looking forward to hearing your ideas.
Thx,
Dmitry
What happens when sending data to a slow or non-cooperative remote side is covered by the flow control aspect of TCP.
Suppose you try to send data and the application on the remote side refuses to read it. Eventually the remote side's receive window becomes full, and it will indicate this by sending an ACK with a window size of 0. Your network stack stops trying to send new packets until an ACK with a larger window size arrives. If you keep trying to send data, it accumulates in the send buffer of your network stack. When that buffer becomes full, writing to your side of the socket blocks.
Using TCP, that won't be a problem. The server will detect that the client isn't reading, and hold off on sending more data until the client has acknowledged receipt of the already-sent data. In this case the server thread will block until the client is ready to accept more data.
Also, TCP packets are checksummed, so if any get corrupted in transit, the receiver will discard them and the sender will retransmit them.
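
To make the flow control concrete, here is a sketch of what "writing blocks" looks like from the sending side, assuming fd is a connected TCP socket whose peer never reads:

    /* Keep send()ing to a peer that never read()s. The peer's receive
     * window fills first, then the local send buffer; after that the
     * next send() simply blocks until the peer starts reading. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void flood(int fd) {
        char chunk[4096];
        memset(chunk, 'x', sizeof chunk);
        long total = 0;
        for (;;) {
            ssize_t n = send(fd, chunk, sizeof chunk, 0);
            if (n < 0)
                break;             /* connection error, not "overflow" */
            total += n;
            /* Typically this prints a few hundred KB (receive window
             * plus both socket buffers) and then goes quiet: the next
             * send() is blocked, and no data is corrupted or lost. */
            printf("queued %ld bytes so far\n", total);
        }
    }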

How does TCP connection terminate if one of the machine dies?

If a TCP connection is established between two hosts (A & B), and let's say host A has sent 5 octets to host B, and then host B crashes (for some unknown reason).
Host A will wait for acknowledgments, but on not getting them it will resend the octets and also reduce the sender window size.
This will repeat a couple of times until the window size shrinks to zero because of the packet loss. My question is: what will happen next?
In this case, TCP eventually times out waiting for the ACKs and returns an error to the application. The application has to read/recv from the TCP socket to learn about that error; a subsequent write/send call will fail as well. Up until the point where TCP determines that the connection is gone, write/send calls will not fail; they'll succeed as seen from the application, or block if the socket buffer is full.
In the case where your host B vanishes after it has sent its ACKs, host A will not learn about that until it sends something to B, which will eventually also time out or result in an ICMP error. (Typically the first write/send call will not fail, since TCP does not fail the connection immediately; keep in mind that write/send calls do not wait for ACKs before they complete.)
Note also that retransmission does not reduce the window size.
Please follow this link.
A very simple answer to your question, in my view: the connection will time out and be closed. Another possibility is that some ICMP error might be generated because of the unresponsive machine.
Also, if the crashed machine comes online again, the procedure described in the link pasted above will be observed.
This depends on the OS implementation. In short, it will wait for the ACK and resend packets until it times out; then your connection will be torn down. To see exactly what happens in Linux, look here; other OSes follow a similar algorithm.
In your case, a FIN will be generated (by the surviving node) and the connection will eventually migrate to the CLOSED state. If you keep grep-ing netstat output for the destination IP address, you will watch it migrate from the ESTABLISHED state to TIME_WAIT and then finally disappear.
In your case this is what will happen, since TCP keeps a timer waiting for the ACK of each packet it has sent. This timer is not long, so detection happens fairly quickly.
However, if machine B dies after A gets the ACK, and after that A doesn't send anything, then the above timer can't detect the event; instead, another timer (the idle timeout) will detect the condition and the connection will be closed then. This timeout period is high by default, but normally that is not a problem: machine A will usually try to send something in between and will detect the error condition on the send path.
In short, TCP is smart enough to close the connection by itself (and let the application know about it) except for one case: the idle timeout, which by default is very high.
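
As a sketch of how that idle detection can be tightened, TCP keepalive can be enabled per socket. SO_KEEPALIVE is portable; the TCP_KEEP* knobs below are Linux-specific, and the values are arbitrary examples (with these settings a dead peer on an otherwise idle connection would be detected after roughly 60 + 5*10 seconds):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int fd) {
        int on = 1, idle = 60, interval = 10, count = 5;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof idle);     /* secs idle before probing */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval); /* secs between probes */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof count);    /* failed probes before reset */
    }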
cforfun
In normal cases, each side terminates its end of the connection by sending a special message with the FIN (finish) bit set.
The device receiving this FIN responds with an acknowledgement of the FIN to indicate that it has been received.
The connection as a whole is not considered terminated until both devices have completed the shutdown procedure by sending a FIN and receiving an acknowledgement.

Can TCP handle a stream which never ends in a single connection?

This is more of a theoretical question. Let us say there is an infinite data source, which keeps pushing data every second: some device that monitors "solar events" and sends events to a back-end system continuously, every nanosecond (meaning it is a continuous stream). The back-end system wants to transmit the live data to another remote system over TCP. Can TCP handle an infinite data stream in a single TCP connection?
I'm aware of the sequence-number limitation, but with TCP timestamps the sequence numbers will wrap around properly, so that should not pose a problem. Also, assume that the system has several terabytes of memory (which can be considered close to an infinite memory model). If I just give the base address of where the stream starts, will TCP be able to proceed (segmenting, transmitting, retransmitting, etc.) continuously in a single TCP connection, without caring whether the data ever ends?
My guess is that since TCP never expects any stream-length parameter, it should be possible. Am I right?
Basically, yes. As long as the data is byte ('octet') aligned, data on TCP streams can be piped anywhere (see any router). TCP communication is a byte stream - it doesn't care about message boundaries. The windowed protocol has built-in flow control, so it should all work.
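
To illustrate the "no message boundaries" point, here is a minimal sketch, assuming fd is a connected TCP socket; any record structure the receiver needs must be encoded in the byte stream itself:

    #include <sys/socket.h>

    void two_sends(int fd) {
        send(fd, "hello ", 6, 0);
        send(fd, "world", 5, 0);
        /* The receiver may see "hello world" in one read(), or
         * "hel" + "lo world" across two, etc. TCP only promises that
         * the bytes arrive in order, not how they are grouped. */
    }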
