Say I am receiving data over TCP using the recv function in C++.
This might sound stupid, but I would like to know: would I get any speedup if I captured the packets with a simple sniffer (maybe using pcap) and processed them instead?
Thanks
No, it probably won't speed up anything. I rather expect it to be even slower and more memory-consuming. (overhead, overhead, overhead...).
Additionally, it won't work at all.
No payload will be exchanged if there isn't a real client
which creates a proper connection with the peer.
If there is a connection and you're relying only on the sniffer without properly receiving the payload in the client, the whole transfer will stall after some amount of data, because the receive buffer is full and the sender won't send any more until there is space again.
That means you must call recv, which makes sniffing useless in the first place.
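To illustrate, here is a minimal recv() loop (just a sketch; `sock` is assumed to be an already-connected TCP socket and the 64 KiB buffer size is arbitrary). It is exactly this draining of the socket that keeps the receive window open:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <vector>

// Read from the socket in a loop; each recv() frees space in the kernel's
// receive buffer so the peer can keep sending.
void drain_socket(int sock) {
    std::vector<char> buf(64 * 1024);
    for (;;) {
        ssize_t n = recv(sock, buf.data(), buf.size(), 0);
        if (n == 0) break;                        // peer closed the connection
        if (n < 0) { /* handle EINTR/errors */ break; }
        // ...process buf[0..n) here...
    }
}
```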
Web games are forced to use TCP.
But with real-time constraints, TCP's head-of-line blocking behavior is absurd when you don't care about old packets.
While I'm aware that there's definitely nothing we can do on the client side, I'm wondering if there is a solution on the server side.
Indeed, on the server you get packets in order and wait miserably if packet t+42 has been lost, even though packets t+43 and t+44 may already be sitting nicely in your receive buffer.
Since we are talking about data that is already on the local machine, technically it should be possible to retrieve it.
So does anyone have an idea on how to perform that feat?
How to save this precious data from these pesky kernel space daemons?
TCP guarantees that data arrives in order and retransmits lost packets (see the TCP man page).
Given this, there is only one way to achieve the results you want under your stated constraints, and that is to hack the TCP protocol at the server side (assuming you cannot control the client WebSocket behavior). The simplest approach, relatively speaking, would be to open a raw socket, implement your own simple TCP handshake (SYN-ACK when the client SYNs), then read and write from the socket, managing your own TCP headers. Your custom implementation would need to keep track of received sequence numbers and acknowledge all of those you want the client to forget about.
You might be able to reduce effort by making this program a proxy to your original.
Example of TCP raw socket here.
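Very rough sketch of what that raw-socket approach looks like on Linux (the struct names and fields are the Linux/glibc ones; it requires root or CAP_NET_RAW, and actually crafting your own SYN-ACKs and ACKs with IP_HDRINCL is left out):

```cpp
#include <sys/socket.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // A raw socket that hands us whole TCP segments, IP header included.
    int raw = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if (raw < 0) { perror("socket"); return 1; }

    char pkt[65536];
    for (;;) {
        ssize_t n = recv(raw, pkt, sizeof(pkt), 0);
        if (n <= 0) break;
        const iphdr  *ip  = reinterpret_cast<const iphdr *>(pkt);
        const tcphdr *tcp = reinterpret_cast<const tcphdr *>(pkt + ip->ihl * 4);
        std::printf("seq=%u ack=%u\n", ntohl(tcp->seq), ntohl(tcp->ack_seq));
        // ...track which sequence numbers you have seen and build/send your
        //    own ACKs (via a second raw socket with IP_HDRINCL) here...
    }
    close(raw);
    return 0;
}
```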
I have been testing a program which has simple communication between two machines over a 1Gbps line. While running TCP communications over the line I occasionally receive write errors on the client side (due to a timeout) when the network is totally flooded (running at or close to 100% usage). This generally happens when I am running multiple instances of the same program going to different ports.
My question is: is it possible to get a write error on the client but still receive the message on the server side? It appears that is what is happening, and I am not quite sure why. Could it be that the ACK coming back to the client is what is timing out?
Yes, that is possible. TCP does not guarantee that data reported as sent successfully was actually received, nor that data whose send failed was not received. This problem is unsolvable; it is known as the Two Generals' Problem. There is always a way to lose messages/packets such that the sender comes to the wrong conclusion. TCP guarantees that the receiver receives the same stream of bytes that the sender sent, but possibly cut off at an arbitrary point.
This unreliability has performance reasons, too. TCP data is buffered on both hosts as well as on the network. Acknowledgement is delayed.
You have to live with this. If you make your scenario more concrete I can suggest some strategies of dealing with this.
send puts data into the TCP send buffer.
If the send buffer does not have enough space, send will block until the data is completely or partially copied into the send buffer, or until the configured timeout expires.
Read and write timeouts are OK; you should detect and handle them, typically by restarting the read/write operation after the timeout. You should also pay attention to read/write errors other than timeouts.
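For example, a minimal sketch of that retry idea (just an illustration; it assumes a connected blocking socket with SO_SNDTIMEO set, so a timeout surfaces as EAGAIN/EWOULDBLOCK):

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstddef>

// Keep calling send() until the whole buffer has been copied into the
// kernel send buffer, retrying on timeout/interruption instead of giving up.
bool send_all(int sock, const char *data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n > 0) { sent += static_cast<size_t>(n); continue; }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR))
            continue;   // send timeout or interruption: restart the write
        return false;   // any other error is a real failure
    }
    return true;
}
```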
I'm not sure if this is the correct place to ask, so forgive me if it isn't.
I'm writing computer monitoring software that needs to connect to a server. The server may send out relatively urgent messages, such as sounding or cancelling an alarm, and the client may send out data about the computer, such as screenshots. The data that the client sends isn't too critical on timing, but shouldn't be more than two minutes late.
It is essential to the software that port forwarding need not be set up, and it is assumed that the internet connection will almost always go through a wireless router that does NAT.
My idea is to have a TCP connection initiated from the client, and use that to transfer data. Ideally, I would have no data being sent when it is not needed, but I believe this to be impossible. Would sending the equivalent of a ping every now and again keep the connection alive, and what sort of bandwidth would it use if this program was running all the time on the computer? In addition, would it be possible to reduce the header size for these keep-alives?
Before I start designing the communication and programming, is this plan for connection flawed? Are there better alternatives?
Thanks!
1) You do not need to send 'ping' data just to keep the connection alive; the TCP stack can do this for you once keep-alive is enabled on the socket (see the sketch after this list). One reason for sending application-level 'ping' data would be to detect a connection close on the client side - typically you only find out something has gone wrong when you try to read from or write to the socket. There may be a way to change various timeouts so you can detect this condition faster.
2) In general, while TCP provides a stream-oriented, error-free channel, it makes no guarantees about timeliness; if you are using it over the internet, it is even more unpredictable.
3) For applications such as this (I hope you are making it for ethical purposes) - I would tend to use TCP, since you don't want a situation where the client receives a packet to raise an alarm but misses the one that turns it off again.
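Regarding point 1), here is a minimal sketch of enabling TCP keep-alive on the socket. SO_KEEPALIVE is the standard option; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT tunables and the values chosen here are Linux-specific and just an assumption about what might be reasonable:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

// Enable keep-alive probes on an already-connected TCP socket so the stack
// periodically checks the connection (and keeps NAT mappings warm) without
// application-level pings.
void enable_keepalive(int sock) {
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    int idle = 60, interval = 10, count = 5;   // seconds idle, probe interval, probe count
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
}
```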
In most descriptions of the TCP PUSH function, it is mentioned that the PUSH feature not only requires the sender to send the data immediately (without waiting for its buffer to fill), but also requires that the data be pushed to the receiving application on the receiver side, without being buffered.
What I don't understand is why TCP would buffer data on the receiving side at all. After all, TCP segments travel in IP datagrams, which are processed in their entirety (i.e. the IP layer delivers only an entire segment to the TCP layer, after doing any necessary reassembly of fragments of the IP datagram that carried the segment). Then why would the receiving TCP layer wait to deliver this data to its application? One case could be if the application were not reading the data at that point in time. But then, if that is the case, forcibly pushing the data to the application is not possible anyway. Thus, my question is: why does the PUSH feature need to dictate anything about receiver-side behavior? Given that an application is reading data at the time a segment arrives, that segment should be delivered to the application straightaway anyway.
Can anyone please help resolve my doubt?
TCP must buffer received data because it doesn't know when the application is actually going to read the data, yet it has already told the sender how much it is willing to receive (the advertised "window"). All this data gets stored in the receive window until it is read out by the application.
Once the application reads the data, it drops the data from the receive window and increases the size it reports back to the sender with the next ACK. If this window did not exist, then the sender would have to hold off sending until the receiver told it to go ahead which it could not do until the application issued a read. That would add a full round-trip-delay worth of latency to every read call, if not more.
Most modern implementations also use this buffer to hold packets that arrive out of order, so that the sender can retransmit only the lost ones rather than everything after them as well.
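As a small illustration of that buffering (SO_RCVBUF is the standard socket option; `sock` is just a placeholder for a connected socket):

```cpp
#include <sys/socket.h>
#include <cstdio>

// SO_RCVBUF reports the size of the kernel receive buffer that backs the
// advertised window; received data sits there until the application calls
// recv(), at which point the freed space is advertised back to the sender.
void show_receive_buffer(int sock) {
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    std::printf("kernel receive buffer: %d bytes\n", rcvbuf);
}
```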
The PSH bit is not generally acted upon. Yes, implementations send it, but it typically doesn't change the behavior of the receiving end.
Note that, although the other comments are correct (the PSH bit doesn't impact application behaviour much at all in most implementations), it's still used by TCP to determine ACK behaviour. Specifically, when the PSH bit is set, the receiving TCP will ACK immediately instead of using delayed ACKs. Minor detail ;)
I'm writing a program using Java non-blocking sockets and TCP. I understand that TCP is a stream protocol, but the underlying IP protocol uses packets. When I call SocketChannel.read(ByteBuffer dst), will I always get the whole content of an IP packet, or may the read end at any position in the middle of a packet?
This matters because I'm trying to send individual messages through the channel; each message is small enough to be sent within a single IP packet without being fragmented. It would be cool if I could always get a whole message by calling read() on the receiver side; otherwise I have to implement some method to reassemble the messages.
Edit: assume that, on the sender side, messages are sent with a long interval(like 1 second), so they aren't going to group together in one IP packet. On the receiver side, the buffer used to call read(ByteBuffer dst) is big enough to hold any message.
TCP is a stream of bytes. Each read will return between 1 byte and the lesser of the buffer size that you supplied and the number of bytes that are available to read at that time.
TCP knows nothing of your concept of messages. Each send by the client can result in 0 or more reads being required at the other end. Zero or more because you might get a single read that returns more than one of your 'messages'.
You should ALWAYS write your read code such that it can deal with your message framing and either reassemble partial messages or split multiple ones.
You may find that if you don't bother with this complexity then your code will seem to 'work' most of the time; don't rely on that. As soon as you are running on a busy network or across the internet, or as soon as you increase the size of your messages, you WILL be bitten by your broken code.
I talk about TCP message framing some more here: http://www.serverframework.com/asynchronousevents/2010/10/message-framing-a-length-prefixed-packet-echo-server.html and here: http://www.serverframework.com/asynchronousevents/2010/10/more-complex-message-framing.html though it's in terms of a C++ implementation so it may or may not be of interest to you.
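As a rough illustration of length-prefixed framing (my own sketch, not code from those articles; it assumes a blocking C-style socket and a 4-byte big-endian length prefix):

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <cstdint>
#include <string>
#include <vector>

// Keep calling recv() until exactly 'len' bytes have arrived, however TCP
// chose to split them across segments.
bool read_exact(int sock, char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0) return false;   // error or connection closed mid-frame
        got += static_cast<size_t>(n);
    }
    return true;
}

// Read one framed message: a 4-byte big-endian length, then the payload.
bool read_message(int sock, std::string &out) {
    uint32_t netlen = 0;
    if (!read_exact(sock, reinterpret_cast<char *>(&netlen), sizeof(netlen)))
        return false;
    uint32_t len = ntohl(netlen);
    std::vector<char> payload(len);
    if (!read_exact(sock, payload.data(), len)) return false;
    out.assign(payload.data(), len);
    return true;
}
```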
The socket API makes no guarantee that send() and recv() calls correlate to datagrams for TCP sockets. On the sending side, things may get regrouped; e.g. the system may defer sending one datagram to see whether the application has more data. On the receiving side, a read call may retrieve data from multiple datagrams, or a partial datagram if the size specified by the caller requires splitting a packet.
IOW, the TCP socket API assumes you have a stream of bytes, not a sequence of packets. You need to make sure you keep calling read() until you have enough bytes for a request.
From the SocketChannel documentation:
A socket channel in non-blocking mode, for example, cannot read
any more bytes than are immediately available from the socket's input buffer;
So if your destination buffer is large enough, you should be able to consume all of the data currently in the socket's input buffer.