The Windows registry has a key TcpAckFrequency (default 2), which sets the number of TCP segments that can be received before an ACK is sent back. There is also a TcpDelAckTicks key defining the delay (default 2, i.e. 2 * 100 ms = 200 ms) after which an acknowledgment is sent anyway, even if TcpAckFrequency has not been reached.
Is there any way to change (temporarily) these parameters using WinSock API or other APIs, but without modifying the registry?
Thank you very much in advance.
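As far as I know there is no per-socket equivalent for TcpDelAckTicks, but for the ACK frequency something along these lines should work without touching the registry. This is only a sketch: it assumes the SIO_TCP_SET_ACK_FREQUENCY control code declared in mstcpip.h is available and supported on your Windows version, and it is only sparsely documented.

    /* Sketch: change the delayed-ACK behaviour for one socket via WSAIoctl.
     * A frequency of 1 asks the stack to ACK every segment immediately. */
    #include <winsock2.h>
    #include <mstcpip.h>
    #include <stdio.h>

    static int set_ack_frequency(SOCKET s, DWORD frequency)
    {
        DWORD bytes_returned = 0;
        int rc = WSAIoctl(s, SIO_TCP_SET_ACK_FREQUENCY,
                          &frequency, sizeof(frequency),
                          NULL, 0, &bytes_returned, NULL, NULL);
        if (rc == SOCKET_ERROR)
            fprintf(stderr, "WSAIoctl failed: %d\n", WSAGetLastError());
        return rc;
    }

The setting only affects the socket you call it on, so it is temporary in the sense that it disappears with the socket; nothing system-wide is changed.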
As far as I understand from "Is it possible to handle TCP flags with TCP socket?" and what I've read so far, a server application cannot access or handle TCP flags at all.
And from what I've read in the RFCs, the PSH flag tells the receiving host's kernel to forward the data from the receive buffer to the application.
I've found this interesting read https://flylib.com/books/en/3.223.1.209/1/ and it mentions that "Today, however, most APIs don't provide a way for the application to tell its TCP to set the PUSH flag. Indeed, many implementors feel the need for the PUSH flag is outdated, and a good TCP implementation can determine when to set the flag by itself."
"Most Berkeley-derived implementations automatically set the PUSH flag if the data in the segment being sent empties the send buffer. This means we normally see the PUSH flag set for each application write, because data is usually sent when it's written."
If my understanding is correct and the TCP stack decides by itself, based on various conditions, when to set the PSH flag, then what can I do if it doesn't set the PSH flag when it should?
I have a server application written in Java and a client written in C. There are 1000 clients, each on a separate host, and they all connect to the server. A keep-alive mechanism has the server send each client a request for some info every 60 seconds. The response is always smaller than the MTU (1500 bytes), so the response frames should always have the PSH flag set.
At some point the client sent 50 replies to a single request, all of them with the PSH flag not set. The buffer probably filled up before the client had even sent the same reply for the 3rd or 4th time, and the receiving application threw an exception because it read more data from the host's receive buffer than it was expecting.
My question is: what can I do in such a situation if I cannot communicate with the TCP stack at all?
P.S. I know the client should not send more than one reply, but in normal operation all the replies have the PSH flag set, and in this particular situation they didn't, which is not the application's fault.
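For what it's worth, whatever the stack does with PSH, the receiving application can be made independent of segmentation by reading until it has the complete expected message instead of assuming one read returns exactly one reply. A minimal sketch of such a read loop in C (on the Java side, DataInputStream.readFully() does the same job):

    /* Sketch: read exactly `len` bytes from a connected TCP socket,
     * regardless of how the sender's writes were segmented or whether
     * the PSH flag was set on any of the segments. */
    #include <sys/types.h>
    #include <sys/socket.h>

    ssize_t read_exactly(int fd, void *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
            if (n == 0)
                return 0;      /* peer closed the connection */
            if (n < 0)
                return -1;     /* error (check errno) */
            got += (size_t)n;
        }
        return (ssize_t)got;
    }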
I'm trying to send a large packet (9170 bytes) using fwrite to a TCP server:
fwrite($this->_socket, $data);
The problem is that it sends 8192 bytes first, then sends the remaining 978 bytes,
and I want to decrease the amount sent each time from 8192 to 1444.
The TCP layer will do this; you don't have to. If you write 9,170 bytes and the server only tries to read 1,444 of them, it will get up to the first 1,444 bytes. The next time the server tries to read, it will get the next byte or bytes.
The client doesn't have to arrange its transmissions to meet the reception requirements of the server. The TCP layer's flow control will handle this automatically.
You're solving a non-problem.
According to the fwrite documentation, the optional third parameter of fwrite is length. It denotes the maximum number of bytes that will be written before the end of the string is reached. Wouldn't this be the solution to your problem?
See the comments below the documentation; they contain examples of how to use fwrite with a length.
$data is a string, right? You can split it using substr() and then just keep sending.
In that case, just send 1444 bytes and wait for a user-level acknowledgement message from the server. That will give the appearance of sending 1444 bytes at a time. It will also be painfully slow.
The root problem is that TCP is not capable of sending messages any longer than one byte - it streams bytes.
Add a protocol on top of TCP that can send messages.
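A minimal sketch of such a protocol, in C for illustration (in PHP the equivalent idea would be to prepend something like pack('N', strlen($data)) to the payload): every message is sent as a 4-byte length prefix followed by the payload, so the receiver always knows where one message ends and the next begins.

    /* Sketch: send one application-level "message" over TCP as a 4-byte
     * big-endian length prefix followed by the payload bytes. */
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <stdint.h>

    static int send_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, 0);
            if (n <= 0)
                return -1;
            p   += n;
            len -= (size_t)n;
        }
        return 0;
    }

    int send_message(int fd, const void *payload, uint32_t len)
    {
        uint32_t be_len = htonl(len);   /* length prefix in network order */
        if (send_all(fd, &be_len, sizeof(be_len)) != 0)
            return -1;
        return send_all(fd, payload, len);
    }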
There are several articles on the internet about how to make UDP reliable. I have not been able to find one for C#, so maybe I can implement my own algorithm.
From researching on the internet, I believe UDP has two problems:
it does not ensure that all data reaches its destination;
data may reach its destination in a different order.
Maybe there is a third problem that I am missing in order to make it reliable.
If you are interested in knowing why I want to make UDP reliable and why I don't use TCP instead, take a look at this question. Believe me, I have been trying to do TCP hole punching for so long.
Anyway, maybe there is already a library that I can use with C# that will let me do this. Because I have not been able to find one, I have been thinking about the following algorithm:
"Imagine there are computer A and computer B, and computer A is the one sending the file to computer B."
Here are the steps I have been thinking of:
1) Computer A opens the file for reading; let's say it is 5000 bytes. That means computer A will have to send 5000 bytes to computer B, making sure no bytes are lost and that they arrive in the right order.
2) Computer A gets the first 500 bytes of the file and computes the hash of those bytes. So now computer A has two things: the hash of those 500 bytes and the bytes themselves. (The hash will use an efficient algorithm such as MD5, to make sure the data was received in the right order; that is, md5(1,2,3) != md5(2,1,3).)
3) Imagine the hash of those first 500 bytes comes out to be kj82lkdi930fi1.
4) Computer B should be listening for a hash and bytes.
5) Computer A sends the hash to computer B, and it sends the 500 bytes too. As soon as it sends that, it starts waiting for a reply.
6) Computer B should now receive the hash and the bytes. Computer B performs the same MD5 algorithm on the received bytes. If the result is equal to the hash that was received, it replies back to A with {1,1,1,1,1,1}; otherwise it replies with {2,2,2,2,2,2,2}.
6.5) Let's assume computer B got the data in the right order, so it replies {1,1,1,1,1}. It also saves the hash in memory or an array.
7) Computer A should be waiting for a response in order to send the next 500 bytes. Let's say it receives {1,1,1}. Because it received a 1, it knows it can proceed and send the next 500 bytes with a new hash of those 500 bytes.
8) Computer A sends the next 500 bytes with their hash.
9) Let's pretend computer B did not receive the data, so it does not reply back to A. Computer B will still wait for bytes and a hash.
10) Since computer A has not received a 1,1,1,1,1 or 2,2,2,2,2 for a reasonable amount of time, A will send the same bytes and hash again, a second time.
11) Let's assume computer B receives the hash and the bytes, but the bytes were received in a different order. When computer B calculates the hash on those bytes, it will not match the hash that was received. As a result, it will reply back with {2,2,2,2,2,2}.
12) If computer A receives the 2,2,2,2,2,2, it will send the same bytes and hash again. If it did not receive the 2,2,2,2,2 for some reason, it will send the same bytes and hash after some period of time. Let's pretend computer A receives 2,2,2,2,2.
13) Computer A sends the same bytes and hash for the 3rd time.
14) Computer B receives the hash and bytes in the right order. As a result, it replies 1,1,1,1,1,1 and saves that previous hash in memory (recall step 6.5).
15) Let's pretend computer A did not receive the 1,1,1,1 response from B. It will then send the same bytes for the 4th time.
16) Computer B checks the hash, and if it is equal to the last one that was accepted, it replies 1,1,1,1 again without writing those bytes to the file.
17) The algorithm continues like this until the file is transferred.
...
I mean, there are obviously some other things I need to add to this algorithm, such as letting computer B know when the transfer is done, maybe checking for more errors, and handling what happens if computer A gets disconnected for a long time. But the main protocol will be something like the one I described.
So do you think I should start implementing this algorithm? Should I increase the amount and send more bytes each time, say 1000 instead of 500? There are lots of articles on the internet that tell you about several techniques, but very few of them give you a working example in the language you want. In this case I need this in C#.
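For what it's worth, here is a rough sketch of the sender side of that loop (in C for brevity; the same structure ports to C# with UdpClient). To keep it short it uses an explicit block number in each datagram instead of comparing MD5 hashes, but the control flow is the one described above: send a block, wait for an acknowledgement, retransmit on timeout, and let the receiver re-ACK duplicates without writing them again.

    /* Sketch: stop-and-wait file transfer over UDP. Each datagram carries
     * a block number; the receiver echoes that number back as the ACK.
     * On timeout the sender simply retransmits the same block. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdint.h>

    #define BLOCK_SIZE 500

    struct block {
        uint32_t seq;                /* block number, network byte order */
        uint8_t  data[BLOCK_SIZE];
    };

    int send_file_blocks(int sock, const struct sockaddr_in *peer,
                         const uint8_t *file, size_t file_len)
    {
        struct timeval tv = { 1, 0 };            /* 1 s retransmit timeout */
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        uint32_t seq = 0;
        size_t offset = 0;
        while (offset < file_len) {
            size_t chunk = file_len - offset;
            if (chunk > BLOCK_SIZE)
                chunk = BLOCK_SIZE;

            struct block b;
            b.seq = htonl(seq);
            memcpy(b.data, file + offset, chunk);

            for (;;) {
                sendto(sock, &b, sizeof(b.seq) + chunk, 0,
                       (const struct sockaddr *)peer, sizeof(*peer));

                uint32_t ack;
                ssize_t n = recvfrom(sock, &ack, sizeof(ack), 0, NULL, NULL);
                if (n == (ssize_t)sizeof(ack) && ntohl(ack) == seq)
                    break;       /* this block was ACKed: move to the next */
                /* timeout, garbage, or a stale ACK: retransmit the block */
            }
            offset += chunk;
            seq++;
        }
        return 0;
    }

The receiver is symmetric: it writes a block to the file only if its number is the next one expected, and it always echoes back the number it received, so a lost ACK just causes a harmless retransmission and a repeated ACK, as in the duplicate-handling step above.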
The third problem is that data can be corrupted when you receive it.
You can start by reading the TCP RFC just to understand how TCP makes communication reliable.
Having that knowledge you can implement some of its techniques using UDP as transport.
Also take a look at this UDP network library http://code.google.com/p/lidgren-network-gen3/
So I have this real-time game, with a C++ server (using the SFML library) with Nagle disabled, and a client using AsyncSocket that also disables Nagle. I'm sending 30 packets every second. There is no problem sending from the client to the server, but when sending from the server to the clients, some of the packets are merging. For example, if I'm sending "a" and "b" in completely different packets, the client reads it as "ab". It only happens once in a while, but it causes a real problem in the game.
So what should I do? How can I solve this? Maybe it's something in the server? Maybe OS settings?
To be clear: I AM NOT using Nagle, but I still have this problem. I disabled it in both the client and the server.
For example, if I'm sending "a" and "b" in completely different packets, the client reads it as "ab". It only happens once in a while, but it causes a real problem in the game.
I think you have lost sight of the fundamental nature of TCP: it is a stream protocol, not a packet protocol. TCP neither respects nor preserves the sender's data boundaries. To put it another way, TCP is free to combine (or split!) the "packets" you send, and present them to the receiver any way it wants. The only restriction that TCP honors is this: if a byte is delivered, it will be delivered in the same order in which it was sent. (And nothing about Nagle changes this.)
So, if you invoke send (or write) on the server twice, sending these six bytes:
"packet" 1: A B C
"packet" 2: D E F
Your client side might recv (or read) any of these sequences of bytes:
ABC / DEF
ABCDEF
AB / CD / EF
If your application requires knowledge of the boundaries between the sender's writes, then it is your responsibility to preserve and transmit that information.
As others have said, there are many ways to go about that. You could, for example, send a newline after each quantum of information. This is (in part) how HTTP, FTP, and SMTP work.
You could send the packet length along with the data. The generalized form for this is called TLV, for "Type, Length, Value". Send a fixed-length type field, a fixed-length length field, and then an arbitrary-length value. This way you know when you have read the entire value and are ready for the next TLV.
You could arrange that every packet you send is identical in length.
I suppose there are other solutions, and I suppose that you can think of them on your own. But first you have to realize this: TCP can and will merge or break your application packets. You can rely upon the order of the bytes' delivery, but nothing else.
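As an illustration of the delimiter approach, here is a sketch of a buffered reader in C that hands back one newline-terminated record at a time, no matter how TCP split or merged the sender's writes (the struct and function names are just made up for the example):

    /* Sketch: buffer incoming bytes and return one '\n'-terminated record
     * per call. Initialise the reader with r->used = 0 before first use. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string.h>

    #define BUF_SIZE 4096

    struct line_reader {
        char   buf[BUF_SIZE];
        size_t used;
    };

    /* Returns 1 with one line (without '\n') in `out`, 0 on EOF, -1 on error. */
    int read_line(int fd, struct line_reader *r, char *out, size_t out_size)
    {
        for (;;) {
            char *nl = memchr(r->buf, '\n', r->used);
            if (nl) {
                size_t len = (size_t)(nl - r->buf);
                if (len >= out_size)
                    len = out_size - 1;        /* truncate oversized lines */
                memcpy(out, r->buf, len);
                out[len] = '\0';
                /* drop the consumed line (and its '\n') from the buffer */
                r->used -= (size_t)(nl - r->buf) + 1;
                memmove(r->buf, nl + 1, r->used);
                return 1;
            }
            if (r->used == sizeof(r->buf))      /* line longer than buffer */
                return -1;
            ssize_t n = recv(fd, r->buf + r->used, sizeof(r->buf) - r->used, 0);
            if (n == 0)
                return 0;                       /* peer closed connection */
            if (n < 0)
                return -1;
            r->used += (size_t)n;
        }
    }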
You have to disable Nagle in both peers. You might want to find a different protocol that's record-based such as SCTP.
EDIT2
Since you are asking for a protocol here's how I would do it:
Define a header for the message. Let's say I would pick a 32-bit header.
Header:
MSG Length: 16b
Version: 8b
Type: 8b
Then the real message comes in, having MSG Length bytes.
So now that I have a format, how would I handle things?
Server
When I write a message, I prepend the control information (the length is the most important, really) and send the whole thing. Having NODELAY enabled or not makes no difference.
Client
I continuously receive stuff from the server, right? So I have to do some sort of read.
Read bytes from the server. Any amount can arrive. Keep reading until you've got at least 4 bytes.
Once you have these 4 bytes, interpret them as the header and extract the MSG Length.
Keep reading until you've got at least MSG Length bytes. Now you've got your message and can process it.
This works regardless of TCP options (such as NODELAY), MTU restrictions, etc.
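A sketch of that client-side read loop in C, assuming the 4-byte header above with MSG Length sent in network byte order (the byte order is my assumption; just pick one and use it on both sides):

    /* Sketch: read one framed message - a 4-byte header (16-bit length,
     * 8-bit version, 8-bit type), then `length` payload bytes - from a
     * blocking TCP socket. read_exactly() is the "keep reading until you
     * have N bytes" loop from the steps above. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdint.h>

    static int read_exactly(int fd, void *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
            if (n <= 0)
                return -1;                   /* error or connection closed */
            got += (size_t)n;
        }
        return 0;
    }

    int read_message(int fd, uint8_t *payload, size_t max_len,
                     uint8_t *version, uint8_t *type, uint16_t *msg_len)
    {
        uint8_t header[4];
        if (read_exactly(fd, header, sizeof(header)) < 0)
            return -1;

        uint16_t len;
        memcpy(&len, header, sizeof(len));
        *msg_len = ntohs(len);               /* assumes network byte order */
        *version = header[2];
        *type    = header[3];

        if (*msg_len > max_len)
            return -1;                       /* message too big for caller */
        return read_exactly(fd, payload, *msg_len);
    }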
I am reading "Internetworking with TCP/IP" by Douglas Comer, and when it talks about creating a TCP connection, there is a problem:
Suppose an implementation of TCP uses initial sequence number 1 when it creates a connection. Explain how a system crash and restart can confuse a remote system into believing that the old connection remained open.
I can't figure out why; please help me. Thanks.
Consider why a connection may get duplicate sequence numbers normally.
Then consider how the receiving system would handle a packet with a "duplicate" sequence number (because the transmitting system started reusing sequence numbers in the packets it is sending to try to re-establish a connection).
Edit:
OP says:
But when re-establishing the connection, the transmitting system will send a segment with the SYN code bit set (and the sequence number set to 1, of course). Won't that (the SYN code bit being set) inform the receiving system that it is a new connection being established? See the wiki for Transmission_Control_Protocol; it says that "Only the first packet sent from each end should have this flag (SYN) set."
But packets get lost and delayed and arrive out of order. You can't simply say that everything arriving after the packet with the SYN flag is new. Let's say some of the old packets are delayed and arrive after the establishment of a new connection. How do you distinguish whether a packet with sequence number #10 is from the old connection or the new one? The worst-case scenario is that it's from the old connection and the receiving system accepts it as belonging to the new connection. When the real new-connection packet #10 arrives, it's ignored as an unnecessary retransmission. The stream is corrupted without any indication of it.
http://www.tcpipguide.com/free/t_TCPConnectionEstablishmentSequenceNumberSynchroniz.htm
... The problem with starting off each connection with a sequence number of 1 is that it introduces the possibility of segments from different connections getting mixed up. Suppose we established a TCP connection and sent a segment containing bytes 1 through 30. However, there was a problem with the internetwork that caused this segment to be delayed, and eventually, the TCP connection itself to be terminated. We then started up a new connection and again used a starting sequence number of 1. As soon as this new connection was started, however, the old segment with bytes labeled 1 to 30 showed up. The other device would erroneously think those bytes were part of the new connection.
... This is but one of several similar problems that can occur. ...
The other issue with a predictable initial sequence number, such as starting at 1 every time, is that the predictability presents a vulnerability:
A malicious person could write code to analyze ISNs and then predict the ISN of a subsequent TCP connection based on the ISNs used in earlier ones. This represents a security risk, which has been exploited in the past (such as in the case of the famous Mitnick attack). To defeat this, implementations now use a random number in their ISN selection process.
Mitnick attack - http://www.cas.mcmaster.ca/wiki/index.php/The_Mitnick_attack
It's far worse than that anyway - being predictable with sequence numbers makes spoofing and injection an order of magnitude easier.
After the restart, if the first TCP connection is made to the same remote system, the sequence number will again be 1 - consider what that will cause.