According to this Socket FAQ article, Nagle's algorithm is one of several mechanisms that can cause data to sit in the TCP buffer without hitting the wire. The delay introduced by the Nagle algorithm can be up to 200 ms.
For some reason, Nagle's algorithm can be turned off completely, but not flushed just once. This is really puzzling to me. Why is there no way to say, "just this one time, don't wait for any more data; act as if Nagle's 200 ms are up"?
Wouldn't that make perfect sense, and strike a good balance between no Nagle at all, Nagle all the time, and implementing one's own protocol from scratch?
Good question. I guess nobody has ever really needed it, or people have worked around it. If I remember correctly, enabling TCP_NODELAY pushes any buffered data out immediately. Then you can simply disable it again to restore Nagle's behavior.
Of course, this comes at the high cost of two system calls per "flush". What you could do: send(2) on Unix implementations has a flags argument. You could implement your own flag, something like MSG_JUSTPUSHIT (okay, maybe another name), and handle it in tcp_output().
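To make the toggle trick concrete, here is a minimal sketch of the TCP_NODELAY "flush" on Linux, where setting the option forces pending output onto the wire; the helper name and the omitted error handling are my own:

```c
/* Minimal sketch: "flush" a TCP socket by toggling TCP_NODELAY on Linux.
 * Assumes fd is a connected TCP socket; error handling is elided. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void tcp_flush_once(int fd)
{
    int on = 1, off = 0;

    /* Enabling TCP_NODELAY pushes any data Nagle is holding back... */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    /* ...and clearing it re-enables Nagle for subsequent sends. */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &off, sizeof(off));
}
```

As noted above, that is two system calls for every flush, which is exactly the overhead the hypothetical MSG_JUSTPUSHIT flag would avoid.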
In performance-sensitive applications where the delays introduced by Nagle's algorithm are an issue, it's often easier to disable Nagle's algorithm entirely and emulate its batching in software, either by using scatter/gather I/O (e.g., writev()) or by buffering in user space where needed. As an added bonus, doing this cuts out some system call overhead.
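A hedged sketch of the scatter/gather variant: with Nagle disabled, writev() still lets several small buffers leave in a single system call (and typically a single segment, if they fit within one MSS). The function name is illustrative:

```c
/* Sketch: batch small buffers with scatter/gather I/O instead of Nagle.
 * With TCP_NODELAY set, both pieces go out in one writev() call rather
 * than as two trickled writes. Error handling is elided. */
#include <sys/types.h>
#include <sys/uio.h>

static ssize_t send_batched(int fd, const void *hdr, size_t hdr_len,
                            const void *body, size_t body_len)
{
    struct iovec iov[2] = {
        { .iov_base = (void *)hdr,  .iov_len = hdr_len  },
        { .iov_base = (void *)body, .iov_len = body_len },
    };
    return writev(fd, iov, 2);   /* one system call for both buffers */
}
```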
Alternatively, you can open two separate sockets and disable Nagle on one of them. Just keep in mind that data sent on one socket won't necessarily be synchronized with data sent on the other.
Let's think about a scenario like this:
In an MMORPG, we send a packet to the server, and the server then does a lot of computation involving all the players who interact in some way (attack, heal, or something else). After that, we may receive several packets.
Since we may receive several packets, things get a little harder. If we only read one, we can just use a timestamp to measure the time cost, but here we cannot do that. So, how do we evaluate the performance of the traditional TCP/IP stack versus the DPDK process in a complex scenario like this?
If we only read one, we can just use a timestamp to measure the time cost, but here we cannot do that.
Answer> You can always register a callback handler on the RX path, so it gets invoked per packet.
How do we evaluate the performance of the traditional TCP/IP stack versus the DPDK process in a complex scenario like this?
Answer> Assuming you are running TLDK, mTCP, or ANS as your user-space stack, your best approach is to have a callback fire on each successful read.
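As a hedged illustration of the per-packet RX callback idea, here is a sketch using DPDK's rte_eth_add_rx_callback(); the timestamp log passed through user_param is my own invention, and EAL/port setup and error handling are omitted:

```c
/* Sketch: timestamp every received packet via a DPDK RX callback.
 * The callback runs after each successful RX burst on the given queue. */
#include <rte_ethdev.h>
#include <rte_cycles.h>

static uint16_t
rx_timestamp_cb(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
                uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
    uint64_t *ts_log = user_param;      /* hypothetical per-burst log */
    uint64_t now = rte_rdtsc();         /* TSC tick at burst arrival */

    for (uint16_t i = 0; i < nb_pkts; i++)
        ts_log[i] = now;                /* stamp each packet in the burst */

    (void)port; (void)queue; (void)max_pkts; (void)pkts;
    return nb_pkts;                     /* pass all packets through */
}

/* Registered once after the port is started, e.g.:
 *   rte_eth_add_rx_callback(port_id, queue_id, rx_timestamp_cb, ts_log);
 * Comparing these stamps against send-side stamps gives per-packet
 * latency even when one request fans out into several response packets. */
```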
I'm in a situation where, logically, UDP would be the perfect choice (I need to be able to broadcast to hundreds of clients). This is a very small and controlled environment (the whole network covers a few square meters, all devices are local, and the network is heavily over-provisioned, with gigabit Ethernet and switches everywhere).
Can I simply "ignore" all of the reliability machinery that usually has to be added on top of UDP (checking that messages arrived, resending them, etc.), since that mostly applies where packet loss is expected (the Internet)? Or is it really advisable to treat UDP as "may not arrive" even under such conditions?
I'm not asking for theorycrafting; I'm really wondering whether anyone can tell me from experience if I'm actually likely to lose UDP packets in such an environment, or whether that would be a really rare event. Obviously, sending things and assuming they arrived is much simpler than handling all the possible errors.
This is a matter of stochastics. Even in small local networks, packet losses will occur. Maybe a given packet is lost with probability 1e-10 in a normal usage scenario. Maybe more, maybe less.
Now for the real-world experience: network controllers and operating systems have a tough life in high-throughput scenarios, and it's even worse for switches. So if you're near the capacity of your network infrastructure, or of your computational power, losses become far more likely.
In the end, it's just a question of how high up in the networking stack you want to deal with errors. If you don't want to risk your application failing in 1 in 1e6 cases, you will need to add some flow/data integrity control (a minimal sketch follows below), which really isn't that hard. If you can live with the fact that the average program has to be restarted every once in a while, well, that's error correction at the user level...
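For instance, a minimal sketch of user-level integrity control: prepend a sequence number to every datagram so the receiver can at least detect loss and reordering. The 4-byte header layout and function names here are illustrative, not any standard:

```c
/* Sketch: detect UDP loss/reordering with a sequence-number header. */
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Sender: prepend a big-endian sequence number; returns datagram size. */
static size_t frame_datagram(uint8_t *out, uint32_t seq,
                             const void *payload, size_t len)
{
    uint32_t be = htonl(seq);
    memcpy(out, &be, sizeof(be));
    memcpy(out + sizeof(be), payload, len);
    return len + sizeof(be);
}

/* Receiver: returns how many datagrams went missing before this one. */
static uint32_t check_gap(const uint8_t *in, uint32_t *expected)
{
    uint32_t seq;
    memcpy(&seq, in, sizeof(seq));
    seq = ntohl(seq);
    uint32_t gap = seq - *expected;   /* 0 means nothing was lost */
    *expected = seq + 1;
    return gap;
}
```

Detection is the cheap part; once you also need retransmission, you quickly end up rebuilding TCP, which is the argument for reaching for a library like ZeroMQ instead.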
Generally, I'd encourage you not to take risks. CPU power is just too cheap, and in most cases bandwidth is too. Try ZeroMQ: it has broadcast communication models, ensures data integrity (resending if necessary), is available for practically all relevant languages, runs on all relevant OSes, and is (at least from my perspective) easier to use than raw UDP sockets.
Here's my scenario:
In my application I have several processes which communicate with each other using QuickFIX, which internally uses TCP sockets. The flow is like:
process 1 sends a QuickFIX message -> process 2 sends a QuickFIX message after processing the message from process 1 -> ... -> process n
Similarly, the acknowledgement messages flow like:
process n -> ... -> process 1
Now, all of these processes except the last one (process n) are on the same machine.
I googled and found that TCP sockets are the slowest of the IPC mechanisms.
So, is there a way to transmit and receive QuickFIX messages (obviously using their APIs) through other IPC mechanisms? If yes, I could then reduce latency by using that IPC mechanism between all the processes that are on the same machine.
However, if I do so, do those mechanisms guarantee delivery of complete messages the way TCP sockets do?
I think you are doing premature optimization, and I don't think TCP will be your performance bottleneck. Your local LAN latency will be far lower than that of your exterior FIX connection. From experience, I'd expect performance issues to originate in your app's message handling (perhaps due to accidental blocking in OnMessage() callbacks) rather than in the IPC that happens afterward.
Advice: Write your communication component behind an abstraction-layer interface so that later down the line you can swap out TCP for something else (e.g., ActiveMQ, ZeroMQ, or whatever else you may consider) if you decide you need it.
Aside from that, just focus on making your system work correctly. Once you are sure the behavior is correct (hopefully with tests to confirm it), then you can work on performance. Measure your performance before making any optimizations, and then measure again after you make "improvements". Don't trust your gut; get numbers.
Although it would be good to hear more details about the requirements behind this question, I'd suggest looking at a shared-memory solution. I'm assuming that you are running a server in a facility colocated with the trade-matching engine and using high-speed, kernel-bypass communication externally. One of the costs of TCP is the user/kernel-space transitions, so I'd recommend user-space shared memory for IPC, using a busy-polling technique for synchronization rather than synchronization mechanisms that might also involve kernel transitions.
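To make that concrete, here is a hedged sketch of the pattern under my own assumptions: a single-producer/single-consumer mailbox in a POSIX shared-memory segment, synchronized by busy-polling a C11 atomic flag instead of kernel primitives. The segment name, sizes, and missing error handling are all illustrative; link with -lrt on Linux:

```c
/* Sketch: busy-polled SPSC mailbox in POSIX shared memory. */
#include <fcntl.h>
#include <stdatomic.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MSG_MAX 1024

struct mailbox {
    atomic_int ready;        /* 0 = empty, 1 = message available */
    size_t     len;
    char       data[MSG_MAX];
};

static struct mailbox *mailbox_open(void)
{
    int fd = shm_open("/fix_ipc", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct mailbox));
    return mmap(NULL, sizeof(struct mailbox),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

static void mailbox_send(struct mailbox *mb, const char *msg, size_t len)
{
    while (atomic_load(&mb->ready))   /* busy-poll until the slot frees */
        ;
    memcpy(mb->data, msg, len);
    mb->len = len;
    atomic_store(&mb->ready, 1);      /* publish the message */
}

static size_t mailbox_recv(struct mailbox *mb, char *out)
{
    while (!atomic_load(&mb->ready))  /* busy-poll for a message */
        ;
    size_t len = mb->len;
    memcpy(out, mb->data, len);
    atomic_store(&mb->ready, 0);      /* release the slot */
    return len;
}
```

Busy polling burns a core on each waiting side, which is usually an accepted trade in a latency-sensitive colocated setup; a real implementation would also want a ring of slots rather than a single one.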
I was running a benchmark on CouchDB when I noticed that even with large bulk inserts, running a few of them in parallel is almost twice as fast. I also know that web browsers use a number of parallel connections to speed up page loading.
What is the reason multiple connections are faster than one? They go over the same wire, or even to localhost.
How do I determine the ideal number of parallel requests? Is there a rule of thumb, like "threadpool size = # cores + 1"?
The gating factor is not the wire itself, which, after all, runs pretty quickly (ignoring router delays), but the software overhead at each end. Each physical transfer has to be set up, the data sent and stored, and then completely handled before anything can go the other way. So each connection is effectively synchronous, no matter what it claims to be at the socket level: one socket operating asynchronously is still moving data back and forth in a synchronous way, because the software demands synchronicity.
A second connection can take advantage of the latency -- the dead time on the wire -- that arises from the software doing its thing for the first connection. So, even though each connection is synchronous, multiple connections let things happen much faster. Things seem (but of course only seem) to happen in parallel.
You might want to take a look at RFC 2616, the HTTP spec. It will tell you about the interchanges that happen to get an HTTP connection going.
I can't say anything about optimal number of parallel requests, which is a matter between the browser and the server.
Each connection consumes its own thread. Each thread gets a quantum in which to consume CPU, network, and other resources; mainly CPU.
When you start parallel calls, the threads compete for CPU time and run things "at the same time".
This is a high-level overview. I suggest you read about asynchronous calls and thread programming to understand it better.
So I'm writing a fairly simple game with very low networking requirements; I'm using TCP.
I'm unsure where to start in even defining/implementing a protocol for the client and server to use. I've been looking around and have seen a few examples, for instance Mojang's Minecraft, which uses a table of 'commands' the client sends the server and the server sends the client, with argument counts and such.
What's a good way to do this? I've heard complaints about Minecraft's protocol because if you overread by a byte you ruin the entire stream.
Game networking is a broad topic, and the right approach depends on what type of problem you are solving. TCP may not even be the correct choice for you.
For example, character movement is typically sent over UDP. The reasoning is that individual movement updates aren't critical to the operation of the game, so some loss is "acceptable". That may be why your character sometimes "jumps": some UDP packets were lost, or arrived severely out of order.
UDP is often argued to be the preferred protocol for networked games. So before you even get started, carefully consider whether you are picking the correct protocol.
Overall, I consider Glenn Fiedler's series on developing a networked game a fantastic read. I'd start here. He covers all of the basics of using UDP for gaming.
If you want to use TCP simply to get a handle on TCP, then Minecraft is a reasonable example. A known list of commands that can be sent back and forth is a simple way to start. However, as you stated, it is prone to some problems; that is more a matter of using the wrong protocol than of how the command table was designed.
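On the specific "overread by a byte and the whole stream is ruined" complaint, length-prefixed framing is the usual remedy. Here is a minimal sketch under my own naming; with a size on every message, a confused reader can at worst drop one frame rather than desynchronize forever:

```c
/* Sketch: length-prefixed message framing over TCP. */
#include <arpa/inet.h>
#include <stdint.h>
#include <unistd.h>

/* Read exactly n bytes (TCP may return short reads). */
static int read_full(int fd, void *buf, size_t n)
{
    uint8_t *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return -1;        /* EOF or error */
        p += r; n -= (size_t)r;
    }
    return 0;
}

/* Returns payload length on success, -1 on error/EOF. */
static ssize_t read_message(int fd, uint8_t *payload, size_t cap)
{
    uint32_t len_be;
    if (read_full(fd, &len_be, sizeof(len_be)) < 0) return -1;
    uint32_t len = ntohl(len_be);
    if (len > cap) return -1;         /* oversized frame: protocol error */
    if (read_full(fd, payload, len) < 0) return -1;
    return (ssize_t)len;
}
```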
Google "game networking library" and you'll get a bunch of results. GNE would be a good one to look at.
I guess it depends on what your game is, what its mechanics are, and what information is necessary. In any case, I think this Stack Exchange site, https://gamedev.stackexchange.com/, is better suited to answering your question.
Gamedev.net's networking forum has a great FAQ covering these sorts of questions and many others. However, to make this more than a 'go there, look at that' answer, I'll suggest some small improvements you can make. When using TCP, delivery is guaranteed, but the guarantee has a speed cost. That's fine if you're not making an FPS, but it means you need to get more out of the data you do send. A great way to do that is via deltas/differentials: send only the change in state, not the entire game state (a sketch follows below). You can also validate incoming packets for corrupt/anomalous data, over and above TCP's checks, by predicting which values are allowed; with the same prediction, you can cut out even more data. But as others have said, this is a broad question, and not suited to getting truly helpful answers.
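As a hedged illustration of the delta idea, with a made-up three-field state struct: a one-byte change bitmask says which fields follow, so unchanged fields cost nothing on the wire:

```c
/* Sketch: encode only the fields that changed since the last state. */
#include <stdint.h>
#include <string.h>

struct state { int16_t x, y; uint8_t hp; };   /* hypothetical game state */

/* Writes a 1-byte change mask plus only the changed fields; returns size. */
static size_t encode_delta(uint8_t *out, const struct state *prev,
                           const struct state *cur)
{
    uint8_t *p = out + 1, mask = 0;

    if (cur->x != prev->x)   { mask |= 1; memcpy(p, &cur->x, 2); p += 2; }
    if (cur->y != prev->y)   { mask |= 2; memcpy(p, &cur->y, 2); p += 2; }
    if (cur->hp != prev->hp) { mask |= 4; *p++ = cur->hp; }

    out[0] = mask;               /* receiver decodes by testing each bit */
    return (size_t)(p - out);    /* 1 byte if nothing changed */
}
```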
As you're coding in Lua, the library practically everyone uses is LuaSocket (though ZeroMQ is gaining ground).
You're really going to have several protocols going: TCP for data that must be received (e.g., server commands such as changemap or you_got_kicked, conversations, and such), then UDP for non-compulsory data, or data that quickly expires (e.g., character positions).
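A sketch of that split, shown in C for concreteness (the same two-socket pattern applies with LuaSocket); the addresses, ports, and payloads are placeholders, and error handling is omitted:

```c
/* Sketch: one reliable TCP channel plus one lossy UDP channel. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port = htons(4000) };
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    /* Reliable channel: commands, chat, kick notices. */
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(tcp_fd, (struct sockaddr *)&srv, sizeof(srv));

    /* Lossy channel: positions; connect() just fixes the default peer. */
    srv.sin_port = htons(4001);
    int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);
    connect(udp_fd, (struct sockaddr *)&srv, sizeof(srv));

    send(tcp_fd, "changemap\n", 10, 0);   /* must arrive */
    send(udp_fd, "pos 10 20\n", 10, 0);   /* may be dropped, soon stale */

    close(udp_fd);
    close(tcp_fd);
    return 0;
}
```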