I'm currently working on a networking library, but I don't know which approach to take. The library is meant to be used with games, and both reliable and unreliable packets are needed. Should I use "TCP and UDP", "UDP and SCTP", "UDP with a custom RUDP protocol", or "raw sockets and build everything from the ground up"? This question has kept me struggling for far too long! I think creating a "robust" RUDP protocol is the best solution, but can I actually make a robust one? (Extra work isn't a problem.)
Thanks for your time.
Each of those exists for a reason: TCP for somewhat slower but reliable connections, UDP for fast but unreliable ones. SCTP is not commonly used, so its implementations are not as battle-tested (not preferred). RUDP is best when you need both reliable and unreliable messages, and raw sockets are mostly used by organizations building their own networking stack from the ground up.
Related
Suppose I have two separate Go programs running on my localhost. Is TCP the best method for transferring data between the two programs in terms of performance?
The short answer is no. The TCP/IP stack is slow, especially the TCP part. So in terms of performance, you'd be better off using local inter-process communication methods, such as shared memory between your applications or Unix sockets.
If you MUST use a network stack to communicate (say, you plan to move applications between hosts), then UDP or raw sockets are the best options in terms of performance.
And only if you:
must use a network, and
need a reliable communication channel,
then TCP is a good option.
So just walk through your requirements and decide whether TCP is the best method for you.
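As a minimal sketch of the Unix-socket option mentioned above (the message contents here are purely illustrative): a connected Unix domain socket pair gives you kernel-local IPC without going through the TCP/IP stack at all.

```python
import socket

# A connected pair of Unix domain sockets: kernel-local IPC,
# no TCP/IP stack, no checksums, no congestion control.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.sendall(b"sensor reading: 42")
data = child.recv(1024)
print(data.decode())  # sensor reading: 42

parent.close()
child.close()
```

For two independent processes (rather than a parent/child pair), you'd bind an `AF_UNIX` socket to a filesystem path instead, but the performance argument is the same.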
It would work and give additional freedom to have the two programs run on different computers. But it is not the best in terms of performance.
For good performance, shared memory comes to mind.
https://en.wikipedia.org/wiki/Shared_memory
Maybe you could describe a bit more what exactly you want to do.
I don't have much experience with network programming, but an interesting problem came up that requires it. The server will be transmitting multiple streams of different types of data to other machines. Each machine should be able to choose which of the streams (one or more) it would like to receive. The whole setup is confined to the local network only. Initially there will be only two clients, but I would like to design a scalable approach, if possible.
The existing server code, which streams only a single stream, uses a TCP streaming socket to do so. However, from some reading on the subject, I am not sure this approach will scale well to multiple streams and multiple clients. The reason: wouldn't two clients that want the same stream but connect via separate TCP sockets waste bandwidth, since the same data is sent twice? Especially compared to UDP, which allows multicast.
Due to my inexperience, I am relying on better informed people out there to advise me: considering that I do want the stream to be reliable, would it be worth it to start from scratch with UDP and implement reliability on top of it, rather than keep using TCP? Or would this be better solved by designing an appropriate network structure? I'd be happy to provide more details if needed. Thanks.
UPDATE: I am looking at PGM and emcaster for reliable multicasting at the moment. Must have C# implementations at server side, and python implementations at client side.
Since you want a scalable design, UDP would be the better choice: it doesn't do the extra work of verifying that data has been received, which makes sending faster, and it supports multicast, so a single datagram can reach every subscribed client instead of being re-sent per connection.
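To make the multicast point concrete, here is a bare-bones UDP multicast sketch (the group address and port are arbitrary assumptions, and this is plain unreliable multicast, not PGM): the sender transmits one datagram, and every receiver that has joined the group gets a copy.

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007  # example multicast group/port (assumptions)

# Receiver: join the multicast group so the kernel delivers group traffic.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
recv.settimeout(2)

# Sender: one datagram per frame, regardless of how many clients subscribed.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)  # loop back locally for the demo
send.sendto(b"stream-1 frame", (GROUP, PORT))

data, _ = recv.recvfrom(1024)
print(data)
```

Any number of additional receivers could join the same group without adding any load on the sender, which is exactly the scaling property TCP per-client sockets lack. Reliability (what PGM and emcaster add) would still have to be layered on top.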
I'm not satisfied with some of TCP's algorithms, and I know it isn't practical to reimplement TCP on top of UDP. But I want to build a layer that remains compatible with other servers that rely on traditional TCP.
So I ask: can I manipulate IP directly on Linux or other *nix OSes?
I know about udt and other similar projects. I just need to keep compatibility so that I don't have to do much work across such a large number of servers.
If you want to remain compatible with other endpoints which implement standard TCP, then I assume you want to use the same protocol on the wire and make incremental improvements to it.
Your OS's existing TCP implementation lives in the kernel. If you want to improve it, I would say you're better off making changes there rather than reinventing it. If you do want to reinvent it and implement a whole TCP stack in userspace, then, sure, you can do it, but it's going to be A LOT OF WORK.
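To give a sense of what "a whole TCP stack in userspace" involves at the very first step: you have to construct every header yourself before handing the packet to a raw socket (which on Linux requires `CAP_NET_RAW`). A sketch of packing a bare 20-byte TCP header, with the checksum left at zero (a real stack must compute it over a pseudo-header):

```python
import struct

def tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack a 20-byte TCP header with no options.

    Illustrative only: the checksum field is left at 0 here; a real
    userspace stack must compute it over an IP pseudo-header before
    writing the segment to a raw socket.
    """
    offset_flags = (5 << 12) | flags  # data offset = 5 words (20 bytes), no options
    return struct.pack(
        "!HHIIHHHH",
        src_port, dst_port,
        seq, ack,
        offset_flags, window,
        0,  # checksum (placeholder)
        0,  # urgent pointer
    )

hdr = tcp_header(4000, 80, seq=0, ack=0, flags=0x02, window=65535)  # SYN
print(len(hdr))  # 20
```

And that is just framing; retransmission timers, congestion control, and connection-state tracking are where the real work lies, which is why patching the kernel implementation is usually the saner route.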
So I'm writing a fairly simple game with very low networking requirements, I'm using TCP.
I'm unsure where to start in even defining/implementing a protocol for the client and server to use. I've been looking around and I've seen a few examples, for instance Mojang's Minecraft which uses a table of 'commands' the client sends the server and the server sends the client, with numbers of arguments and such.
What's a good way to do this? I've heard complaints about Minecraft's protocol because if you overread by a byte you ruin the entire stream.
Game networking is a broad question, and the answer depends on what type of problem you are solving. TCP may not even be the correct choice for you.
For example, character movement is typically sent over UDP. The reason is that movement isn't critical to the operation of the game, so some loss of movement data is "acceptable". That may be why your character sometimes "jumps": some UDP packets were lost, or arrived severely out of order.
UDP is widely argued to be the preferred protocol for networked games. So before you even get started, carefully consider whether you are picking the correct protocol at all.
Overall, I consider Glenn Fiedler's series on developing a networked game a fantastic read. I'd start here. He covers all of the basics of using UDP for gaming.
If you want to use TCP simply to get a handle on TCP, then Minecraft is a reasonable example. A known list of commands that can be sent back and forth is a simple way to start. However, as you noted, it is prone to some problems. That is more a symptom of using the wrong protocol than of how it was developed.
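The "overread by a byte and ruin the entire stream" problem is usually solved with length-prefixed framing: every message carries its own length, so the reader always knows exactly where one message ends and the next begins. A minimal sketch (the message contents are made up):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix each message with a 4-byte big-endian length."""
    return struct.pack("!I", len(payload)) + payload

def read_frames(buf: bytes):
    """Yield complete messages from a byte buffer.

    A short read can never desynchronize the stream: if fewer bytes
    than the declared length have arrived, we simply wait for more.
    """
    while len(buf) >= 4:
        (n,) = struct.unpack("!I", buf[:4])
        if len(buf) < 4 + n:
            break  # incomplete message: wait for the rest
        yield buf[4:4 + n]
        buf = buf[4 + n:]

stream = frame(b"move 1 2") + frame(b"chat hi")
print(list(read_frames(stream)))  # [b'move 1 2', b'chat hi']
```

With this in place, a command table like Minecraft's becomes much safer, because a malformed or truncated command corrupts at most one frame rather than everything after it.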
Google "game networking library" and you'll get a bunch of results. GNE would be a good one to look at.
I guess it depends on what your game is, what it mechanics are, what information is necessary. In any case I think this stack exchange https://gamedev.stackexchange.com/ is more suited to answer your question.
Gamedev.net's networking forum has a great FAQ covering these sorts of questions and many others. However, to make this more than a 'go-there-look-at-that' answer, I'll suggest some small improvements you can make. With TCP, delivery is guaranteed, but that guarantee has a speed cost, which is fine if you're not making an FPS; it does mean you need to get more value out of the data you do send. A great way to do that is via deltas/differentials: send only the change in state, not the entire game state. You can also validate incoming packets for corrupt/anomalous data, over and above TCP's own checks, by predicting which values are allowed, and with the same prediction you can cut out even more data. But as others have said, this is a broad question and not well suited to getting truly helpful answers.
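The delta/differential idea above can be sketched in a few lines (the state fields here are invented for illustration): diff the current state against the last one the client acknowledged, send only the changed fields, and merge them back in on the other side.

```python
def delta(prev: dict, curr: dict) -> dict:
    """Return only the fields that changed since the last known state."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def apply_delta(state: dict, d: dict) -> dict:
    """Merge a received delta into the local copy of the state."""
    return {**state, **d}

prev = {"x": 10, "y": 20, "hp": 100}
curr = {"x": 12, "y": 20, "hp": 100}

d = delta(prev, curr)
print(d)  # {'x': 12}
assert apply_delta(prev, d) == curr
```

Note that this only works safely over a reliable, ordered transport like TCP; over UDP a lost delta would leave the two sides permanently out of sync unless you also track acknowledgements.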
As you're coding in lua, the only library anyone uses is luasocket (though ZMQ is gaining ground).
You're really going to have several protocols going: TCP for data that must be received (e.g., server commands such as changemap or you_got_kicked, chat messages and the like), then UDP for non-compulsory data, or data that expires quickly (e.g., character positions).
After studying TCP/UDP difference all week, I just can't decide which to use. I have to send a large amount of constant sensor data, while at the same time sending important data that can't be lost. This made a perfect split for me to use both, then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says using both causes packet/performance loss in the other. Is there any issue presented if I allow the user to choose which protocol to use (if I program both server side) instead of choosing myself? Are there any disadvantages to this?
The only other solution I came up with is to use UDP, and if there seems to be too great of loss of packets, switch to TCP (client-side).
I'd say go with TCP, unless you can't (because you have thousands of sensors, or the sensors have very low energy budgets, or whatever). If you need reliability over UDP, you'll have to roll your own reliability layer on top of it.
Try it out with TCP, and measure your performance. If it's OK, and you don't anticipate serious scaling issues, then just stay with TCP.
The article you link goes into detailed analysis on some corner cases. This probably does not apply in your situation. I would ignore this unless your own performance tests start showing problems. Start with the simplest setup (I'm guessing TCP for bulk data transfer and UDP for non-reliable sensor data), test, measure, find bottlenecks, re-factor.
The OP says:
... sending important data that can't be lost.
Therefore TCP, by definition, is the right answer over UDP.
Remember: UDP is unreliable by design (the "U" officially stands for "User", but "unreliable" is the standing joke for a reason).
Re:
The only other solution I came up with is to use UDP, and if there seems to be too great of loss of packets, switch to TCP (client-side).
Bad idea: things will tend to break at exactly the times that you don't expect them to. Your job, as an engineer, is to plan for the failure cases and mitigate them in advance. UDP will lose packets. If your data can't be lost, then don't use UDP.
I also would go with just TCP. UDP has its uses, and high-importance sensor data isn't really what comes to mind. If you can stand to lose plenty of sensor data, go with UDP, but I suspect that isn't what you want at all.
UDP is a simpler protocol than TCP, and you can still simulate TCP's features on top of UDP. If you really have custom needs, UDP is easier to tweak.
However, I'd first just use both UDP and TCP, check their behavior in a real environment, and only then decide to reimplement TCP in terms of UDP in exactly the way you need. Given proper abstraction, this should not be much work.
Maybe it would be enough for you to throttle your TCP usage not to fill up the bandwidth?
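Throttling TCP usage so it doesn't saturate the link is often done with a token bucket; a minimal, illustrative sketch (the rate and capacity numbers are arbitrary):

```python
import time

class TokenBucket:
    """Simple rate limiter: allow a send only if enough tokens (bytes)
    have accumulated since the last send."""

    def __init__(self, rate_bytes_per_sec: float, capacity: float):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity
        self.tokens = capacity          # start full
        self.last = time.monotonic()

    def consume(self, n: int) -> bool:
        """Try to spend n tokens; returns False if the caller should wait."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(rate_bytes_per_sec=1024, capacity=2048)
print(bucket.consume(512))  # True: the bucket starts full
```

Before each `send()`, check `bucket.consume(len(payload))` and sleep briefly when it returns False; that caps the bulk TCP stream's bandwidth so the important traffic isn't starved.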
If you can't lose data, and you use UDP, you are reinventing TCP, at least a significant fraction of it. Whatever you gain in performance you are prone to lose in protocol design errors, as it is hard to design a protocol.
Constant sensor data: UDP. Important data that can't be lost: TCP.
You can implement your own mechanism to confirm the delivery of UDP packets that can't be lost.
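A minimal version of such a confirmation mechanism is stop-and-wait: attach a sequence number, retransmit until the receiver echoes it back. A loopback sketch (sequence numbering and the demo receiver are illustrative assumptions, not a full protocol):

```python
import socket
import struct
import threading

def reliable_send(sock, addr, seq, payload, retries=5, timeout=0.5):
    """Stop-and-wait: retransmit until the receiver acks this sequence number."""
    pkt = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(pkt, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True   # delivery confirmed
        except socket.timeout:
            continue          # packet or ack lost: retransmit
    return False

# Loopback demo: the receiver acks by echoing the 4-byte sequence number.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

def receiver():
    data, src = recv.recvfrom(2048)
    recv.sendto(data[:4], src)

threading.Thread(target=receiver, daemon=True).start()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = reliable_send(send, addr, seq=1, payload=b"critical reading")
print(ok)  # True
```

Stop-and-wait is slow (one packet in flight at a time); doing this well means sliding windows, retransmission timers, and ordering, which is exactly the "reinventing TCP" trap the other answers warn about.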
I would say go with TCP. Also, if you're managing a lot of packet loss, the protocol of choice is your least concern. If you need important data, TCP. If the data is not important and can be supplemented later, UDP. If the data is mission-critical, TCP. UDP will be faster, but leave you with errors left and right from corrupt or non-existent packets. In the end, you'd be reinventing TCP to fix the problems.