2D MMORPG platformer: should it be P2P? - software-design

I am currently developing an MMORPG. It will be a live-action game.
I need players to see each other in real time while walking and fighting.
Now I have a big question.
Should I make the connection P2P (player to player / peer to peer), or should it be player to server and server to players?
It will of course have a TCP connection to the server anyway in order to process important things like "trades", map switches, etc.
However I am talking about the current position and animation of the players.
Should all of that be sent directly to the other players? Or should the server choose a player with a good connection and have them host the current "map"?
Sorry if this question is off topic; I am new here.
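To make the second option (everything relayed through the server) more concrete, here is roughly what I picture. This is only a minimal sketch; the port, packet layout, and relay logic are placeholders:

    import socket
    import struct

    # Placeholder packet layout: player_id (uint32), x (float), y (float), anim_id (uint16)
    UPDATE = struct.Struct("!IffH")

    def run_relay(host="0.0.0.0", port=40000):
        """Receive a position/animation update and rebroadcast it to every other player."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        players = set()  # addresses of players currently on this "map"

        while True:
            data, addr = sock.recvfrom(64)
            if len(data) != UPDATE.size:
                continue                  # ignore malformed packets
            players.add(addr)
            for peer in players:
                if peer != addr:          # everyone except the sender
                    sock.sendto(data, peer)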

Related

Stream instrumentation data losslessly through unreliable 4G

I have some data acquisition devices in industrial machinery that have 4G connectivity. Right now they stream the instrumentation data in real time to my server over a raw TCP/IP connection. But this has some problems:
1. The machinery sometimes works in places where there is low or no mobile connectivity. If there is no connectivity for too long, one of two things can happen: a) the machine gets shut down and the TCP/IP buffer is lost along with the instrumentation data, or b) the TCP/IP buffer overflows, which has the same result.
2. The same as point 1, but on the server side: due to maintenance, or if something on the server fails over the weekend when nobody will notice it while the machinery is still ON and working, we can lose data in the same way as point 1.
3. I have to manage authentication and the connections of all the clients on a single server TCP port. I have a temporary hack that works for the moment but isn't great. This is a separate problem and not the reason for this question, so take it only as context.
So I think I should code an application-layer acknowledgment, where the server tells the client when a high-level message (not the individual TCP packets) has been received and processed, and on the client side keep a buffer written to disk from which data is deleted as the server confirms it. This would solve points 1 & 2.
But I'm afraid I'm reinventing the wheel, or that I don't know the correct tools, because this problem should be fairly common, yet I fail to find anything on Google and can't find a library or tool that does this job.
What I was thinking of is something that, on the remote client, listens on a local TCP port for incoming data from the DAQ software; once it receives a message, it streams it to the server and writes it to the local disk. On the server, the tool receives the message and re-streams it over the local network to the final server, then notifies the client that it can delete the message from its disk buffer.
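Roughly, the client-side piece I have in mind would look something like this sketch (the length-prefixed framing, the "ACK" reply, and the outbox directory are placeholder assumptions, not an existing tool):

    import os
    import socket

    OUTBOX = "outbox"   # placeholder: one file per not-yet-acknowledged message

    def flush_outbox(server_addr, timeout=10.0):
        """Send every buffered message; delete a file only after the server confirms it."""
        with socket.create_connection(server_addr, timeout=timeout) as conn:
            for name in sorted(os.listdir(OUTBOX)):
                path = os.path.join(OUTBOX, name)
                with open(path, "rb") as f:
                    payload = f.read()
                # Length-prefixed framing so the server knows where each message ends.
                conn.sendall(len(payload).to_bytes(4, "big") + payload)
                # Wait for the application-level acknowledgment (assumed here to be b"ACK").
                if conn.recv(3) == b"ACK":
                    os.remove(path)    # confirmed: safe to drop the local copy
                else:
                    break              # no ACK: keep the data on disk and retry later

New messages from the DAQ software would first be written to the outbox, and the flush would be retried whenever connectivity comes back.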
So, the question is: does something like this already exist? I would prefer an already-compiled / language-agnostic solution, because I code in LabVIEW and I know there is nothing like that in its ecosystem, but I'm open to anything. If there isn't anything like that, any advice on what to do / avoid when developing it myself?
Thanks for your time.

How should a game server receive UDP packets with a defined tick rate?

I currently have a game server with a customizable tick rate, but for this example let's say the server only ticks once per second, or 1 Hz. I'm wondering what the best way is to handle incoming packets when the client send rate is faster than the server's, as my current setup doesn't seem to work.
I have my blocking UDP receive, with a timeout, inside my tick function, and it works; however, if the client tick rate is higher than the server's, not all of the packets are received, only the one being read at that moment. So essentially the server is missing packets sent by clients. The image below demonstrates my issue.
So my question is, how is this done correctly? Is there a separate thread where packets are read constantly, queued up and then the queue is processed when the server ticks or is there a better way?
The image was taken from a video (https://www.youtube.com/watch?v=KA43TocEAWs&t=7s) but demonstrates exactly what I'm explaining.
There's a bunch going on with the scenario you describe, but here's my guidance.
If you are running a server at 1 Hz, having a blocking socket prevent your main logic loop from running is not a great idea: there is a chance you won't receive messages at the rate you expect (due to packet loss, a network lag spike, or the client app closing).
A) You certainly could create another thread, continue to make blocking recv/recvfrom calls, and enqueue the packets onto a thread-safe data structure.
B) You could also just use a non-blocking socket and keep reading packets until the call reports that nothing is left (the -1 / EWOULDBLOCK case). The OS will buffer a certain (usually configurable) number of incoming messages, and starts dropping them once that buffer fills if you aren't reading. A sketch of this approach follows below.
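A minimal sketch of option B, assuming a non-blocking UDP socket drained once per tick (the port number and buffer sizes are arbitrary):

    import socket
    import time

    def drain(sock, max_packets=1024):
        """Read every datagram the OS has queued; stop when nothing is left."""
        packets = []
        for _ in range(max_packets):
            try:
                packets.append(sock.recvfrom(2048))
            except BlockingIOError:   # the "returns -1 / EWOULDBLOCK" case
                break
        return packets

    def run_server(port=30000, tick_seconds=1.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        sock.setblocking(False)
        while True:
            start = time.monotonic()
            for data, addr in drain(sock):
                pass                  # handle everything received since the last tick
            # ... rest of the tick's game logic ...
            time.sleep(max(0.0, tick_seconds - (time.monotonic() - start)))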
Either way is fine; however, for individual game clients I prefer the second, simpler approach when I know I'm on a thread that is servicing the socket at a reasonable rate (5 Hz seems pretty low, but may be appropriate for your game). If there's a chance you are stalling the servicing thread (level loading, etc.), then go with the first approach, so that a stall isn't mistaken for a disconnection because you missed sending/receiving a periodic keepalive message.
On the server side, if I'm planning on a large number of clients/data, I go to great lengths to efficiently read from sockets - using IO Completion Ports on Windows, or epoll() on Linux.
Your server could have a thread that ticks every 5 seconds, just like the client, to receive all the packets. Anything not received during that tick would be dropped, as the server was not listening for it. You can then pass the data from that thread to the server as one chunk after 5 ticks. The more reliable option, though, is to set the server to 5 Hz just like the client and handle every packet that comes in from the client on its own thread, so that it does not lock up the main thread.
For example, if the client update rate is 20, and the server tick rate is 64, the client might as well be playing on a 20 tick server.

What are the general rules for getting the 104 "Connection reset by peer" error?

Are there any general rules on when a website sends out a TCP reset, triggering the Connection reset by peer error?
Like
too many open connections
too high bandwidth use
connected for too long
…?
I'm pretty certain that there is no law governing this and that different websites/web developers have different tastes, but I would be interested to know whether there are some general rule sets (from websites, from textbooks on the subject, or from what you have been taught in school or at work) that are mostly followed.
The reason I'm asking, of course, is that I want to get around being blocked…
I'm downloading some government data that is freely available but lacks an API or anything similar. The two official ways to get it are either clicking around in some web-GIS a few thousand times, or going down the Kafkaesque path of explaining to various levels of clerks the concepts of databases, CSV files, and ZIP files, and that you can't (and wouldn't need to, if they just did what you're trying to explain) simply drive to their agency with a "giant" hard drive. So I'm trying to go the most resource-saving way for everyone involved…
A website is not "sending" a "Connection reset by peer" error. This error is generated by the OS kernel on the client side when it gets a TCP reset for an active connection. There are many reasons this TCP reset might be sent. A TCP reset might be sent by design as some kind of load limit, for example to limit the number of connections from the same IP address within a specific time as a form of DoS protection, to restrict data scraping, or to enforce some kind of fair use. There is no general rule, let alone law, for these kinds of explicit limits.
A TCP reset might also be caused by the application being overloaded, the application crashing, the system running out of resources, and so on.
And a TCP reset will happen if the client writes to a connection which the server already considers closed. This can happen, for example, with HTTP keep-alive: the server might close the connection due to inactivity at any time after the HTTP response was sent. If the client sends a new request on the same connection at the same time the server closes it, the server will reject the new request (since the connection is closed on the server's end) and send a TCP RST, causing a "connection reset by peer" at the client. The client needs to handle this situation properly by creating a new connection and sending the request again (provided that the request is not state-changing, i.e. is idempotent).
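As a sketch of that client-side handling, assuming a plain HTTP GET that is safe to repeat (the retry count and timeout are arbitrary):

    import http.client

    def get_idempotent(host, path, retries=1):
        """Fetch a resource that is safe to re-request, reopening the connection on a reset."""
        for attempt in range(retries + 1):
            conn = http.client.HTTPConnection(host, timeout=10)
            try:
                conn.request("GET", path)
                return conn.getresponse().read()
            except (ConnectionResetError, http.client.RemoteDisconnected):
                if attempt == retries:
                    raise              # out of retries: surface the error
            finally:
                conn.close()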

Should I use UDP or TCP for my Minecraft-style game?

I'm creating a 2D Minecraft-style game, where the map is stored in a two-dimensional int array.
You can place and destroy blocks and ai-controlled characters will walk around the map.
The game is made using XNA/C#.
The problem is that I don't have much experience coding networked games.
Which protocol should I use: UDP, TCP, or perhaps the Lidgren library (which uses UDP plus a reliability layer)?
Should I let the following things be done on the client, the server, or both?
AI/pathfinding
collision detection
Also, is it good practice to send destroy and place-block messages to the server?
I guess that when the client starts, it first needs to download the map, and then changes to the map will be applied in parallel to the copies on the clients and the one on the server...
Finally, should I broadcast the positions of the characters only when they change (tends towards TCP) or should I continually send them (tends towards UDP)?
I feel as if the thread below has some nice information that will be useful to you. As AmitApollo said, in a nutshell UDP is faster but less reliable. If all the information you're sending across the network is absolutely vital, then TCP might be the best choice. You could always try both and see what kinds of performance/latency hits you take. In general, most fast-paced/realtime games I've read up on have used UDP.
Android game UDP / TCP?
Everything should be verified by the server so that the game is not hackable or modifiable, whether by modifying memory addresses or through some other vulnerability.
Both AI/pathfinding and collision should be validated by the server; however, using TCP for both would add latency because of TCP's handshake, windowing, and retransmission overhead. MMOs today use UDP packets with custom congestion control and handshakes. As a first version you should simply use plain UDP packets: when packets are lost or dropped in transmission, the game will simply lag and freeze until a UDP packet gets through. Subsequent versions of your game could implement a custom acknowledgment scheme on top of UDP, such that the character pauses until the move is validated.
1. Client sends a movement request to the server over UDP.
2. Client: the character is frozen. Server: validate the coordinates on the map.
3. Server replies YES or NO to the move request.
4. Client moves the character based on the response.
This makes sure that every movement by the characters is valid. You will also need security keys or some protocol-level security so that not just anyone can send coordinates to be validated.
You may think this design will make your game laggy, but if properly designed it will be secure and free from client-side hacking. Remember to keep your UDP packets as small as possible.
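A bare-bones sketch of the server side of that round trip (the packet format, map bounds, and the one-byte verdict are made up for illustration):

    import socket
    import struct

    REQUEST = struct.Struct("!Iff")   # placeholder: player_id, requested x, requested y
    MAP_W, MAP_H = 512, 512           # placeholder map bounds, in blocks

    def run_move_validator(port=41000):
        """Answer every movement request with a single byte: 1 = YES, 0 = NO."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, addr = sock.recvfrom(64)
            if len(data) != REQUEST.size:
                continue                                # drop malformed requests
            player_id, x, y = REQUEST.unpack(data)
            # The client keeps the character frozen until this verdict arrives.
            ok = 0 <= x < MAP_W and 0 <= y < MAP_H      # plus collision, speed and auth checks
            sock.sendto(bytes([1 if ok else 0]), addr)

The matching client would send its requested coordinates, keep the character frozen, and apply the move only on a YES reply.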

Why doesn't using UDP for video-on-demand cause cross-talk?

While reading the assignment questions in "Data Communication and Networking" by Behrouz Forouzan, one of the questions asked whether using UDP for file transfer has any adverse effects, keeping the process-crash phenomenon in mind.
The solution said that if a process A asks a server X for the contents of a file and crashes soon after the request, and another process B comes up on the same port on the same machine (giving it the same socket address) and sends a request to the same server for another file, but that request is lost, then the server is unaware both of process A crashing and of B's request being lost, and hence it sends the contents of the file A asked for to B.
Why doesn't this problem occur in a video-on-demand service like YouTube or the like?
One of the closest answers I got is this, but it doesn't seem to address my problem:
When is it appropriate to use UDP instead of TCP?
UPDATE: For people who would like to read the question as given in the book, I found an online version of the relevant part; please have a look at the 8th question in the PDF:
http://ceng334.cankaya.edu.tr/uploads/files/file/network%20sample.pdf
In theory the problem could happen but in real life? Not a chance.
Let's say a user wants to stream a video from Youtube with a browser.
Browser must crash - realistically does not happen too often.
New browser instance takes the exact same source UDP port - virtually never happens.
The user decides to look at a different video - makes no sense.
While all this happens, the server side does not time out - I don't think so.
This is like arguing that TCP should be used because a packet might get dropped on the wire when two computers are connected back to back with one meter Ethernet cable.
