I am recording how long each request takes by capturing Date.now() before and after the request.
I am doing this because the built-in response time metric only records the time taken for the FIRST REQUEST and not for any redirects that it follows.
My method was working fine until I started using the rps option.
The rps option throttles how many requests per second are sent.
The problem this causes is that my manual calculation goes up even though http_req_duration stays roughly the same.
I presume this is because of the RPS throttle, i.e. it is WAITING, and that waiting inflates my Date.now() calculation - which is not an accurate reflection of what is actually happening.
How can I calculate the total time taken for a response to a request including all redirects when I am using the rps option?
I'd advise against using the RPS option; use an arrival-rate executor instead, for example constant-arrival-rate.
Alternatively, you can set the maxRedirects option to 0 so that k6 doesn't handle redirects itself. When you handle the redirects yourself, you get the Response object for each of the requests, not just the last one. You can then sum their Response.timings.duration (or whatever you care about) and add the result to a custom metric; that value will not contain any artificial delays caused by --rps.
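A minimal sketch of that approach, assuming k6's standard http and metrics modules (the target URL, metric name, rps value, and redirect cap are illustrative):

import http from 'k6/http';
import { Trend } from 'k6/metrics';

export const options = {
    maxRedirects: 0, // follow redirects in the script instead of letting k6 do it
    rps: 50,
};

// custom Trend metric; the second argument marks it as a time value
const totalDuration = new Trend('total_duration_with_redirects', true);

export default function () {
    let url = 'https://test.k6.io/'; // placeholder target
    let sum = 0;

    for (let i = 0; i < 10; i++) { // cap the redirect chain to avoid loops
        const res = http.get(url);
        sum += res.timings.duration; // per-request server time, unaffected by the --rps wait

        const location = res.headers['Location'];
        if (res.status >= 300 && res.status < 400 && location) {
            url = location; // assumes the Location header is an absolute URL
        } else {
            break;
        }
    }

    totalDuration.add(sum);
}

Summing timings.duration rather than wall-clock Date.now() deltas is what keeps the RPS wait out of the measurement.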
Background: I am developing a small game and use the player's latency to do lag compensation. The game is open source, so at the moment it is a very easy task to reverse engineer the system and delay one's responses to artificially inflate one's reported latency, resulting in possibly unfair advantages.
My current strategy for latency retrieval is:
Every fixed interval I send a message labeled as "ping" to a player. (This has nothing to do with ICMP)
This ping message consists of a special "ping" opcode and a payload with a sequence number
Once the client receives said message, he sends back one with a "pong" opcode and a payload with the same sequence number
When the server receives the message labeled as "pong", it calculates how much time passed in between sending and receiving. This is the round trip time
Our latency is the rtt / 2
In pseudocode:
Server:
function now() {
return current UTC time in millis
}
i = 0
function nextSequence() {
return i++
}
sendingTimestamps = []
function onPingEvent() {
id = nextSequence()
sendingTimestamps[id] = now()
sendPingMessage(id)
}
function onPongReceived(id) {
received = now()
sent = sendingTimestamps[id]
rtt = received - sent
latency = rtt / 2
}
Client:
function onPingReceived(id) {
sendPongMessage(id)
}
As you can see, it's very easy for the client to just add a delay in his code to inflate his reported latency.
Is there a better way to get a client's latency in order to leave them less room for cheating?
The answer below is a summary of topics discussed in the comments, gathered in one place.
Lag compensation should rely on a precise timestamp of the event rather than on average packet delay
Transit time may vary drastically even for two successive packets. The suggested approach of measuring average latency and assuming that each received packet was sent "latency" ms ago is far too inaccurate for lag compensation. The following scheme should be applied instead:
The server starts emulating the world on its side and sends a START command to all clients. Clients start emulating the world too and count ticks from its creation. Whenever an event occurs on the client side, the client sends it to the server with a timestamp, like "user pressed fire at tick #183". The server's emulation of the game is far ahead due to packet transit time, but the server can "go back in time" to handle the user's order and resolve the consequences.
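A rough, self-contained sketch of that rewind step (the history length, snapshot contents, and hit test are illustrative, not taken from any particular engine):

const HISTORY_TICKS = 64;                 // how far back the server can rewind
const history = new Array(HISTORY_TICKS); // ring buffer of past world snapshots

let world = { tick: 0, target: { x: 0 } };

function serverTick() {
    world.tick++;
    world.target.x += 1; // stand-in for the real simulation step
    history[world.tick % HISTORY_TICKS] = { tick: world.tick, targetX: world.target.x };
}

function onClientEvent(event) {
    // event = { tick: 183, shotX: 42 }  i.e. "user pressed fire at tick #183"
    const past = history[event.tick % HISTORY_TICKS];
    if (!past || past.tick !== event.tick) {
        return false; // claimed tick is too old (or bogus); drop or clamp it
    }
    // Resolve the shot against where the target WAS at that tick, not where it is now.
    return Math.abs(past.targetX - event.shotX) < 1;
}

// e.g. run 200 ticks, then judge a shot the client says it fired at tick #183
for (let i = 0; i < 200; i++) serverTick();
console.log(onClientEvent({ tick: 183, shotX: 183 })); // true: the target was at x = 183 then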
Time stamps and events still can be faked
AFAIU the problem of verifying client input is generally unsolvable. Any algorithm implemented in the client can be recreated to fake events/timestamps/packets. Closed code can be reverse engineered, so that is not an answer. Even widely played games like Counter-Strike or Overwatch have cheaters, despite being developed by large companies which, I bet, have departments focused solely on game security. Some companies develop antivirus-like modules which check game file integrity or hashes of parts of a RAM snapshot, but even that can be bypassed.
The question is the amount of effort required to fake the algorithm. The more effort needed, the fewer fakers there will be. A trivial timestamp verification is the following:
If you receive event #2 in the TCP stream after event #1, but its timestamp is earlier than event #1's, then it's faked.
If a timestamp is far behind the server's time, warn and kick the player for an enormously bad delay. If it's a real player, the game is unplayable for them anyway; otherwise you've kicked a hacker. CS servers do this, if I'm not mistaken.
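A minimal sketch of those two checks (the threshold, field names, and the assumption that client timestamps and server time share a common base are all illustrative):

const MAX_LAG_MS = 5000;        // kick if the claimed timestamp lags the server by more than this
let lastEventTimestamp = -Infinity;

function verifyEvent(event, serverNowMs) {
    // event = { timestamp: ... } arriving in order over the TCP stream
    if (event.timestamp < lastEventTimestamp) {
        return 'faked';         // later in the stream, but earlier timestamp
    }
    if (serverNowMs - event.timestamp > MAX_LAG_MS) {
        return 'kick';          // enormously bad delay: unplayable anyway, or a cheater
    }
    lastEventTimestamp = event.timestamp;
    return 'ok';
}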
I am looking for possible techniques to gracefully handle race conditions in network protocol design. I find that in some cases, it is particularly hard to synchronize two nodes to enter a specific protocol state. Here is an example protocol with such a problem.
Let's say A and B are in an ESTABLISHED state and exchange data. All messages sent by A or B use a monotonically increasing sequence number, such that A can know the order of the messages sent by B, and B can know the order of the messages sent by A. At any time in this state, either A or B can send an ACTION_1 message to the other, in order to enter a different state where a strictly sequential exchange of messages needs to happen:
send ACTION_1
recv ACTION_2
send ACTION_3
However, it is possible that both A and B send the ACTION_1 message at the same time, causing both of them to receive an ACTION_1 message, while they would expect to receive an ACTION_2 message as a result of sending ACTION_1.
Here are a few possible ways this could be handled:
1) Change state to ACTION_1_SENT after sending ACTION_1. If we receive ACTION_1 in this state, we detect the race condition and proceed to arbitrate who gets to start the sequence. However, I have no idea how to arbitrate this fairly. Since both ends are likely to detect the race condition at about the same time, any action that follows is prone to similar race conditions, such as sending ACTION_1 again.
2) Duplicate the entire sequence of messages. If we receive ACTION_1 in the ACTION_1_SENT state, we include the data of the other ACTION_1 message in the ACTION_2 message, etc. This can only work if there is no need to decide who is the "owner" of the action, since both ends will end up doing the same action to each other.
3) Use absolute time stamps, but then, accurate time synchronization is not an easy thing at all.
4) Use Lamport clocks, but from what I understand these are only useful for events that are causally related. Since in this case the ACTION_1 messages are not causally related, I don't see how they could help figure out which one happened first so that the second one can be discarded.
5) Use some predefined way of discarding one of the two messages on receipt by both ends. However, I cannot find a way to do this that is not flawed. A naive idea would be to include a random number on both sides and select the message with the higher number as the "winner", discarding the one with the lower number. However, we have a tie if both numbers are equal, and then we need another way to recover. A possible improvement would be to handle arbitration once at connection time, repeating a similar sequence until one of the two sides "wins" and is marked as the favourite. Every time a tie happens afterwards, the favourite wins.
Does anybody have further ideas on how to handle this?
EDIT:
Here is the current solution I came up with. Since I couldn't find a 100% safe way to prevent ties, I decided to have my protocol elect a "favorite" during the connection sequence. Electing this favorite requires breaking possible ties, but in this case the protocol allows multiple attempts at electing the favorite until a consensus is reached. After the favorite is elected, all further ties are resolved by favoring the elected favorite. This isolates the problem of possible ties to a single part of the protocol.
As for fairness in the election process, I wrote something rather simple based on two values sent in each of the client/server packets. In this case, the value is a sequence number starting at a random value, but it could be anything, as long as the values are random enough to be fair.
When the client and server have to resolve a conflict, they both call this function with their own value (sendVal) and the other side's value (recvVal). The favorite calls the function with the favorite parameter set to TRUE. The function is guaranteed to give the opposite result on both ends, so the tie can be broken without retransmitting a new message.
BOOL ResolveConflict(BOOL favorite, UINT32 sendVal, UINT32 recvVal)
{
    BOOL winner;
    UINT32 sendDiff; /* unsigned: these distances can exceed INT_MAX */
    UINT32 recvDiff;
    UINT32 xorVal;

    xorVal = sendVal ^ recvVal;

    /* distance of each side's value from the shared XOR value */
    sendDiff = (xorVal < sendVal) ? sendVal - xorVal : xorVal - sendVal;
    recvDiff = (xorVal < recvVal) ? recvVal - xorVal : xorVal - recvVal;

    if (sendDiff != recvDiff)
        winner = (sendDiff < recvDiff) ? TRUE : FALSE; /* closest value to xorVal wins */
    else
        winner = favorite; /* break the tie: the favorite wins */

    return winner;
}
Let's say that both ends enter the ACTION_1_SENT state after sending the ACTION_1 message. Both will receive the ACTION_1 message in the ACTION_1_SENT state, but only one will win. The loser accepts the ACTION_1 message and enters the ACTION_1_RCVD state, while the winner discards the incoming ACTION_1 message. The rest of the sequence continues as if the loser had never sent ACTION_1 in a race condition with the winner.
Let me know what you think, and how this could be further improved.
To me, the whole idea that this ACTION_1 - ACTION_2 - ACTION_3 handshake must occur in sequence with no other message intervening is very onerous, and not at all in line with the reality of networks (or distributed systems in general). The complexity of some of your proposed solutions gives reason to step back and rethink.
There are all kinds of complicating factors when dealing with systems distributed over a network: packets which don't arrive, arrive late, arrive out of order, arrive duplicated, clocks which are out of sync, clocks which go backwards sometimes, nodes which crash/reboot, etc. etc. You would like your protocol to be robust under any of these adverse conditions, and you would like to know with certainty that it is robust. That means making it simple enough that you can think through all the possible cases that may occur.
It also means abandoning the idea that there will always be "one true state" shared by all nodes, and the idea that you can make things happen in a very controlled, precise, "clockwork" sequence. You want to design for the case where the nodes do not agree on their shared state, and make the system self-healing under that condition. You also must assume that any possible message may occur in any order at all.
In this case, the problem is claiming "ownership" of a shared clipboard. Here's a basic question you need to think through first:
1) If all the nodes involved cannot communicate at some point in time, should a node which is trying to claim ownership just go ahead and behave as if it is the owner? (This means the system doesn't freeze when the network is down, but it means you will have multiple "owners" at times, and there will be divergent changes to the clipboard which have to be merged or otherwise "fixed up" later.)
2) Or, should no node ever assume it is the owner unless it receives confirmation from all other nodes? (This means the system will freeze sometimes, or just respond very slowly, but you will never have weird situations with divergent changes.)
If your answer is #1: don't focus so much on the protocol for claiming ownership. Come up with something simple which reduces the chances that two nodes will both become "owner" at the same time, but be very explicit that there can be more than one owner. Put more effort into the procedure for resolving divergence when it does happen. Think that part through extra carefully and make sure that the multiple owners will always converge. There should be no case where they can get stuck in an infinite loop trying to converge but failing.
If your answer is #2: here be dragons! You are trying to do something which butts up against some fundamental limitations.
Be very explicit that there is a state where a node is "seeking ownership", but has not obtained it yet.
When a node is seeking ownership, I would say that it should send a request to all other nodes, at intervals (in case another one misses the first request). Put a unique identifier on each such request, which is repeated in the reply (so delayed replies are not misinterpreted as applying to a request sent later).
To become owner, a node should receive a positive reply from all other nodes within a certain period of time. During that wait period, it should refuse to grant ownership to any other node. On the other hand, if a node has agreed to grant ownership to another node, it should not request ownership for another period of time (which must be somewhat longer).
If a node thinks it is owner, it should notify the others, and repeat the notification periodically.
You need to deal with the situation where two nodes both try to seek ownership at the same time, and both NAK (refuse ownership to) each other. You have to avoid a situation where they keep timing out, retrying, and then NAKing each other again (meaning that nobody would ever get ownership).
You could use exponential backoff, or you could make a simple tie-breaking rule (it doesn't have to be fair, since this should be a rare occurrence). Give each node a priority (you will have to figure out how to derive the priorities), and say that if a node which is seeking ownership receives a request for ownership from a higher-priority node, it will immediately stop seeking ownership and grant it to the high-priority node instead.
This will not result in more than one node becoming owner, because if the high-priority node had previously ACKed the request sent by the low-priority node, it would not send a request of its own until enough time had passed that it was sure its previous ACK was no longer valid.
You also have to consider what happens if a node becomes owner, and then "goes dark" -- stops responding. At what point are other nodes allowed to assume that ownership is "up for grabs" again? This is a very sticky issue, and I suspect you will not find any solution which eliminates the possibility of having multiple owners at the same time.
Probably, all the nodes will need to "ping" each other from time to time. (Not referring to an ICMP echo, but something built in to your own protocol.) If the clipboard owner can't reach the others for some period of time, it must assume that it is no longer owner. And if the others can't reach the owner for a longer period of time, they can assume that ownership is available and can be requested.
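To make the request/ACK/NAK and priority parts concrete (not the liveness/"goes dark" handling), here is a self-contained sketch with in-memory message delivery standing in for the network; the priorities, timeouts, and message names are all illustrative:

const nodes = [];

function makeNode(id, priority) {
    const node = {
        id, priority,
        state: 'IDLE',          // IDLE | SEEKING | OWNER
        grantedUntil: 0,        // after granting ownership, don't seek until this time
        pendingRequestId: null, // unique id echoed in replies, so stale replies are ignored
        acks: new Set(),
    };
    nodes.push(node);
    return node;
}

function send(to, msg) { handle(to, msg); } // stand-in for the network

function broadcast(from, msg) {
    for (const n of nodes) if (n !== from) send(n, { ...msg, from });
}

function seekOwnership(node, now) {
    if (now < node.grantedUntil) return; // we recently granted ownership to someone else
    node.state = 'SEEKING';
    node.pendingRequestId = node.id + '-' + now;
    node.acks.clear();
    broadcast(node, { type: 'REQUEST', requestId: node.pendingRequestId, priority: node.priority });
}

function handle(node, msg) {
    const now = Date.now();
    if (msg.type === 'REQUEST') {
        const busy = node.state === 'OWNER' || now < node.grantedUntil;
        if (busy || (node.state === 'SEEKING' && msg.priority <= node.priority)) {
            send(msg.from, { type: 'NAK', requestId: msg.requestId, from: node });
        } else {
            // yield to a higher-priority seeker, grant, and promise not to seek for a while
            if (node.state === 'SEEKING') node.state = 'IDLE';
            node.grantedUntil = now + 5000;
            send(msg.from, { type: 'ACK', requestId: msg.requestId, from: node });
        }
    } else if (msg.type === 'ACK') {
        if (node.state === 'SEEKING' && msg.requestId === node.pendingRequestId) {
            node.acks.add(msg.from.id);
            if (node.acks.size === nodes.length - 1) node.state = 'OWNER';
        }
    } // on NAK: keep waiting, retry with backoff after a timeout (not shown)
}

// e.g. 'a' asks first and gets ACKs from everyone; 'c' then declines to seek
// because it has just granted ownership to 'a'
const a = makeNode('a', 1), b = makeNode('b', 2), c = makeNode('c', 3);
seekOwnership(a, Date.now());
seekOwnership(c, Date.now());
console.log(a.state, c.state); // OWNER IDLE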
Here is a simplified answer for the specific protocol of interest here.
In this case, there is only a client and a server, communicating over TCP. The goal of the protocol is to synchronize two system clipboards. The regular state when outside of a particular sequence is simply "CLIPBOARD_ESTABLISHED".
Whenever one of the two systems pastes something onto its clipboard, it sends a ClipboardFormatListReq message, and transitions to the CLIPBOARD_FORMAT_LIST_REQ_SENT state. This message contains a sequence number that is incremented when sending the ClipboardFormatListReq message. Under normal circumstances, no race condition occurs and a ClipboardFormatListRsp message is sent back to acknowledge the new sequence number and owner. The list contained in the request is used to expose clipboard data formats offered by the owner, and any of these formats can be requested by an application on the remote system.
When an application requests one of the data formats from the clipboard owner, a ClipboardFormatDataReq message is sent with the sequence number and a format id from the list, and the state is changed to CLIPBOARD_FORMAT_DATA_REQ_SENT. Under normal circumstances, there is no change of clipboard ownership during that time, and the data is returned in the ClipboardFormatDataRsp message. A timer should be used to time out if no response comes back fast enough from the other system, and to abort the sequence if it takes too long.
Now, for the special cases:
If we receive ClipboardFormatListReq in the CLIPBOARD_FORMAT_LIST_REQ_SENT state, it means both systems are trying to gain ownership at the same time. Only one owner should be selected; in this case, we can keep it simple and elect the client as the default winner. With the client as the default owner, the server should respond to the client with ClipboardFormatListRsp and consider the client as the new owner.
If we receive ClipboardFormatDataReq in the CLIPBOARD_FORMAT_LIST_REQ_SENT state, it means we have just received a request for data from the previous list of data formats, since we have just sent a request to become the new owner with a new list of data formats. We can respond with a failure right away; the sequence numbers will not match anyway.
Etc, etc. The main issue I was trying to solve here is fast recovery from such states, without going into a loop of retrying until it works. The main issue with immediately retrying is that it is likely to happen with timing that causes new race conditions. We can solve the issue by expecting such inconsistent states, as long as we can move back to proper protocol states when detecting them. The other part of the problem is electing a "winner" that will have its request accepted without resending new messages. A default winner can be designated up front, such as the client or the server, or some sort of random voting system can be implemented with a default favorite to break ties.
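A small sketch of the server-side handling described above (the state and message names follow the text; the transport, payload fields, and clipboard stub are made up):

let state = 'CLIPBOARD_ESTABLISHED';
let ownerIsClient = false;
let sequenceNumber = 0;
const localClipboard = { 13: 'hello' }; // stand-in for the real clipboard, keyed by format id

function onMessageFromClient(msg, sendToClient) {
    if (msg.type === 'ClipboardFormatListReq') {
        if (state === 'CLIPBOARD_FORMAT_LIST_REQ_SENT') {
            // Both sides claimed ownership at once: the client wins by convention,
            // so the server abandons its own pending request.
            ownerIsClient = true;
        }
        sequenceNumber = msg.sequenceNumber;
        state = 'CLIPBOARD_ESTABLISHED';
        sendToClient({ type: 'ClipboardFormatListRsp', sequenceNumber });
    } else if (msg.type === 'ClipboardFormatDataReq') {
        if (state === 'CLIPBOARD_FORMAT_LIST_REQ_SENT' || msg.sequenceNumber !== sequenceNumber) {
            // The request targets the previous format list; fail it right away.
            sendToClient({ type: 'ClipboardFormatDataRsp', ok: false });
        } else {
            sendToClient({ type: 'ClipboardFormatDataRsp', ok: true, data: localClipboard[msg.formatId] });
        }
    }
}

// e.g. simulate the race: the server has just sent its own ClipboardFormatListReq
state = 'CLIPBOARD_FORMAT_LIST_REQ_SENT';
onMessageFromClient({ type: 'ClipboardFormatListReq', sequenceNumber: 7 }, console.log);
console.log(ownerIsClient); // true: the client is elected the new owner by default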
I've read some articles about client-side prediction and server reconciliation, but I'm missing some parts. I get the client-side prediction part, but I don't understand exactly how reconciliation is done. I'll take these two pieces of well-known articles as reference:
http://www.gabrielgambetta.com/fpm2.html
#2. So applying client-side prediction again, the client can calculate the “present” state of the game based on the last authoritative state sent by the server, plus the inputs the server hasn’t processed yet
http://gafferongames.com/networking-for-game-programmers/what-every-programmer-needs-to-know-about-game-networking/
In effect the client invisibly “rewinds and replays” the last n frames of local player character movement while holding the rest of the world fixed
Ok, I take it that the client receives an acknowledgement from the server, but how exactly are the inputs re-applied? I can interpret this in two ways.
From the client's point of view, where the game loop is executed 'x' times per second (frames per second):
First: The non-processed inputs are re-applied in the same frame, so here the expression "invisibly rewind and replay" fits perfectly, because in the end what you see on the screen is the result of the last input re-applied.
I don't see the benefit of doing this, because I see no difference between re-applying the last n inputs from the server update to the present time and keeping the client state as it was before the update; we know in advance that the result will be the same.
Second: The inputs are re-applied one by one in the consecutive frames. A human being couldn't notice a few frames being replayed, but I cannot help thinking that if the client were experiencing significant latency he could notice himself going back to the past and replaying the last 'n' frames.
Can anyone point me in the right direction, please? Thanks
I know it's been quite a while since you've posted this question, but it is on google's feed, so I'll answer.
I don’t see the benefit of doing this because I see no difference between re-applying the last n inputs from the server update to the present time and keeping the client state as it was before the update, we know in advance that the result will be the same.
The whole point of reconciliation is to sync with the server. We don't really know what the result of our actions will be. We just predict it. Sometimes the result actually is different and we still want to get an image of what's going on on the server.
The first way is definitely the way to go.
The second way doesn't really make any sense. Remember that the player receives updates on a regular basis. That means that with a latency of 200 ms he will see his character about 200 ms in the past all the time.
I don’t see the benefit of doing this because I see no difference between re-applying the last n inputs from the server update to the present time and keeping the client state as it was before the update, we know in advance that the result will be the same.
You are correct if and only if the predicted GameState matches the server GameState (the one we just received), in which case there is no reason to do any reconciliation. However, if they don't match, reapplying the inputs will give a different result. That's when you apply server reconciliation.
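A compact sketch of that reconciliation step, using a 1-D position and simple move inputs (all names are illustrative):

let inputSeq = 0;
let pendingInputs = [];               // inputs the server has not acknowledged yet
let predictedState = { x: 0 };

function applyInput(state, input) {
    return { x: state.x + input.dx };
}

function localMove(dx) {
    const input = { seq: ++inputSeq, dx };
    pendingInputs.push(input);
    predictedState = applyInput(predictedState, input);     // client-side prediction
    // sendToServer(input) would go here
}

function onServerUpdate(update) {
    // update = { lastProcessedSeq, state } from the authoritative server
    pendingInputs = pendingInputs.filter(i => i.seq > update.lastProcessedSeq);
    predictedState = { ...update.state };                    // rewind to the server state...
    for (const input of pendingInputs) {
        predictedState = applyInput(predictedState, input);  // ...and replay, all in the same frame
    }
}

// e.g. three local moves, then the server acknowledges only the first one
localMove(1); localMove(1); localMove(1);
onServerUpdate({ lastProcessedSeq: 1, state: { x: 1 } });
console.log(predictedState.x); // 3: inputs #2 and #3 were re-applied on top of the server's x = 1

If the server had corrected the position (say x = 0 because a wall blocked the move), the replay would land somewhere different from the pure prediction, which is exactly the case reconciliation exists for.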
I'm adding networked multiplayer to a game I've made. When the server sends an update packet to the client, I include a timestamp so that the client knows exactly when that information is valid. However, the server computer and the client computer might have their clocks set to different times (maybe even just a few seconds difference), so the timestamp from the server needs to be translated to the client's local time.
So, I'd like to know the best way to calculate the time difference between the server and the client. Currently, the client pings the server for a time stamp during initialization, takes note of when the request was sent and when it was answered, and guesses that the time stamp was generated roughly halfway along the journey. The client also runs 10 of these trials and takes the average.
But, the problem is that I'm getting different results over repeated runs of the program. Within each set of 10, each measurement rarely diverges by more than 400 milliseconds, which might be acceptable. But if I wait a few minutes between each run of the program, the resulting averages might disagree by as much as 2 seconds, which is not acceptable.
Is there a better way to figure out the difference between the clocks of two networked devices? Or is there at least a way to tweak my algorithm to yield more accurate results?
Details that may or may not be relevant: The devices are iPod Touches communicating over Bluetooth. I'm measuring pings to be anywhere from 50-200 milliseconds. I can't ask the users to sync up their clocks. :)
Update: With the help of the answers below, I wrote an Objective-C class to handle this. I posted it on my blog: http://scooops.blogspot.com/2010/09/timesync-was-time-sink.html
I recently took a one-hour class on this and it wasn't long enough, but I'll try to boil it down to get you pointed in the right direction. Get ready for a little algebra.
Let s equal the time according to the server. Let c equal the time according to the client. Let d = s - c. d is what is added to the client's time to correct it to the server's time, and is what we need to solve for.
First we send a packet from the server to the client with a timestamp. When that packet is received at the client, it stores the difference between the given timestamp and its own clock as t1.
The client then sends a packet to the server with its own timestamp. The server sends the difference between the timestamp and its own clock back to the client as t2.
Note that t1 and t2 both include the "travel time" t of the packet plus the time difference between the two clocks d. Assuming for the moment that the travel time is the same in both directions, we now have two equations in two unknowns, which can be solved:
t1 = t - d
t2 = t + d
t1 + d = t2 - d
d = (t2 - t1)/2
The trick comes because the travel time is not always constant, as evidenced by your pings between 50 and 200 ms. It turns out to be most accurate to use the timestamps with the minimum ping time. That's because your ping time is the sum of the "bare metal" delay plus any delays spent waiting in router queues. Every once in a while, a lucky packet gets through without any queuing delays, so you use that minimum time as the most repeatable time.
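A small sketch of that selection step (the sample data is made up; in practice each sample's t1 and t2 come from one timestamp exchange as described above):

// Pick the sample with the lowest round-trip time and use it to compute
// d = (t2 - t1) / 2, the amount to add to the client clock to get server time.
function estimateOffset(samples) {
    // each sample: { t1, t2, rtt }, rtt being the measured round-trip time for that exchange
    const best = samples.reduce((a, b) => (b.rtt < a.rtt ? b : a));
    return (best.t2 - best.t1) / 2;
}

const samples = [
    { t1: -880, t2: 1120, rtt: 240 }, // queued somewhere along the way
    { t1: -940, t2: 1060, rtt: 120 },
    { t1: -970, t2: 1030, rtt: 60 },  // the "lucky" packet with the least queuing delay
];
console.log(estimateOffset(samples)); // 1000: the client clock is about 1 s behind the server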
Also keep in mind that clocks run at different rates. For example, I can reset my computer at home to the millisecond and a day later it will be 8 seconds slow. That means you have to continually readjust d. You can use the slope of various values of d computed over time to calculate your drift and compensate for it in between measurements, but that's beyond the scope of an answer here.
Hope that helps point you in the right direction.
Your algorithm will not be much more accurate unless you can use some statistical methods. First of all, 10 is probably not sufficient. The first and simplest change would be to gather 100 transit time samples and toss out the x longest and shortest.
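A tiny sketch of that filtering step (the trim count and sample values are illustrative):

function trimmedMeanOffset(offsetSamples, trim) {
    // offsetSamples: one clock-offset estimate per exchange;
    // trim: how many of the largest and smallest values to discard before averaging
    const sorted = [...offsetSamples].sort((a, b) => a - b);
    const kept = sorted.slice(trim, sorted.length - trim);
    return kept.reduce((sum, v) => sum + v, 0) / kept.length;
}

// e.g. 100 samples would be gathered in practice; a handful stand in here
console.log(trimmedMeanOffset([1002, 998, 1000, 1180, 860, 1001], 1)); // 1000.25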
Another thing to add would be that both clients send their own timestamp in each packet. Then you can also calculate how different their clocks are and check the average difference between the clocks.
You can also look at SNTP and NTP implementations, as these protocols are designed specifically to do this.