I have a simple Statsd -> Carbon -> Graphite back end setup for one of my processes.
Graphite is configured very simply at 60s:1440m (1 minute accuracy for 24h), and the counter I'm using is just summed up. Nothing fancy done there.
However, it's only displaying a single 10s segment of every minute and I can't seem to figure out why. If I send one statsd packet every minute, I have to send it at :50-:59 of the minute, or it gets overwritten. Sending packets once every 10s means only one of those packets will get saved for storage. I think it's something to do with a simple config option, but haven't had any luck figuring it out thus far. Help?
Fixed. It turns out the statsd backend was flushing aggregated packets to Graphite every 10s, while my Graphite retention was set to 60s resolution, so each flush overwrote the previous one within the same 60-second bucket and only the last 10s of every minute survived.
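For anyone who hits the same thing: the fix is to make statsd's flush interval no finer than Graphite's smallest retention (or vice versa). A sketch of matching settings, assuming the stock config.js format and the usual ^stats.* naming, so adjust to your own setup:

    // statsd config.js: flush once per minute to match a 60s retention
    {
      graphiteHost: "127.0.0.1",
      graphitePort: 2003,
      flushInterval: 60000   // milliseconds
    }

    # graphite storage-schemas.conf: 60s resolution kept for 24 hours
    [stats]
    pattern = ^stats.*
    retentions = 60s:24h

The other direction also works: keep statsd at its default 10s flush and set the retention to 10s:24h instead.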
I want to measure latency across two Linux boxes connected directly via a 10-gig optical fibre. Specifically, I want to measure the round-trip time (RTT) of a packet, with both timestamps taken on the same machine: the client sends a packet to the server and records the current time, the server echoes the packet back, and a second timestamp is taken when the packet is received. The total latency is the difference between the two timestamps.
I would like to measure latency for both UDP and TCP.
I have tried sockperf, which claims to do something similar, but I want a very simple single-file program that I can use for benchmarking while understanding it fully.
Can you share links to a simple program that does this? Please note that my interest is only in latency, not in throughput.
Form a data buffer with a timestamp in the header and dummy data in the payload, send it over a TCP or UDP socket, and have the other end echo it back. Calculate the elapsed time from the header timestamp; that gives you an accurate RTT. (Syncing the clocks of the two Linux boxes is only needed if you also want one-way latency; for RTT both timestamps come from the same machine.)
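As an illustration of that echo approach, here is a minimal single-file sketch in Python (the port and payload size are arbitrary choices; a TCP version is the same loop over a connected stream socket with TCP_NODELAY set). Run it as "server" on one box and "client <server-ip>" on the other:

    # udp_rtt.py - minimal UDP echo RTT measurement (illustrative sketch)
    import socket, sys, time

    PORT = 9000           # arbitrary example port
    PAYLOAD = b"x" * 64   # small dummy payload

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", PORT))
        while True:
            data, addr = s.recvfrom(2048)
            s.sendto(data, addr)              # echo straight back

    def client(host, count=1000):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rtts = []
        for _ in range(count):
            t0 = time.perf_counter()          # first timestamp, taken on the client
            s.sendto(PAYLOAD, (host, PORT))
            s.recvfrom(2048)                  # block until the echo comes back
            rtts.append(time.perf_counter() - t0)
        rtts.sort()
        print("min %.1f us  median %.1f us  max %.1f us" %
              (rtts[0] * 1e6, rtts[len(rtts) // 2] * 1e6, rtts[-1] * 1e6))

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])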
Because of the geographic distance between server and client, network latency can vary a lot, so I want to get the "pure" request-processing time of the service without the network latency.
I plan to take the TCP connect time as the network latency; as far as I understand, that time depends mostly on the network.
The main idea is to compute:
TCP connect time,
TCP first-packet receive time,
Get "pure" service time = TCP first-packet receive time (waiting time) - TCP connect time.
I divide the TCP connect time by 2 because it actually contains two request-responses (the 3-way handshake).
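To make the two measurements concrete, here is a rough sketch of what I mean (shown in Python rather than my Erlang code, purely for illustration; gen_tcp:connect and gen_tcp:recv give the same two timestamps):

    # illustrative sketch: connect time vs. time-to-first-byte
    import socket, time

    def measure(host, port, request):
        t0 = time.perf_counter()
        s = socket.create_connection((host, port))   # 3-way handshake
        t_connect = time.perf_counter() - t0          # roughly one network RTT

        t1 = time.perf_counter()
        s.sendall(request)
        s.recv(1)                                     # block until the first byte arrives
        t_first_byte = time.perf_counter() - t1       # RTT + server processing
        s.close()

        # estimated "pure" service time, per the approach described above
        return t_first_byte - t_connect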
I have two questions:
Should I compute the receive time of all TCP packets instead of only the first packet?
Is this method okay in general?
PS: As a tool I use Erlang's gen_tcp. I can show the code.
If anything, I'd say the "pure" service time = TCP first-packet receive time - TCP connect time; you had written it the other way round.
A possible answer to your first question: you should ideally compute at least some sort of average over the pure service time of many packets, rather than just the first one.
Ideally you would also track worst-case, average-case, and best-case service times.
To answer the second question, we would need to know why you want the pure service time only. Since this is a network application, network latencies (connection time etc.) should arguably be included in the "response time", not just the pure service time. That is my view based on the information given.
I have worked on a similar question when working for a network performance monitoring vendor in the past.
IMHO, there are a certain number of questions to be asked before proceeding:
Connection time and latency: if you base your network latency metric on the TCP connect time, beware that the handshake has 3 steps: the client sends a TCP SYN, the server responds with a SYN-ACK, and the client sends a final ACK to set up the connection. The three packets span 1.5 RTT (round trip time) of travel, but what the client actually observes as connect time is roughly one RTT (SYN out, SYN-ACK back), since it does not wait after sending the final ACK. This validates taking the first two steps of the TCP setup into account, as you mention.
Taking later TCP exchanges into account: while this at first sounds like a great idea for continuing to evaluate network latency over the course of the session, it is a lot trickier in practice. Here is why: 1. Not all packets have to be acknowledged (RFC 1122, or https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment), which generates false measurements when it happens, so you will need a heuristic to exclude these from your calculations. 2. Not all systems treat acknowledging packets as a high-priority task, so some high values will pollute your network latency data and simply reflect the load level of the server, for example.
So if you use only the first (and reliable) measurement, you may miss some network delay variation (especially in apps using long-lasting TCP sessions).
I'm running my app on my local machine and I'm making ajax requests from Chrome. When I make a request, I see that the network tab shows 2 numbers in the Time column.
How should I interpret these numbers? The app makes a database call and then processes the data before sending it to the client. The first row shows 133/106; does this mean that once the request hits the local machine, it only takes 27ms to process on the server?
Thanks.
Latency, the time between making the request and the server's first response, is shown in the lighter shade within each bar.
106ms is the Latency, 133ms is the Time, and the remaining 27ms is the time spent receiving the response data.
Have a look at the Chrome Network tab's timeline view.
Waiting is time spent waiting for initial response from server. Receiving is time spent receiving the response data.
So, roughly,
Latency = Waiting
Time = Waiting + Receiving
Below is an Apache Bench (ab) run of 10K requests with 50 concurrent threads.
I need help understanding the results: does anything stand out that might point to something blocking and restricting more requests per second?
I'm looking at the connection time section and see 'waiting' and 'processing'. It shows the mean waiting time is 208, the mean connect time is 0, and processing is 208, yet the total is 208. Can someone explain this to me, as it doesn't make much sense to me.
Connect time is the time it took ab to establish a connection to your server. You are probably running it on the same server or within the LAN, so your connect time is 0.
Processing time is the total time the server took to process the request and send the complete response.
Wait time is the time between sending the request and receiving the first byte of the response.
Again, since you are running on the same server and the response is small, your processing time == wait time.
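Put with the numbers from the question: Total = Connect + Processing = 0 + 208 = 208ms, and because the response is tiny, Waiting ≈ Processing = 208ms, which is why all three columns show the same value.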
For a real benchmark, try ab from multiple points near your target market to get a realistic idea of the latency. Right now, all the information you have is the wait time.
This question is getting old, but I've run into the same problem so I might as well contribute an answer.
You might benefit from disabling either Nagle's algorithm on the agent side or delayed ACK on the server side. The two can interact badly and cause an unwanted delay; as in my case, that is probably why your minimum time is exactly 200ms.
I can't confirm, but my understanding is that the problem is cross-platform since it's part of the TCP spec. It might be just for quick connections with a small amount of data sent and received, though I've seen reports of issues for larger transfers too. Maybe somebody who knows TCP better can pitch in.
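If it helps, disabling Nagle on the client side is a one-line socket option in most languages; a minimal Python illustration (the same TCP_NODELAY option exists in C via setsockopt):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small writes go out immediately
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect(("example.com", 80))

Disabling delayed ACK on the server side is more OS-specific (Linux has TCP_QUICKACK, but it has to be re-armed after each receive), so the client-side option is usually the easier knob to turn.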
Reference:
http://en.wikipedia.org/wiki/TCP_delayed_acknowledgment#Problems
http://blogs.technet.com/b/nettracer/archive/2013/01/05/tcp-delayed-ack-combined-with-nagle-algorithm-can-badly-impact-communication-performance.aspx
I am submitting POST requests to an external server running IIS6. This is a time-critical request: I want to ensure that my request is processed at a specific time (say 10:00:00 AM), no earlier, and that at that specific time my request is assigned the highest priority over other requests. Would any of the following help:
Sending most of the message a few seconds early and sending the last byte or so a few milliseconds prior to 10:00:00. Not sure if this will help as I will be competing with other requests that come in around that time. Will IIS assign a higher priority to my request based on how long I am connected?
Anything that I can add to the message header to tell the server to queue my request and process only at a specific time?
Any known hacks that I can leverage?
No. HTTP is not a real-time protocol, and it usually runs on top of TCP/IP, which is not a real-time protocol either. While you can get near real-time behaviour out of such an architecture, it's far from simple; don't take my word for it, go read the source code for xntpd.
Having said that, you give no details of the actual level of precision you require, but your post implies it could be up to a second, which is a very long time for submitting a request to a webserver. On the other hand, scheduling such an event to fire client-side with this level of accuracy is very difficult: I've not tried measuring the accuracy of the scheduler on MS Windows NT, but elsewhere I'd only expect it to be accurate to about 5 minutes. So you'd need to schedule the job to start 5 minutes early, then sleep for 10 milliseconds at a time until the target time rolls around.
But then again, thinking about why you need to run any job with any sort of timing accuracy makes me think that you're trying to solve the problem the wrong way.
C.
It sounds like you need more of a scheduler system than HTTP. HTTP is a stateless protocol: you send a request to IIS, you get a response.
What you might want to consider is accepting the request, storing the information you require somewhere (a database), and then using some sort of scheduler (cron jobs, scheduled tasks) to action that information at the desired time.
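As a minimal sketch of that store-then-schedule pattern (the table layout, URL handling, and 10ms polling interval here are all made up for illustration):

    # illustrative sketch: record the job from the web handler, fire it from a worker
    import sqlite3, time, urllib.request

    db = sqlite3.connect("jobs.db")
    db.execute("CREATE TABLE IF NOT EXISTS jobs "
               "(run_at REAL, url TEXT, body BLOB, done INTEGER DEFAULT 0)")

    def enqueue(run_at_epoch, url, body):
        # called when the request arrives, instead of doing the work immediately
        db.execute("INSERT INTO jobs (run_at, url, body) VALUES (?, ?, ?)",
                   (run_at_epoch, url, body))
        db.commit()

    def worker():
        # run as a scheduled task / service; polls for jobs that are due
        while True:
            row = db.execute("SELECT rowid, url, body FROM jobs "
                             "WHERE done = 0 AND run_at <= ?", (time.time(),)).fetchone()
            if row:
                rowid, url, body = row
                urllib.request.urlopen(url, data=body)    # fire the stored POST
                db.execute("UPDATE jobs SET done = 1 WHERE rowid = ?", (rowid,))
                db.commit()
            else:
                time.sleep(0.01)    # poll often, for tighter timing than cron alone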
What you want you probably can't achieve with IIS alone; it's not what it was designed to do.