Why is throughput for a server a function of upstream response time? - http

I have an application server which does nothing but send requests to an upstream service, wait, and then respond to the client with the data received from the upstream service. The microservice takes Xms to respond, or sometimes Yms, where X << Y. The client response time is (in steady state) essentially equal to the time the upstream microservice takes to process the request; any additional latency is negligible, as the client, application server, and upstream microservice are all located in the same datacenter and communicate over private IPs with very large network bandwidth.
When the client starts sending requests at a rate of N, the application server becomes overloaded and response times spike dramatically as the server leaves steady state. The client and the microservice have minimal CPU usage, while the application server is at maximum CPU usage. (The application server runs on a much weaker bare-metal machine than the other two services; this is a testing environment used to monitor the application server's behavior under stress.)
Intuitively, I would expect N to be the same value regardless of how long the microservice takes to respond, but I'm finding that the maximum throughput in steady state is significantly lower when the microservice takes Yms than when it takes only Xms. The number of ephemeral ports in use when this happens is also well below the limit. Since the amount of reading and writing being done is the same, and memory usage is the same, I can't figure out why N is a function of the microservice's execution time. Also, no, the input/output of the services is the same regardless of the execution time, so the number of bytes being written doesn't change. Since the only difference is the execution time, which just means more TCP connections are in use while responses are outstanding, I'm not sure why maximum throughput is affected. From my understanding, the cost of a TCP connection is negligible once it has been established.
Am I missing something?
Thanks,
Additional details:
The services use HTTP/1.1 with keep-alive and no pipelining.
I also should've mentioned that I'm using an IO-thread model. If I were using a thread per request I could understand this behavior, but with only one thread per core it's confusing.
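To make the arithmetic behind "more TCP connections in use while responses are outstanding" concrete, here is a rough sketch in Python with purely hypothetical numbers (X, Y, and N above are unspecified); it just restates Little's law, average in-flight requests = arrival rate * average response time:

    # Little's law: average in-flight requests = arrival rate * average response time.
    # All numbers are hypothetical; X, Y, and N in the question are unspecified.
    def in_flight(rate_per_sec, response_time_ms):
        return rate_per_sec * (response_time_ms / 1000.0)

    print(in_flight(1000, 20))    # 20.0  -> ~20 requests in flight at 20 ms upstream latency
    print(in_flight(1000, 200))   # 200.0 -> ~200 in flight at 200 ms, at the same offered rate

    # Conversely, if the application server can only sustain some number C of requests
    # in flight at once, its steady-state throughput is capped at C / response_time,
    # which shrinks as the upstream slows down.
    def max_throughput(max_in_flight, response_time_ms):
        return max_in_flight / (response_time_ms / 1000.0)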

Related

Maximum number of gRPC connections benchmark

I want to know how many clients can have an open connection to a gRPC server running on an average machine. The clients should connect to the server and open a stream, so I am looking for a benchmark of how many gRPC streams a gRPC server can handle.
There is currently no such benchmark to my knowledge; however, I will attempt to answer what I think is your question.
In terms of the number of gRPC connections, your typical gRPC server will be bounded by the amount of memory those connections take up. Based on data we've collected in the past, a channel will take up on the order of 40 KB of memory on the server side. So taking into account the amount of memory your server has available, you can estimate the max number of gRPC connections that your server will accept.
If you want to dynamically control how much memory gets used (and, thus, how many connections get accepted), gRPC has a ResourceQuota object that you can configure [1]. If accepting a connection would put the server over the resource quota, your server will instead refuse the connection. This provides a much better alternative to OOM'ing.
[1] https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_resource_quota.html
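As a back-of-the-envelope sketch of that memory-based estimate (Python, using the ~40 KB-per-channel figure above; the 4 GiB budget is just a placeholder):

    PER_CHANNEL_BYTES = 40 * 1024        # ~40 KB per gRPC channel, per the figure above
    memory_budget_bytes = 4 * 1024**3    # placeholder: 4 GiB set aside for connections

    max_connections = memory_budget_bytes // PER_CHANNEL_BYTES
    print(max_connections)               # 104857, i.e. roughly 100k connections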

High response time vs queuing

Say I have a web service, used internally by other web services, with an average response time of 1 minute.
What are the pros and cons of such a service giving "synchronous" responses versus having the service return the id of the request, process it in the background, and make the clients poll for results?
Are there any cons to HTTP connections that stay active for more than one minute? Does TCP's default keep-alive matter here?
Depending on your application, it may matter. A couple of things are worth mentioning:
HTTP is synchronous
There is a widespread misconception that HTTP is asynchronous. HTTP is a synchronous protocol, but your client can deal with it asynchronously. E.g. when you call a service over HTTP, your HTTP client may schedule the call on a background thread (async). However, the HTTP call itself waits until either it times out or the response comes back; during all that time, the call chain is waiting synchronously (see the sketch at the end of this answer).
Sockets
HTTP uses sockets, and there is a hard limit on sockets. Every HTTP connection (if a new one is created every time) opens a new socket. If you have hundreds of requests at a time, you can imagine how many HTTP calls are scheduled synchronously, and you may run out of sockets. I'm not sure about other operating systems, but on Windows, even once you are done with a request, the socket is not disposed of straight away; it lingers in TIME_WAIT for a couple of minutes.
Network Connectivity
Keeping an HTTP connection alive for a long time is not recommended. What if you lose network connectivity, partially or completely? Your HTTP request would time out and you wouldn't know the status of the work at all.
Keeping all of this in mind, it's better to schedule long-running tasks on a background process.
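A small sketch of that first point, "synchronous call, handled asynchronously by the client" (Python; assumes the requests library, and the URL and timeout are placeholders). The HTTP call itself still blocks its worker thread until the response arrives or the timeout fires; only the scheduling is asynchronous:

    from concurrent.futures import ThreadPoolExecutor

    import requests

    def call_service():
        # Blocks this worker thread for up to 90 seconds, then raises on timeout.
        return requests.get("http://internal-service/slow-report", timeout=90)

    with ThreadPoolExecutor(max_workers=4) as pool:
        future = pool.submit(call_service)   # the caller is free to do other work here...
        response = future.result()           # ...but eventually waits on the blocked call
        print(response.status_code)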
If you keep the user waiting while your long job is running on the server, you are tying up a valuable HTTP connection for the whole duration.
Best practice from a RESTful point of view is to reply with HTTP 202 (Accepted) and return a response containing a link to poll (see the sketch at the end of this answer).
If you want to hang the client while waiting, you should set a request timeout at the client end.
If you have firewalls in between, they might drop connections that are inactive for some time.
Higher Response Throughput
Typically, you want your OLTP front end (the web server) to respond as quickly as possible. Since you're queuing the task in the background, your web server can handle more requests, which results in higher response throughput and processing capacity.
More Memory Friendly
Queuing long-running tasks as background jobs via messaging queues prevents abusive usage of web server memory. This is good because it raises the out-of-memory threshold of your application.
More Resilient to Server Crash
If you queue the task in the background and something goes wrong, the job can be moved to a dead-letter queue, which helps you eventually fix the problem and re-process the requests that caused unhandled exceptions.
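To make the 202-and-poll flow concrete, here is a minimal sketch in Python using Flask, an in-memory job table, and made-up /jobs endpoints (all of these are assumptions, not part of the question); a real service would hand the work to a proper queue and a persistent store:

    import threading
    import time
    import uuid

    from flask import Flask, jsonify, url_for

    app = Flask(__name__)
    jobs = {}  # job_id -> {"status": ..., "result": ...}

    def long_running_task(job_id):
        time.sleep(60)  # stand-in for the ~1 minute of real work
        jobs[job_id] = {"status": "done", "result": "some result"}

    @app.route("/jobs", methods=["POST"])
    def submit_job():
        job_id = str(uuid.uuid4())
        jobs[job_id] = {"status": "pending", "result": None}
        threading.Thread(target=long_running_task, args=(job_id,), daemon=True).start()
        # Reply immediately with 202 Accepted and a link the client can poll.
        return jsonify({"job": job_id}), 202, {"Location": url_for("job_status", job_id=job_id)}

    @app.route("/jobs/<job_id>", methods=["GET"])
    def job_status(job_id):
        job = jobs.get(job_id)
        if job is None:
            return jsonify({"error": "unknown job"}), 404
        return jsonify(job), 200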

TCP Persistent Connections with HTTP?

So I thought that with HTTP/1.1 your TCP connections are sustained for as long as you are communicating with that server? How does it actually work, and how does the TCP connection know when you are done writing into the socket? Any information would be awesome; I have done research but I can't find what I'm looking for short of reading the RFC.
The typical implementation is that the HTTP server will have a timeout (typically called KeepAliveTimeout or such) after which it will close an idle connection.
On a server which reserves a thread or an entire process per connection (such as Apache with the usual mpm_prefork or mpm_worker), keepalives are usually disabled entirely or kept quite short (a few seconds). For an event-based server such as nginx, which uses much less memory per connection, the keepalive timeout can be left at a much higher value (typically a minute or so).
See section 8.1 of RFC 2616. Basically, HTTP/1.1 treats all connections as persistent, but the language of the RFC doesn't mandate this behaviour, since it uses the word "SHOULD". If it were mandated, it would use "MUST".
However, the RFC does not specify in detail how an implementation does this. As can be seen from the HTTP persistent connection page on Wikipedia, Apache's default timeout (beyond which it reclaims idle persistent connections for other uses) may be as low as five seconds (though this is almost certainly configurable, given all the other knobs and dials that Apache provides).
In other words, it's meant for numerous requests to the same address within a short time frame, so as not to waste time opening and closing a bucket-load of sessions where one will do. Increasing this timeout is not a "free ride", since resources are tied up while the connection is held open. In an environment where you expect lots of incoming clients, tying up these resources can be fatal to performance.
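As a client-side illustration of connection reuse (a sketch in Python, assuming the requests library; the URL is a placeholder):

    import requests

    # A Session keeps the underlying TCP connection in a pool, so consecutive requests
    # to the same host reuse it instead of opening a new connection; the server is
    # still free to close it once its keep-alive timeout expires.
    with requests.Session() as session:
        for _ in range(3):
            response = session.get("http://example.com/")   # placeholder URL
            print(response.status_code, response.headers.get("Connection"))

    # By contrast, a bare requests.get() per call has no shared pool, so each request
    # may set up and tear down its own TCP connection.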

NoSQL / Redis Scaling Theory

Is it more efficient for a key-value data store such as Redis to handle X number of requests over 1 client connection or 1 request per client over X number of client connections?
In theory, reusing a connection means less work is spent on connection overhead, so it is technically more efficient. However, in practice, latency means that using a single connection is dramatically slower, with the server sitting idle most of the time.
Redis performance is almost never limited by the CPU - it can quite easily serve 100 requests on separate connections in the time it would otherwise spend waiting for the second request on a single connection.
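A rough sketch of that trade-off in Python (assumes redis-py and a Redis instance on localhost; the key names and counts are arbitrary):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import redis

    N = 1000

    # X requests over 1 connection: each GET waits for a full round trip before the
    # next one is sent, so the server sits idle for most of the elapsed time.
    single = redis.Redis()
    start = time.time()
    for i in range(N):
        single.get(f"key:{i}")
    print("one connection, sequential:", time.time() - start)

    # 1 request per client over many connections: the round trips overlap, so the
    # server stays busy even though each individual request is unchanged.
    def fetch(i):
        redis.Redis().get(f"key:{i}")   # one client (and connection) per call, kept simple

    start = time.time()
    with ThreadPoolExecutor(max_workers=50) as pool:
        list(pool.map(fetch, range(N)))
    print("many connections, concurrent:", time.time() - start)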

What does concurrent requests really mean?

When we talk about capacity of a web application, we often mention the concurrent requests it could handle.
As another question of mine discussed, Ethernet uses TDM (time-division multiplexing) and no two signals can pass along the wire simultaneously. So if the web server is connected to the outside world through an Ethernet connection, there will be literally no concurrent requests at all; all requests will come in one after another.
But if the web server is connected to the outside world through something like a wireless network card, I believe multiple signals could arrive at the same time over electromagnetic waves. Only in that situation would there be real concurrent requests to talk about.
Am I right on this?
Thanks.
I imagine "concurrent requests" for a web application doesn't get down to the link level. It's more a question of the processing of a request by the application and how many requests arrive during that processing.
For example, if a request takes on average 2 seconds to fulfill (from receiving it at the web server, to processing it through the application, to sending back the response), then the application could need to handle a lot of concurrent requests if it gets many requests per second: at, say, 50 requests per second, roughly 100 requests would be in flight at any moment.
The requests need to overlap and be handled concurrently, otherwise the queue of requests would just fill up indefinitely. This may seem like common sense, but for a lot of web applications it's a real concern because the flood of requests can bog down a resource for the application, such as a database. Thus, if the application has poor database interactions (overly complex procedures, poor indexing/optimization, a slow link to a database shared by many other applications, etc.) then that creates a bottleneck which limits the number of concurrent requests the application can handle, even though the application itself should be able to handle them.
Imagine an HTTP server listening on port 80; what happens is:
a client connects to the server to request some page; it is connecting from some origin IP address, using some origin local port.
the OS (actually the network stack) looks at the incoming request's destination IP (since the server may have more than one NIC) and destination port (80) and verifies that some application is registered to handle data on that port (the HTTP server). The combination of four numbers (origin IP, origin port, destination IP, port 80) uniquely identifies a connection. If such a connection does not exist yet, a new one is added to the network stack's internal table and a connection request is passed on to the HTTP server's listening socket. From then on, the network stack just passes data for that connection on to the application.
Multiple clients can be sending requests; for each one, the above happens. So from the network perspective, everything happens serially, since data arrives one packet at a time.
From the software perspective, the HTTP server is listening for incoming requests. The number of requests it can have queued before clients start getting errors is determined by the programmer based on the hardware capacity (this is the first bit of concurrency: there can be multiple requests waiting to be processed). For each one it will create a new socket (as fast as possible, in order to continue emptying the request queue) and let the actual processing of the request be done by another part of the application (different threads). These processing routines will (ideally) spend most of their time waiting for data to arrive and react (ideally) quickly to it.
Since the processing of data is usually many times faster than the network I/O, the server can handle many requests while processing network traffic, even if the hardware consists of only one processor. Multiple processors increase this capability. So from the software perspective, everything happens concurrently.
How the actual processing of the data is implemented is where the key to performance lies (you want it to be as efficient as possible). Several possibilities exist (async socket operations as provided by the Socket class, thread pools, dedicated threads, the new parallel features from .NET 4).
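A minimal sketch of that flow in Python (the port, backlog, and worker count are arbitrary): the OS queues not-yet-accepted connections up to the listen backlog, the accept loop drains that queue as quickly as it can, and the actual handling runs on worker threads so that many requests are in progress at once:

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def handle(conn, addr):
        # Stand-in for real request processing: read (part of) the request,
        # send a canned response, close the connection.
        with conn:
            conn.recv(4096)
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server, \
            ThreadPoolExecutor(max_workers=32) as pool:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 8080))
        server.listen(128)                   # queue of connections waiting to be accepted
        while True:
            conn, addr = server.accept()     # pull the next queued connection
            pool.submit(handle, conn, addr)  # hand it to a worker; keep draining the queue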
It's true that no two packets can arrive at the exact same time (unless multiple network cards are in use, per Gabe's comment). However, a web request usually requires a number of packets. The arrival of these packets is interspersed when multiple requests come in at nearly the same time (whether over wired or wireless access). Also, the processing of these requests can overlap.
Add multi-threading (or multiple processors / cores) to the picture, and you can see how lengthy operations such as reading from a database (which requires a lot of waiting around for a response) can easily overlap even though the individual packets are arriving in a serial fashion.
Edit: Added note above to incorporate Gabe's feedback.
