I have a Wookie-based app accepting requests behind nginx. The app works in general, but I'm running into some issues with parallel requests. For instance, when the app accepts a long-running request (R1) to generate a report from a dataset in the database (MongoDB, via cl-mongo), it appears unresponsive to any subsequent request (R2) that arrives before the response to R1 starts being sent over the network.
The client reports an error communicating with the server for R2, but after the server finishes with R1 and finally sends the response, it tries to process R2 (as is evident from the debugging output): it performs the proper routing and so on, only too late.
Putting blackbird promises around the request-processing routines didn't help (and was probably superfluous anyway, since Wookie is designed to be async).
So what's the proper way to handle this? I'm probably okay with clients waiting for a long time for their responses (via very long timeouts), but it would be much better to process short requests in parallel.
The idea of cl-async's underlying libraries (libevent2, libuv) is to use the I/O wait time of one task (request) as CPU time for another task (request). So it is just a mechanism for not wasting I/O wait time. The only thing happening in parallel is I/O, and at most one task uses the CPU at a time (per thread/process, depending on the implementation).
If your requests need on average x ms of CPU time, then as soon as you have n requests in parallel, where n is the number of cores, your (n+1)st request has to wait at least x ms, regardless of whether you use a threaded or an event-based server.
You can of course spawn more server processes and use load balancing to make use of all available cores.
Related
We have a bus reservation system running in GKE in which reservations are created by different threads. Because of that, CRUD Java methods can sometimes run simultaneously on the same bus, with the result that only the LAST of the simultaneous updates is saved in our DB (the other simultaneous updates are lost).
Even if the probability is low (the simultaneous updates need to be really close, within 1-2 seconds), we need to avoid this. My question is about how to approach a solution:
Lock the bus object and return an error to the other simultaneous requests (see the optimistic-locking sketch after this list)
An in-memory map or a Redis cache to track the bus requests
Use GCP Pub/Sub, Kafka or RabbitMQ as a queue system.
Try to focus the effort on reducing the simultaneous time window (from 1-2 seconds down to milliseconds)
Others?
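For option 1, one standard way to get the "fail the other simultaneous updates" semantics is JPA/Hibernate optimistic locking with a version column, assuming a JPA-backed relational store. A minimal sketch; the Bus entity below is hypothetical, not our real code:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Hypothetical Bus entity: the @Version column makes the database reject
// any update that was computed against a stale copy of the row.
@Entity
public class Bus {

    @Id
    private Long id;

    private int reservedSeats;

    // JPA increments this on every successful update; a concurrent update
    // carrying an old version fails with OptimisticLockException.
    @Version
    private long version;

    public void reserveSeats(int count) {
        this.reservedSeats += count;
    }
}
```

The losing request then sees an OptimisticLockException (wrapped by Spring as ObjectOptimisticLockingFailureException), which can be turned into an error response or a retry, so no update is silently lost.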
Also, we are worried that GKE request-handling scalability may become an issue in the future. If we manage a considerably higher number of buses, will we need to implement a queue system between the client and the server? Or will the GKE load balancer and Ambassador already manage it for us? And if we do need a queue system in the future, could it also be used for the collision problem we are facing now?
Last, the reservation requests from the client often take a while. Therefore, we are changing the requests to be handled asynchronously, with a long-polling approach from the client to learn the task status. Could we link this solution to the current problem, for example by using the Redis cache or the queue system to track the task status? Or should we try to keep the requests synchronous and focus on reducing the processing time (which may be quite difficult)?
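For the task-status part, the minimal shape of the idea is a status record keyed by a task id that the long-polling endpoint reads. The sketch below uses an in-memory map purely for illustration; in a multi-pod GKE deployment the map would be replaced by Redis (or by the queue system's own state) so every pod sees the same statuses. All names here are made up:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical status store: in a real multi-pod deployment this map would
// be backed by Redis (or the queue system) instead of local memory.
public class ReservationTasks {
    public enum Status { PENDING, RUNNING, DONE, FAILED }

    private final Map<String, Status> statuses = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Accept the reservation request and return a task id immediately.
    public String submit(Runnable reservationWork) {
        String taskId = UUID.randomUUID().toString();
        statuses.put(taskId, Status.PENDING);
        workers.submit(() -> {
            statuses.put(taskId, Status.RUNNING);
            try {
                reservationWork.run();
                statuses.put(taskId, Status.DONE);
            } catch (RuntimeException e) {
                statuses.put(taskId, Status.FAILED);
            }
        });
        return taskId;
    }

    // The long-polling endpoint simply reads the current status.
    public Status poll(String taskId) {
        return statuses.getOrDefault(taskId, Status.FAILED);
    }
}
```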
I have a Tomcat server that connects to another Tableau server. I need to make about 25 GET calls from Tomcat to Tableau. I am now trying to thread this and let each thread create its own HTTP connection object and make the call. On my local setup (Tomcat local, Tableau remote), I notice that in this case each of my threads takes about 10 seconds on average, so roughly 10 seconds in total.
However, if I do this sequentially, each request takes 2 seconds, for a total of 50.
My question is: when making the requests in parallel, why does each one take more than 2 seconds, when it takes just 2 seconds done sequentially?
Does this have anything to do with the maximum number of concurrent connections to the same domain from one client (browser)? But here the requests go out from my Tomcat server, not from a browser.
If yes, what is the default rule, and is there any way to change it?
In my opinion, it is most likely the context-switching overhead that the system incurs for each request; that is why you see longer times for individual requests (compared to one sequential thread) but a significant gain in overall processing time.
It makes sense to go for parallel processing when the context-switching overhead is negligible compared to the time taken by the overall activity.
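If the slowdown instead comes from a per-host connection limit in the HTTP client, as the question suspects, the pool can usually be widened. A minimal sketch, assuming the calls go through Apache HttpClient 4.x, whose pooling connection manager defaults to only 2 connections per route (the Tableau URL below is a placeholder):

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class TableauCalls {
    public static void main(String[] args) throws Exception {
        // Allow up to 25 concurrent connections to the same Tableau host
        // instead of the default 2 per route.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(25);
        cm.setDefaultMaxPerRoute(25);

        try (CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build()) {
            // One GET per view; the URL is a placeholder, not a real endpoint.
            try (CloseableHttpResponse response =
                         client.execute(new HttpGet("https://tableau.example.com/api/..."))) {
                System.out.println(response.getStatusLine());
            }
        }
    }
}
```

If the 25 threads each build their own client with default settings, the per-route limit would not apply across them, so this is only relevant when a shared client/pool is used; it is offered as a thing to check, not a diagnosis.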
I'm running a Google Cloud Compute VM as my application server for an app that's available on iOS and Android. The server runs Django within uWSGI, fronted with nginx. The communication between uWSGI and nginx happens through a unix file socket.
Recently I started noticing timeouts at the client end. After a bit of experimentation, I found that uWSGI sometimes errors out while writing data to the file socket. For example, a sample request that returns about 200 KB of JSON data takes about 1 sec for Django to compute, but the UNIX socket seems to take another 1-2 secs, which seems too high for a 200 KB response. If the client expects a response within 2 secs, this often leads to a write error at uWSGI (as shown in the screenshot below). When I increase the 'max-time' parameter (the timeout) at the client end, it goes through smoothly.
I want to know if there are any configuration changes that can make reading and writing on a UNIX socket faster. 200 KB is a very modest size for a JSON response from my server, so I won't be able to bring it down. And I can't have a timeout of more than 2 secs at my client (iOS or Android), for business reasons.
Several Unix entities are represented by files but are not files at all; pipes and sockets are two examples.
So writing to and reading from a Unix socket is not bound to file-system I/O and does not share file-system response times. In fact, a Unix socket is one of the fastest forms of IPC, more efficient than a TCP socket, since it does not involve network I/O at all.
That stated, here are some hints on how to solve your particular problem:
Evaluate your app for performance issues. Profile it and check where it spends too much time. Usually, I/O is the main villain in performance issues; bad algorithms and linear searches over long lists are also common culprits.
Check the configuration of both the web server and your application gateway.
Check process scheduling. If everything is running on the same box, process concurrency may be an issue under heavy load. Be sure all processes are running with proper priorities.
Good luck!
A bit of a long description below, but it is quite a tricky problem. I have tried to cover what we do know about it in order to narrow down the search. This is more of an ongoing investigation than a single question, but I think it may help others as well. Please add information in comments, or correct me if you think I am wrong about some of the assumptions below.
UPDATE 19/2, 2013: We have cleared up some of the question marks, and I have a theory about the main problem, which I'll describe below. I'm not ready to write a "solved" response yet, though.
UPDATE 24/4, 2013: Things have been stable in production for a while now (though I believe it is temporary), and I think it is due to two reasons: 1) the port increase, and 2) a reduced number of outgoing (forwarded) requests. I'll continue this update further down in the correct context.
We are currently investigating in our production environment why our IIS web server does not scale when too many outgoing asynchronous web service requests are being made (one incoming request may trigger multiple outgoing requests).
CPU is only at 20%, but we receive HTTP 503 errors on incoming requests, and many outgoing web requests get the following exception: "SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full". Clearly there is a scalability bottleneck somewhere, and we need to find out what it is and whether it can be solved by configuration.
Application context:
We are running the IIS 7.5 integrated managed pipeline with .NET 4.5 on a Windows 2008 R2 64-bit operating system. We use only one worker process in IIS. Hardware varies slightly, but the machine used for examining the error is an Intel Xeon with 8 cores (16 hyper-threaded).
We use both asynchronous and synchronous web requests. The asynchronous ones use the new .NET async support so that each incoming request makes multiple HTTP requests in the application to other servers over persisted (keep-alive) TCP connections. Synchronous request execution times are low, 0-32 ms (longer times occur due to thread context switching). For the asynchronous requests, execution time can be up to 120 ms before the requests are aborted.
Normally each server serves up to ~1,000 incoming requests/sec. Outgoing requests run at ~300 requests/sec, rising to ~600 requests/sec when the problem starts to arise. The problems occur only when outgoing async requests are enabled on the server and we go above a certain level of outgoing requests (~600 req/s).
Possible solutions to the problem:
Searching the Internet for this problem reveals a plethora of candidate solutions. However, they depend very much on the versions of .NET, IIS and the operating system, so it takes time to find something that applies to our context (anno 2013).
Below is a list of solution candidates and the conclusions we have reached so far with regard to our configuration context. I have categorised the detected problem areas so far into the following main categories:
Some queue(s) fill up
Problems with TCP connections and ports (UPDATE 19/2, 2013: This is the problem)
Too slow allocation of resources
Memory problems (UPDATE 19/2, 2013: This is most likely another problem)
1) Some queue(s) fill up
The outgoing asynchronous request exception message does indicate that some queue or buffer has filled up, but it does not say which queue/buffer. Via the IIS forum (and the blog post referenced there) I have been able to distinguish 4 of possibly 6 (or more) different types of queues in the request pipeline, labeled A-F below.
It should be stated, though, that of all the queues defined below, we see for certain that the ThreadPool performance counter Requests Queued (1.B) shows very high values during the problematic load. So the cause of the problem is likely at the .NET level and not below it (C-F).
1.A) .NET Framework level queue?
We use the .NET Framework class WebClient for issuing the asynchronous call (async support), as opposed to HttpClient, which in our experience had the same issue but with a far lower req/s threshold. We do not know whether the .NET Framework implementation hides any internal queue(s) above the thread pool. We don't think this is the case.
1.B) .NET Thread Pool
The thread pool acts as a natural queue, since the (default) .NET thread scheduler picks threads from the thread pool to execute the work.
Performance counter: [ASP.NET v4.0.30319].[Requests Queued].
Configuration possibilities:
(ApplicationPool) maxConcurrentRequestsPerCPU should be 5000 (instead of the previous default of 12). So in our case it should allow 5000*16 = 80,000 concurrent requests, which should be more than enough for our scenario.
(processModel) autoConfig = true/false, which allows some thread-pool related configuration to be set according to the machine configuration. We use true, which is a potential error candidate since these values may be set inappropriately for our (high) needs.
1.C) Global, process wide, native queue (IIS integrated mode only)
If the thread pool is full, requests start to pile up in this native (unmanaged) queue.
Performance counter:[ASP.NET v4.0.30319].[Requests in Native Queue]
Configuration possibilities: ????
1.D) HTTP.sys queue
This queue is not the same queue as 1.C) above. Here's an explanation as stated to me: "The HTTP.sys kernel queue is essentially a completion port on which user-mode (IIS) receives requests from kernel-mode (HTTP.sys). It has a queue limit, and when that is exceeded you will receive a 503 status code. The HTTPErr log will also indicate that this happened by logging a 503 status and QueueFull."
Performance counter: I have not been able to find any performance counter for this queue, but by enabling the IIS HTTPErr log, it should be possible to detect if this queue gets flooded.
Configuration possibilities: This is set in IIS on the application pool, advanced setting: Queue Length. The default value is 1000. I have seen recommendations to increase it to 10,000, though trying this increase has not solved our issue.
1.E) Operating System unknown queue(s)?
Although unlikely, I guess the OS could actually have a queue somewhere in between the network card buffer and the HTTP.sys queue.
1.F) Network card buffer:
As requests arrive at the network card, it is natural that they are placed in some buffer to be picked up by an OS kernel thread. Since this is kernel-level execution, and thus fast, it is not likely to be the culprit.
Windows Performance Counter: [Network Interface].[Packets Received Discarded] using the network card instance.
Configuration possibilities: ????
2) Problems with TCP connections and ports
This is a candidate that pops up here and there, though our outgoing (async) TCP requests are made over persisted (keep-alive) TCP connections. So as the traffic grows, the number of ephemeral ports in use should really only grow because of the incoming requests. And we know for sure that the problem only arises when we have outgoing requests enabled.
However, the problem may still arise because the port is allocated for the whole duration of the request. An outgoing request may take as long as 120 ms to execute (before the .NET Task (thread) is canceled), which could mean that ports are allocated for a longer period. Analyzing the Windows performance counters verifies this assumption, since TCPv4 [Connections Established] goes from a normal 2,000-3,000 up to peaks of almost 12,000 in total when the problem occurs.
We have verified that the configured maximum number of TCP connections is set to the default of 16,384. In that case it may not be the problem, although we are dangerously close to the limit.
When we run netstat on the server, it mostly returns without any output at all; TcpView likewise shows very few items at first. If we let TcpView run for a while, it soon starts to show new (incoming) connections quite rapidly (say 25 connections/sec). Almost all connections are in the TIME_WAIT state from the beginning, suggesting that they have already completed and are waiting for clean-up. Do those connections use ephemeral ports? The local port is always 80 and the remote port keeps increasing. We wanted to use TcpView to see the outgoing connections, but we cannot see them listed at all, which is very strange. Can't these two tools handle the number of connections we are having?
(To be continued.... But please fill in with info if you know it… )
Furthermore, as a side note: it was suggested in the blog post "ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0" that ServicePointManager.DefaultConnectionLimit should be set to int.MaxValue, which otherwise could be a problem. But in .NET 4.5 this is already the default.
UPDATE 19/2, 2013:
It is reasonable to assume that we did in fact hit the limit of 16,384 ports. We doubled the number of ports on all but one server, and only the old server ran into problems when we hit the old peak load of outgoing requests. So why did TCPv4 [Connections Established] never show us a number higher than ~12,000 at problem times? My theory: most likely, although not yet established as fact, the performance counter TCPv4 [Connections Established] is not equivalent to the number of ports currently allocated. I have not had time to catch up on my TCP state studying yet, but I am guessing that there are more TCP states than what "Connections Established" shows which would still leave a port occupied. Since we cannot use the "Connections Established" performance counter to detect the danger of running out of ports, it is important that we find some other way of detecting when we approach this maximum port range. And as described in the text above, we are not able to use either netstat or the TcpView application for this on our production servers. This is a problem! (I'll probably write more about it in an upcoming answer to this post.)
The number of ports on Windows is restricted to a maximum of 65,535 (although the first ~1,000 should probably not be used). But it should be possible to avoid the problem of running out of ports by decreasing the time spent in the TCP state TIME_WAIT (240 seconds by default), as described in numerous places. That should free up ports faster. I was at first a bit hesitant about doing this, since we use both long-running database queries and WCF calls over TCP, and I wouldn't want to decrease the time constraint. Although I haven't caught up on my TCP state-machine reading yet, I think it might not be a problem after all. The TIME_WAIT state, I think, is only there to allow for the handshake of a proper shutdown with the client, so the actual data transfer on an existing TCP connection should not time out due to this limit. Worst case, the client is not shut down properly and instead needs to time out. I guess not all browsers may implement this correctly, and it could possibly be a problem on the client side only. Though I am guessing a bit here...
END UPDATE 19/2, 2013
UPDATE 24/4, 2013:
We have increased the number of ports to the maximum value. At the same time, we are not getting as many forwarded outgoing requests as before. These two factors in combination should be the reason why we have not had any incidents. However, it is only temporary, since the number of outgoing requests is bound to increase again on these servers. The problem thus lies, I think, in the fact that the port for an incoming request has to remain open during the whole time frame of the forwarded request's response. In our application, the cancellation limit for these forwarded requests is 120 ms, which can be compared with the normal <1 ms it takes to handle a non-forwarded request. So in essence, I believe the finite number of ports is the major scalability bottleneck on such high-throughput servers (>1000 requests/sec on ~16-core machines) as the ones we are using. This, in combination with the GC work during cache reloads (see below), makes the server especially vulnerable.
END UPDATE 24/4
3) Too slow allocation of resources
Our performance counters show that the number of queued requests in the thread pool (1.B) fluctuates a lot during the problem periods. So potentially we have a dynamic situation in which the queue length starts to oscillate due to changes in the environment, for instance if flood-protection mechanisms are activated when traffic floods in. As it is, we have a number of such mechanisms:
3.A) Web load balancer
When things go really bad and the server responds with an HTTP 503 error, the load balancer automatically removes the web server from active production for a 15-second period. This means the other servers take the increased load during that time frame. During the "cooling period", the server may finish serving its requests, and it is automatically reinstated when the load balancer does its next ping. Of course, this is only good as long as all servers don't have problems at once. Luckily, so far, we have not been in that situation.
3.B) Application specific valve
In the web application, we have our own constructed valve (yes, it is a "valve", not a "value"), triggered by the Windows performance counter for queued requests in the thread pool. A thread, started in Application_Start, checks this performance counter value every second. If the value exceeds 2000, no new outgoing traffic is initiated. The next second, if the queue value is below 2000, outgoing traffic starts again.
The strange thing is that this has not kept us from reaching the error scenario, since we don't see much logging of it occurring. It may mean that when traffic hits us hard, things go bad so quickly that the one-second check interval is actually too coarse.
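For reference, the valve described above boils down to something like the following sketch (written in Java purely as a language-agnostic illustration; the one-second interval and the 2000 threshold come from the description above, everything else, including the gauge supplier, is hypothetical):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.IntSupplier;

// Hypothetical back-pressure valve: samples a queue-depth gauge once a second
// and gates new outgoing requests while the queue is above a threshold.
public class OutgoingTrafficValve {
    private final AtomicBoolean open = new AtomicBoolean(true);
    private final ScheduledExecutorService sampler =
            Executors.newSingleThreadScheduledExecutor();

    public OutgoingTrafficValve(IntSupplier queuedRequestsCounter, int threshold) {
        sampler.scheduleAtFixedRate(
                () -> open.set(queuedRequestsCounter.getAsInt() < threshold),
                0, 1, TimeUnit.SECONDS);
    }

    // Callers check this before initiating an outgoing (forwarded) request.
    public boolean mayInitiateOutgoingRequest() {
        return open.get();
    }
}
```

As the text notes, a fixed one-second sampling interval can simply be too coarse when a burst arrives; sampling more often, or gating on the live counter value directly, narrows that window.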
3.C) Thread pool slow increase (and decrease) of threads
There is another aspect to this as well. When more threads are needed in the application pool, they are allocated very slowly: from what I read, 1-2 threads per second. This is because creating threads is expensive, and because you don't want too many threads anyway, in order to avoid expensive context switching in the synchronous case, so I think this is natural. However, it also means that if a sudden large burst of traffic hits us, the number of threads will not be nearly enough to satisfy the need in the asynchronous scenario, and queuing of requests will start. This is a very likely problem candidate, I think. One candidate solution would be to increase the minimum number of threads created in the ThreadPool, but I guess this may also affect the performance of the synchronously running requests.
4) Memory problems
(Joey Reyes wrote about this here in a blog post)
Since objects are collected later for asynchronous requests (up to 120 ms later in our case), memory problems can arise because objects can be promoted to generation 1, and the memory will then not be collected as often as it should be. The increased pressure on the garbage collector may very well cause extra thread context switching and further weaken the capacity of the server.
However, we don't see increased GC or CPU usage during the time of the problem, so we don't think the suggested CPU throttling mechanism is a solution for us.
UPDATE 19/2, 2013: We use a cache-swap mechanism at regular intervals, at which an (almost) full in-memory cache is reloaded into memory and the old cache can be garbage collected. At these times, the GC has to work harder and steals resources from normal request handling. Using the Windows performance counter for thread context switching, we see that the number of context switches drops significantly from its normal high value while GC usage is high. I think that during such cache reloads the server is extra vulnerable to queuing up requests, and it is necessary to reduce the footprint of the GC. One potential fix would be to fill the existing cache in place instead of allocating new memory all the time. A bit more work, but it should be doable.
UPDATE 24/4, 2013:
I am still in the middle of the cache-reload memory tweak to avoid having the GC run as much. We normally have some 1000 queued requests temporarily when the GC runs. Since it runs on all threads, it naturally steals resources from normal request handling. I'll update this status once the tweak has been deployed and we can see a difference.
END UPDATE 24/4
I have implemented a reverse proxy through an async HTTP handler for benchmarking purposes (as part of my PhD thesis) and ran into the very same problems as you.
In order to scale, it is mandatory to set processModel autoConfig to false and fine-tune the thread pools. I have found that, contrary to what the documentation about the processModel defaults says, many of the thread pools are not properly configured when it is set to true. The maxconnection setting is also important, as it limits your scalability if it is set too low. See http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
Regarding your app running out of ports because of the TIME_WAIT delay on the socket, I have also faced the same problem because I was injecting traffic from a limited set of machines with more than 64k requests in 240 seconds. I lowered the TIME_WAIT to 30 seconds without any problems.
I also mistakenly reused a proxy object to a web services endpoint across several threads. Although the proxy doesn't have any state, I found that the GC had a lot of problems collecting the memory associated with its internal buffers (String[] instances), and that caused my app to run out of memory.
Some interesting performance counters to monitor are the ones related to queued requests, requests in execution and request time under the ASP.NET applications category. If you see queued requests, or if the execution time is low but the clients see long request times, then you have some sort of contention in your server. Also monitor counters under the LocksAndThreads category, looking for contention.
Since asynchronous requests hold the TCP sockets for longer, maybe you need to look at the maxconnection property under connectionManagement in your web.config?
Please refer to this link: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
We faced a similar problem and tuned this parameter to fix our issue. Maybe this will help you.
Edit: Also, lots of TIME_WAITs indicate a connection leak in the code, based on past experience. Possible causes: 1) not disposing of the connections used; 2) an incorrect implementation of connection pooling.
I was reading a comment about server architecture.
http://news.ycombinator.com/item?id=520077
In this comment, the person says 3 things:
The event loop, time and again, has been shown to truly shine for a high number of low activity connections.
In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop.
On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.
Are any of these true?
And also another article here titled "Why Events Are A Bad Idea (for High-concurrency Servers)"
http://www.usenix.org/events/hotos03/tech/vonbehren.html
Typically, if the application is expected to handle millions of connections, you can combine the multi-threaded paradigm with the event-based one.
First, spawn N threads, where N == the number of cores/processors on your machine. Each thread will have a list of asynchronous sockets that it is responsible for handling.
Then, for each new connection from the acceptor, "load-balance" the new socket onto the thread with the fewest sockets.
Within each thread, use an event-based model for all its sockets, so that each thread can handle multiple sockets "simultaneously" (a minimal sketch in Java follows the list below).
With this approach,
You never spawn a million threads; you have only as many as your system can handle.
You utilize the event-based model on multiple cores as opposed to a single core.
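A minimal sketch of that layout in plain Java NIO, as referenced in the list above (the port, buffer size and echo "handler" are made up for illustration; a real server would parse and dispatch requests instead):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// One acceptor thread hands each new connection to the event-loop thread
// that currently owns the fewest sockets; every loop thread multiplexes
// its sockets with a single Selector.
public class MultiLoopServer {

    static final class EventLoop implements Runnable {
        private final Selector selector;
        private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();

        EventLoop() throws IOException { this.selector = Selector.open(); }

        int load() { return selector.keys().size() + pending.size(); }

        void adopt(SocketChannel channel) {
            pending.add(channel);
            selector.wakeup(); // interrupt select() so the channel gets registered promptly
        }

        @Override public void run() {
            ByteBuffer buffer = ByteBuffer.allocate(4096);
            try {
                while (true) {
                    selector.select();
                    SocketChannel channel;
                    while ((channel = pending.poll()) != null) {
                        channel.configureBlocking(false);
                        channel.register(selector, SelectionKey.OP_READ);
                    }
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isReadable()) {
                            SocketChannel c = (SocketChannel) key.channel();
                            buffer.clear();
                            if (c.read(buffer) == -1) { key.cancel(); c.close(); continue; }
                            buffer.flip();
                            c.write(buffer); // toy handler: echo the bytes back
                        }
                    }
                    selector.selectedKeys().clear();
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        int n = Runtime.getRuntime().availableProcessors();
        EventLoop[] loops = new EventLoop[n];
        for (int i = 0; i < n; i++) {
            loops[i] = new EventLoop();
            new Thread(loops[i], "loop-" + i).start();
        }

        // Acceptor: pick the least-loaded loop for every new connection.
        try (ServerSocketChannel acceptor = ServerSocketChannel.open()) {
            acceptor.bind(new InetSocketAddress(8080));
            while (true) {
                SocketChannel client = acceptor.accept(); // blocking accept is fine here
                EventLoop target = loops[0];
                for (EventLoop loop : loops) {
                    if (loop.load() < target.load()) target = loop;
                }
                target.adopt(client);
            }
        }
    }
}
```

The important detail is the adopt()/wakeup() handshake: the acceptor never touches another thread's Selector directly; it only queues the channel and wakes the loop, and the owning thread does the registration itself.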
Not sure what you mean by "low activity", but I believe the major factor is how much you actually need to do to handle each request. Assuming a single-threaded event loop, no other clients would get their requests handled while you handle the current request. If you need to do a lot of work to handle each request ("a lot" meaning something that takes significant CPU and/or time), and assuming your machine can actually multitask efficiently (that is, taking time does not just mean waiting for a shared resource, as on a single-CPU machine or similar), you would get better performance by multitasking. Multitasking could be a multithreaded blocking model, but it could also be a single-tasking event loop collecting incoming requests and farming them out to a multithreaded worker factory that handles them in turn (through multitasking) and sends back a response as soon as possible.
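To make that last variant concrete, here is a hedged sketch (all names invented, and the "event loop" is reduced to a plain dispatcher to keep it short): a single intake thread only picks up requests and immediately hands the heavy handling to a worker pool, so one expensive request never stalls the intake of new ones.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: a single dispatcher thread drains incoming requests and
// hands the CPU-heavy work to a worker pool.
public class DispatcherWithWorkers {
    private final BlockingQueue<Runnable> incoming = new LinkedBlockingQueue<>();
    private final ExecutorService workers =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Called by whatever accepts connections/requests.
    public void submit(Runnable requestHandler) {
        incoming.add(requestHandler);
    }

    // The "event loop": it only moves work around, it never executes it itself.
    public void runDispatcherLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable handler = incoming.take();
            workers.submit(handler); // CPU-heavy handling runs on the worker pool
        }
    }
}
```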
I don't believe slow connections with the clients matter that much, as I believe the OS will handle that efficiently outside of your app (assuming you do not block the event loop for multiple round trips with the client that initiated the request), but I haven't tested this myself.