I have a site using ASP.NET and Web API deployed on IIS 7.5. Normal site usage is steady, but occasionally there are bursty spikes of very high usage during which users see very slow responses and timeouts. There is no option to improve performance at this time, and the client has asked for a quick fix: a customised 503 error page with the same look as the rest of the site.
The single IIS server sits behind a load balancer, and there is the option of adding a second server purely to serve a static error page when the site is under heavy load. One suggestion is a solution where slow responses trigger all subsequent requests to go to the static server, which serves the custom 503 error page until usage subsides.
As far as I can see, it's not possible to customise the default 503 error page that IIS serves automatically when its request queue is full.
I can't see anything on SO about how to solve this sort of issue - can anyone help?
Load balancers use probes to check whether a node is operational. One possible solution is to write your own probe that checks whether the load on the node has reached a defined threshold. If it has, your probe returns HTTP 500 to the load balancer, and the load balancer stops forwarding requests to that node. While the threshold is not reached, the probe returns HTTP 200.
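A rough sketch of such a probe as an ASP.NET Web API controller (the controller name, the threshold, and the choice of the "Requests Queued" performance counter as the load signal are all illustrative, not part of the original suggestion):

```csharp
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical probe endpoint the load balancer polls, e.g. GET /api/health.
public class HealthController : ApiController
{
    // Illustrative threshold; tune it for your workload.
    private const int MaxQueuedRequests = 100;

    [HttpGet]
    public HttpResponseMessage Get()
    {
        // "Requests Queued" is one possible load signal on ASP.NET/IIS.
        int queued;
        using (var counter = new PerformanceCounter("ASP.NET", "Requests Queued"))
        {
            queued = (int)counter.NextValue();
        }

        // 200 tells the balancer to keep routing here; 500 tells it to
        // stop sending traffic to this node until the probe recovers.
        return Request.CreateResponse(queued < MaxQueuedRequests
            ? HttpStatusCode.OK
            : HttpStatusCode.InternalServerError);
    }
}
```

You would point the load balancer's health probe at this endpoint instead of the site root.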
Related
I'm using Nginx as a reverse proxy for a Ruby on Rails application.
The application has two critical endpoints that are responsible for capturing data from customers who are registering their details with our service. These endpoints take POST data from a form that may or may not be hosted on our website.
When our application goes down for maintenance (rare, but we have a couple of SPOF services), I would like to ensure the POST data is captured so we don't lose it forever.
Nginx seems like a good place to do this, given that it's already responsible for serving requests to the upstream Rails application and has a custom vhost configuration in place that serves a static page when we enable maintenance mode. I figured this might be a good place for additional logic to store these incoming POST requests.
The issue I'm having is that Nginx doesn't parse POST data unless you're pointing it at an upstream server. In the case of our maintenance configuration we're not; we're just rendering a maintenance page. This means that $request_body¹ is empty. We could perhaps get around this by faking a proxy server, or maybe even pointing Nginx at itself and enabling the logger on a particular location (a rough sketch of that idea follows the footnote). This seems hacky, though.
Am I going about this the wrong way? I've done some research and haven't found a canonical way to solve this use-case. Should I be using a third-party tool and not Nginx?
1: from ngx_http_core_module: "The variable’s value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives when the request body was read to a memory buffer."
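For what it's worth, the "pointing Nginx at itself" idea can be sketched in a few lines of configuration (endpoint paths, log location, and the sink port are made up; both server blocks sit inside http { }). Proxying the critical locations to a trivial local sink forces nginx to read the body, which is exactly the condition the footnote describes for $request_body being populated:

```nginx
# Assumed to live inside http { }. Paths, names, and port are illustrative.
log_format postdata '$time_iso8601 $request_uri $request_body';

server {
    listen 80;
    server_name example.com;

    # Maintenance mode: everything else gets the static page.
    location / {
        root /var/www/maintenance;
        try_files /index.html =503;
    }

    # Critical endpoints: proxy_pass forces nginx to read the request
    # body, so $request_body is populated for the access log.
    location ~ ^/(signups|registrations)$ {
        access_log /var/log/nginx/captured_posts.log postdata;
        proxy_pass http://127.0.0.1:8999;
    }
}

# Trivial sink: nginx pointed at itself, always answering 200.
server {
    listen 127.0.0.1:8999;
    return 200;
}
```

It is still the hack described above, but it keeps everything inside nginx and leaves a replayable log of the captured POST bodies.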
I have a website that is powered by two web applications under the hood.
One application (fast) is supposed to handle catalog pages.
Another application (slow) is supposed to handle customer/cart/checkout pages.
Both applications should run on the same host:
example.com:80 (fast) and example.com:8000 (slow)
Of course, port 8000 is not exposed to visitors and is used internally by nginx.
I want web requests to reach the slow application only if the fast application returns a specific response header, for example X-catalog-not-found.
The expected flow is as follows:
all requests go to the fast application at example.com:80
if the fast application finds a product for the URI, it renders the page
if the fast application does not find a product for the URI, it sends an empty body and the response header X-catalog-not-found
based on the header received in the previous step, nginx proxies the request to the slow application at example.com:8000
I feel that ngx_http_proxy_module and/or the upstream module should be used, but I haven't found a working solution after reading the docs.
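Stock nginx cannot re-route a request based on an arbitrary upstream response header, but it does honor the special X-Accel-Redirect header by performing an internal redirect. A minimal sketch, assuming the fast app can emit X-Accel-Redirect instead of X-catalog-not-found, and that the fast app moves to an internal port (8080 here) so nginx can own port 80:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # fast application (assumed port)
        # When the fast app cannot find the product, it responds with
        # the header "X-Accel-Redirect: /slow-app<original uri>", which
        # makes nginx redirect internally to the location below.
    }

    location /slow-app/ {
        internal;                            # never matched for external requests
        rewrite ^/slow-app(/.*)$ $1 break;   # strip the routing prefix
        proxy_pass http://127.0.0.1:8000;    # slow application
    }
}
```

If changing the header is not an option, a similar effect is possible by having the fast app signal "not found" with a status code instead, combined with proxy_intercept_errors on; and error_page 404 = @slow; pointing at a named location that proxies to port 8000.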
I have a few websites that used to be on a server running IIS 7. I have moved these websites to a new server running IIS 8, and the database has been upgraded from SQL Server 2005 to SQL Server 2014.
Another part of this change is that the sites now run through a DMZ reverse proxy that redirects to an internal server.
This works fine in Chrome and Edge, but in Firefox and IE I get a 500 URL Rewrite Module error, with not much more information in the error than that.
I have other sites on the reverse proxy that work with no issue, but all of the ones that work are on .NET 4.0 or higher; the sites I am having issues with are both on the 3.5 framework.
I have tried setting the app pool framework version on the DMZ server to match the internal server.
There are currently two inbound rules: one converts HTTP to HTTPS and the other is the proxy rule. There is one outbound rule, which is also part of the reverse proxy. The reverse proxy currently takes the HTTPS traffic and uses HTTP internally, and the outbound rule sends it back as HTTPS. This is the same on all of the sites on this server that currently work without any issues.
Some more information: I turned on error tracing, and the fuller error I received is
Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ("gzip").
This is because the responses that are coming from the back end server are using HTTP Compression, and URL rewrite cannot modify a response that is already compressed. This causes a processing error for the outbound rule resulting in the 500.52 status code.
There are two ways to work around this: either turn off compression on the backend server that is delivering the HTTP responses (which may or may not be possible, depending on your configuration), or indicate to the backend server that the client does not accept compressed responses, by removing the Accept-Encoding header when the request comes into the IIS reverse proxy and placing it back when the response leaves the IIS server.
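For reference, the inbound half of the second approach looks roughly like the fragment below (rule and variable names are illustrative; the complete fix, including the outbound half that restores the header, is in the linked post):

```xml
<!-- In the reverse proxy's URL Rewrite configuration. Note that
     allowedServerVariables is typically locked at the server level,
     so it may need to go in applicationHost.config. -->
<rewrite>
  <allowedServerVariables>
    <add name="HTTP_ACCEPT_ENCODING" />
    <add name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" />
  </allowedServerVariables>
  <rules>
    <rule name="ReverseProxyInbound" stopProcessing="true">
      <match url="(.*)" />
      <serverVariables>
        <!-- Remember what the client asked for, then strip it so the
             backend replies uncompressed and outbound rules can run. -->
        <set name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" value="{HTTP_ACCEPT_ENCODING}" />
        <set name="HTTP_ACCEPT_ENCODING" value="" />
      </serverVariables>
      <action type="Rewrite" url="http://internal-server/{R:1}" />
    </rule>
  </rules>
</rewrite>
```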
There are a number of steps needed to complete this fix; you can find them and all the information you need at https://blogs.msdn.microsoft.com/friis/2016/08/25/iis-with-url-rewrite-as-a-reverse-proxy-part-2-dealing-with-500-52-status-codes/
It is a three-part series, and the second post was the solution.
If a website depends on an upstream database or other abstracted service or store - basically most websites known to man - then when the upstream request dies with a timeout, should I return a 503 or a 504?
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state. Sometimes, this can be permanent as well on test servers.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
The 504 feels more designed for proxy servers, caches, or other web infrastructure, but the 503 is not right either, since the service is fine; the current request just happened to die, perhaps because a search was too broad or something.
So which is 'right' according to HTTP?
Luke
503 sounds appropriate if this is a temporary condition that will be resolved simply by waiting. http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html states: "The implication is that this is a temporary condition which will be alleviated after some delay."
500 also sounds appropriate. The RFC states: "The server encountered an unexpected condition which prevented it from fulfilling the request." An unresponsive database is an exceptional/unexpected case.
IMO, what it comes down to is this: are you providing an error code that will help callers (i.e. HTTP clients) respond to the situation? In this case, there is really nothing a client can do other than try again later. Given this, I would keep it simple and return 500. I think clients are more likely to care whether the site is available and less likely to care about the specific reason. Plus, fewer response codes make it easier to code clients. Again, this is just my opinion.
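Whichever status code you settle on, the mechanics are the same. A rough ASP.NET Web API sketch (controller and helper names are hypothetical) that maps a database timeout to a 503 with a Retry-After hint for well-behaved clients:

```csharp
using System;
using System.Data.SqlClient;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class SearchController : ApiController
{
    [HttpGet]
    public HttpResponseMessage Get(string q)
    {
        try
        {
            var results = RunSearch(q); // hypothetical data-access call
            return Request.CreateResponse(HttpStatusCode.OK, results);
        }
        catch (SqlException ex) when (ex.Number == -2) // -2 = SQL Server client timeout
        {
            var response = Request.CreateResponse(HttpStatusCode.ServiceUnavailable);
            // Tell the client when it is worth trying again.
            response.Headers.RetryAfter =
                new RetryConditionHeaderValue(TimeSpan.FromSeconds(30));
            return response;
        }
    }

    private object RunSearch(string q) { /* ... query the database ... */ return q; }
}
```

Swapping ServiceUnavailable for InternalServerError (per the answer above) changes one line; the Retry-After header only makes sense with the 503.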
I'm trying to understand the best way to handle SOA on Heroku. I've got it into my head that making requests to custom domains will somehow be slower - or would all requests go "out" via the internet anyway?
On previous projects which were SOA in nature we had dedicated hosting, so we could make requests like http://blogs/ (obviously on the internal network). I'm wondering if Heroku treats *.herokuapp.com requests as "internal", or whether it is clever enough to know that myapp.com is actually myapp.herokuapp.com and route locally - or am I missing the point completely, and in fact all requests are "external"?
What you are asking about comes down to how internet requests work in general.
Whenever your application makes a request to, say, example.com, the domain name is first translated into an IP address using DNS servers.
So this is how it works: whether you request myapp.com or myapp.herokuapp.com, you will always request information from a specific IP address, and the domain name you requested is passed along as part of the request headers.
The server that receives the request looks that domain name up in its internal records and handles the request accordingly.
So the conclusion is that it does not matter whether you use myapp.com or myapp.herokuapp.com; the speed of the request will be the same.
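A quick way to see this for yourself (the host names are the hypothetical ones from the question) is to resolve both domains and compare the addresses they map to:

```csharp
using System;
using System.Net;

// Resolve both domains and print the IP addresses they map to. Both names
// ultimately point at Heroku's routing layer, so requests take the same
// external path either way.
class DnsCheck
{
    static void Main()
    {
        foreach (var host in new[] { "myapp.com", "myapp.herokuapp.com" })
        {
            Console.Write(host + " -> ");
            foreach (var address in Dns.GetHostAddresses(host))
            {
                Console.Write(address + " ");
            }
            Console.WriteLine();
        }
    }
}
```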
PS: Since Heroku load-balances your requests between the different instances of your running app, the speed here will depend on several factors: how quickly your application responds, how many instances you have running and the average load per instance, and how loaded the load balancer is at the moment. But it will certainly not depend on which domain name you use.