nginx: Take action on proxy_pass response headers

I'd like to use nginx as a front-end proxy, but then have it conditionally proxy to another URL depending on the MIME type (Content-Type header) of the response.
For instance, suppose 1% of my clients are using a User-Agent that doesn't handle PNGs. For that UA, if the response is of type image/png, I want to proxy_pass again to a special URL that'll get the PNG and convert it for me.
Ideally I'd do this without hurting performance and caching for the 99% of users that don't need this special handling. I can't modify the backend application. (Otherwise I could have it detect the UA and fix the response, or send an X-Accel-Redirect to get nginx to run another location block.)
If this isn't possible or has bad performance, where would I look to start writing a module to achieve the desired effect? As in, which extension point gets me closest to implementing this logic?
Edit: It seems like I could use Lua to perform a subrequest and then inspect the response headers there. But that would mean passing every request through Lua, which seems suboptimal.

Although I'm sure there could be valid reasons to do what you want to do, your actual image/png example is not as straightforward as it may look. Browsers no longer include image/png in their Accept HTTP request headers like they used to in the old days when PNG was new, so you'd have to maintain detection and mapping tables for really, really old browsers.
Additionally, from the architecture perspective, it's not very clear what you are trying to accomplish.
If this is static data, then why are you proxying it to the backend in the first place, instead of serving the static data directly from nginx? Will the \.png$ regex not match the affected request URIs? Couldn't you solve this without involving a backend, or even by rewriting the request without sending the wrong one to the backend first?
If this is really dynamic, then why waste time making a request only to receive a reply of an unacceptable type? With special-case mapping tables based on your knowledge of how the app works, you could bypass the needless requests from the start instead of discarding them later on.
If the app is truly a black box, and you require a general-purpose solution that'll work for any app, then it's still unclear what the use case is, and why the extra requests have to be made only to be discarded.
If you're really looking to handle only 1% of your traffic, as in your image/png example, then it might make sense to redirect all requests from the affected 1% of old browsers to a separate backend that has the logic to do what you require.
Frankly, if you want to target really, really old browsers, then PNG support should be the last of your worries. Many web apps include very complex, special-purpose JavaScript that wouldn't even work in new alternative browsers with new User-Agent strings, let alone old browsers that didn't even have PNG support.

So, according to http://wiki.nginx.org/HttpCoreModule#.24http_HEADER, I assume you can have something along the lines of:
if ($content_type ~ 'whatever your content type') {
    proxy_pass ur_url;
}
Would that be something that works for you?

Related

Always serve stale/cached data from edge servers

Is it possible to always serve stale/cached data from CDN edge servers like Akamai?
The reason: if there is some problem on the origin server, it might need 2-3 days to solve. My origin server responds properly, but I don't want it to get overloaded, and I want the CDN to keep serving the cached data instead for some time.
Yes, Akamai can serve stale content if the request to the origin times out or produces an error code; this is configured through the "Caching" and "Cache HTTP Error Responses" behaviors.
Note, however, that your content will need to be fairly popular to remain in cache. If it's not popular, then it may be evicted before you're able to repair your origin.
A better alternative is to implement a Site Failover ruleset, which matches on a failed origin and applies the standard Fail Over behavior, allowing you to serve your page with alternate content from a separate origin, or static assets from Akamai's NetStorage.
The "Action" field provides the following options, which can each be configured to your needs:
Serve stale content
Redirect to a different location
Use alternate hostname in this property
Use alternate hostname on provider network
Serve alternate content from NetStorage

head request returns different content-type [duplicate]

I would like to try sending requests.get to this website:
requests.get('https://rent.591.com.tw')
and I always get
<Response [404]>
I know this is a common problem and have tried different approaches, but they all failed.
All other websites work fine.
Any suggestions?
Webservers are black boxes. They are permitted to return any valid HTTP response, based on your request, the time of day, the phase of the moon, or any other criteria they pick. If another HTTP client consistently gets a different response, try to figure out the differences between the request Python sends and the request the other client sends.
That means you need to:
Record all aspects of the working request
Record all aspects of the failing request
Try out what changes you can make to make the failing request more like the working request, and minimise those changes.
I usually point my requests at an http://httpbin.org endpoint, have it record the request, and then experiment.
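For example, a minimal sketch: http://httpbin.org/headers echoes back the request headers it received, so you can compare them against what the browser sends.
import requests

# httpbin reports the headers our client actually sent
response = requests.get('http://httpbin.org/headers')
print(response.json())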
For requests, there are several headers that are set automatically, and many of these you would not normally expect to have to change:
Host; this must be set to the hostname you are contacting, so that the server can tell apart the different sites it hosts on the same address. requests sets this one for you.
Content-Length and Content-Type, for POST requests, are usually set from the arguments you pass to requests. If these don't match, alter the arguments you pass in to requests (but watch out with multipart/* requests, which use a generated boundary recorded in the Content-Type header; leave generating that to requests).
Connection: leave this to the client to manage
Cookies: these are often set on an initial GET request, or after first logging into the site. Make sure you capture cookies with a requests.Session() object and that you are logged in (supply credentials the same way the browser did); see the sketch after this list.
Everything else is fair game but if requests has set a default value, then more often than not those defaults are not the issue. That said, I usually start with the User-Agent header and work my way up from there.
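A minimal requests.Session() sketch (the login URL and form field names here are hypothetical; mirror whatever the browser actually submits):
import requests

session = requests.Session()  # persists cookies across requests, like a browser
session.get('https://example.com/login')  # pick up any cookies set up front
session.post('https://example.com/login',
             data={'user': 'alice', 'password': 'secret'})  # hypothetical form fields
response = session.get('https://example.com/protected')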
In this case, the site is filtering on the user agent. It looks like they are blacklisting Python; setting the User-Agent header to almost any other value already works:
>>> requests.get('https://rent.591.com.tw', headers={'User-Agent': 'Custom'})
<Response [200]>
Next, you need to take into account that requests is not a browser. requests is only an HTTP client; a browser does much, much more. A browser parses HTML for additional resources such as images, fonts, styling and scripts, loads those additional resources too, and executes scripts. Scripts can then alter what the browser displays and load additional resources. If your requests results don't match what you see in the browser, but the initial request the browser makes does match, then you'll need to figure out what other resources the browser has loaded and make additional requests with requests as needed. If all else fails, use a project like requests-html, which lets you run a URL through an actual, headless Chromium browser.
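A minimal requests-html sketch (assumes pip install requests-html; the first call to render() downloads a Chromium build):
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://rent.591.com.tw', headers={'User-Agent': 'Custom'})
r.html.render()  # executes the page's JavaScript in headless Chromium
print(r.html.find('title', first=True).text)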
The site you are trying to contact makes an additional AJAX request to https://rent.591.com.tw/home/search/rsList?is_new_list=1&type=1&kind=0&searchtype=1&region=1; take that into account if you are trying to scrape data from this site.
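If that JSON endpoint is what you are after, a hedged sketch (the parameters are just those in the URL above; the site may additionally require headers or tokens not shown here):
import requests

api = 'https://rent.591.com.tw/home/search/rsList'
params = {'is_new_list': 1, 'type': 1, 'kind': 0, 'searchtype': 1, 'region': 1}
r = requests.get(api, params=params, headers={'User-Agent': 'Custom'})
print(r.json())  # assumes the endpoint replies with JSON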
Next, well-built sites will use security best-practices such as CSRF tokens, which require you to make requests in the right order (e.g. a GET request to retrieve a form before a POST to the handler) and handle cookies or otherwise extract the extra information a server expects to be passed from one request to another.
Last but not least, if a site is blocking scripts from making requests, it is probably either trying to enforce terms of service that prohibit scraping, or it has an API it would rather have you use. Check for either, and take into consideration that you might be blocked more effectively if you continue to scrape the site anyway.
One thing to note: I was using requests.get() to do some web scraping off links I was reading from a file. What I didn't realise was that the links had a newline character (\n) when I read each line from the file.
If you're getting multiple links from a file instead of a Python data type like a string, make sure to strip any \r or \n characters before you call requests.get("your link"). In my case, I used:
with open("filepath") as file:  # open for reading; 'w' mode would truncate the file
    links = file.read().splitlines()  # splitlines() also drops the \n/\r characters
for link in links:
    response = requests.get(link)
In my case this was due to the fact that the website address had recently changed and I had been given the old address. At least this changed the status code from 404 to 500, which, I think, is progress :)

HTTP Response before Request

My question might sound stupid, but I just wanted to be sure:
Is it possible to send an HTTP response before having the request for that resource?
Say for example you have an HTML page index.html that only shows a picture called img.jpg.
Now, if your server knows that a visitor will request the HTML file and then the jpg image every time:
Would it be possible for the server to send the image just after the HTML file to save time?
I know that HTTP is a synchronous protocol, so in theory it should not work, but I just wanted someone to confirm it (or not).
A recent post by Jacques Mattheij, referencing your very question, claims that although HTTP was designed as a synchronous protocol, the implementation was not. In practice the browser (he doesn't specify which exactly) accepts answers to requests that have not been sent yet.
On the other hand, if you are looking for something less hacky, you could have a look at:
push techniques that allow the server to send content to the browser. The modern implementation that replaces long-polling/Comet "hacks" is websockets. You may want to have a look at socket.io as well (see the sketch after this list).
Alternatively you may want to have a look at client-side routing. Some implementations combine this with caching techniques (like in derby.js I believe).
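For illustration, a minimal WebSocket client sketch using Python's third-party websockets package (the echo endpoint URL is hypothetical):
import asyncio
import websockets

async def main():
    # once connected, either side may send at any time: true server push
    async with websockets.connect('wss://echo.example.com') as ws:
        await ws.send('hello')
        print(await ws.recv())

asyncio.run(main())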
If someone requests /index.html and you send two responses (one for /index.html and the other for /img.jpg), how do you know the recipient will get the two responses and know what to do with them before the second request goes in?
The problem is not really with the sending. The problem is with the receiver possibly getting unexpected data.
One other issue is that you're denying the client the ability to use HTTP caching tools like If-Modified-Since and If-None-Match (i.e. the client might not want /img.jpg to be sent because it already has a cached copy).
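For instance, a conditional-request sketch with Python's requests (the ETag value is hypothetical; a 304 reply means the cached copy is still valid):
import requests

r = requests.get('http://example.com/img.jpg',
                 headers={'If-None-Match': '"abc123"'})  # hypothetical cached ETag
if r.status_code == 304:
    print('Not Modified: reuse the cached copy')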
That said, you can approximate the server-push benefits by using Comet techniques. But that is much more involved than simply anticipating incoming HTTP requests.
You'll get a better result by caching resources effectively, i.e. setting proper cache headers and configuring your web server for caching. You can also inline images using base64 encoding, if that's a specific concern.
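A quick sketch of the inlining idea, using img.jpg from the question (the resulting data URI is embedded directly in the HTML, so no second request is needed):
import base64

with open('img.jpg', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')
html = '<img src="data:image/jpeg;base64,' + encoded + '">'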
You can also look at long-polling JavaScript solutions.
You're looking for server push: it isn't available in HTTP. Protocols like SPDY have it, but you're out of luck if you're restricted to HTTP.
I don't think it is possible to mix .html and image data in the same HTTP response. As for sending image data 'immediately', right after the first request: there is the concept of 'static resources', which could be of help (but it will still require the client to create a new request for each specific resource).
There are a couple of interesting things mentioned in the article.
No, it is not possible.
The first line of the request names the resource being requested, so you wouldn't know what to respond with unless you had examined at least that first line of the request.
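For illustration, a raw request begins like this; the server cannot choose a response until it has read at least that first line:
GET /index.html HTTP/1.1
Host: www.example.com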
No. HTTP is defined as a request/response protocol. One request: one response. Anything else is not HTTP, it is something else, and you would have to specify it properly and implement it completely at both ends.

How to change Server Name in HTTP Response Header?

Is it possible to change your server name in the HTTP response headers from nginx to something else? I want to do this to confuse prying eyes and enhance security.
You will need to go into the core source code, find where the name is set, change it, and recompile nginx.
Not worth the trouble really.
There is the server_tokens directive, which will hide the version number: http://wiki.nginx.org/HttpCoreModule#server_tokens.
It's not much use in terms of security either, really, but it's at least far less trouble to achieve.
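As a quick way to verify what your server currently advertises (the hostname below is a placeholder), Python's requests will show the Server header:
import requests

r = requests.head('http://your-server.example')
print(r.headers.get('Server'))  # e.g. 'nginx/1.2.3' with tokens on, just 'nginx' with server_tokens off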

Are there any safe assumptions to make about the availability of a URL?

I am trying to determine if there is a way to check the availability of a potentially large list of URLs (> 1000000) without having to send a GET request to every single one.
Is it safe to assume that if http://www.example.com is inaccessible (as in, unable to connect to the server, or the DNS request for the domain fails), or I get a 4XX or 5XX response, then anything from that domain will also be inaccessible (e.g. http://www.example.com/some/path/to/a/resource/named/whatever.jpg)? Would a 302 response (say, for whatever.jpg) be enough to invalidate the first assumption? I imagine subdomains should be considered distinct, as http://subdomain.example.com and http://www.example.com may not resolve to the same IP?
I seem to be able to think of a counter example for each shortcut I come up with. Should I just bite the bullet and send out GET requests to every URL?
Unfortunately, no, you cannot infer anything from 4xx, 5xx, or any other codes.
Those codes are for individual pages, not for the server. It's quite possible that one page is down and another is up, or one has a 500 server-side error and another doesn't.
What you can do is use HEAD instead of GET. That retrieves the response headers for the page but not the page content, which saves time server-side (because it doesn't have to render the page body) and for you (because you don't have to buffer and then discard content).
Also, I suggest you use keep-alive to speed up responses from the same server; many HTTP client libraries will do this for you. A sketch combining both follows.
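A minimal sketch, reusing the example.com URLs from the question (a requests.Session pools connections, which gives you keep-alive for free):
import requests

session = requests.Session()  # connection pooling gives keep-alive per host
for url in ['http://www.example.com/',
            'http://www.example.com/some/path/to/a/resource/named/whatever.jpg']:
    r = session.head(url, allow_redirects=False)  # headers only, no body
    print(url, r.status_code)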
A failed DNS lookup for a host (e.g. www.example.com) should be enough to invalidate all URLs for that host. Subdomains or other hosts would have to be checked separately though.
A 4xx code might tell you that a particular page isn't available, but you couldn't make any assumptions about other pages from that.
A 5xx code really won't tell you anything. For example, it could be that the page is there, but the server is just too busy at the moment. If you try it again later it might work fine.
The only assumption you should make about the availability of a URL is that "getting a URL can and will fail".
It's not safe to assume that a subdomain request will fail when a parent one does, namely because in between your two requests your network connection can go up, down, or generally misbehave. It's also possible for the domains to be changed between requests.
Ignoring all internet connection issues, you are still dealing with a live web site that can and will change constantly. What is true now might not be true in 5 minutes, when they decide to alter their page structure or change the way they display a particular page. Your best bet is to assume any GET can fail.
This may seem like an extreme viewpoint, but these events will happen. How you handle them will determine the robustness of your program.
First, don't assume anything based on a single page failing. I have seen many cases where IIS will continue to serve static content but not be able to serve any dynamic content.
You have to treat each host name as unique; you cannot assume subdomain.example.com and example.com point to the same IP. Even if they do, there is no guarantee they are the same site: IIS, again, has host headers that allow you to run multiple sites on a single IP address.
If the connection to the server actually fails, then there's no reason to check URLs on that server. Otherwise, you can't assume anything.
In addition to what everyone else is saying, use HEAD requests instead of GET requests. They function the same, but the response doesn't contain the message body, so you save everyone some bandwidth.
