I'm using nginx as a reverse proxy server.
My application servers behind it accept custom extension methods for requests.
For example, "MYMETHOD".
However, the default nginx configuration seems to accept only non-extension methods such as HEAD, GET, and POST, and returns its default 400 response to requests that use an extension method instead of proxying them to my app servers.
How can I make nginx accept and proxy any http requests regardless of their method?
I do not want to whitelist specific methods, because that would require changing the nginx configuration every time my app servers need to support a new method, and I do not want the two to be tightly coupled.
[edit]
The solution has to work with officially supported nginx distributions, either from nginx.com or from popular Linux distributions (Debian, CentOS, etc.).
Obviously I could just alter the nginx source code to pass along any method, but if I'm altering the source and recompiling, it's no longer nginx but a fork of it.
You can use ngx_http_allow_methods_module to allow custom HTTP methods. This module lets arbitrary HTTP methods be passed through to a backend application.
Example:
http {
    server {
        location /api/ {
            allow_methods ".*";
        }
    }
}
Directive
allow_methods "^(GET|POST|PUT|DELETE|PATCH|LINK|COPY)$";
This directive describes, as a regular expression, the HTTP methods that should be passed along. The pattern is case-sensitive (as per RFC 2616). If the directive is absent, nginx's default rules apply. Use it only in the locations that actually need it.
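For the reverse-proxy case in the question, the directive would typically sit next to proxy_pass. A minimal sketch, assuming a made-up upstream name and address:

http {
    upstream app_servers {
        server 127.0.0.1:8080;  # assumed app server address
    }

    server {
        listen 80;

        location / {
            # Let any method through and hand the request to the backend
            allow_methods ".*";
            proxy_pass http://app_servers;
        }
    }
}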
Related
I'm using Nginx as a reverse proxy for a Ruby on Rails application.
The application has two critical endpoints responsible for capturing data from customers who are registering their details with our service. These endpoints take POST data from a form that may or may not be hosted on our website.
When our application goes down for maintenance (rare, but we have a couple of SPOF services), I would like to ensure the POST data is captured so we don't lose it forever.
Nginx seems like a good place to do this given that it's already responsible for serving requests to the upstream Rails application, and has a custom vhost configuration in place that serves a static page for when we enable maintenance mode. I figured this might be a good place for additional logic to store these incoming POST requests.
The issue I'm having is that Nginx doesn't parse POST data unless you're pointing it at an upstream server. In the case of our maintenance configuration, we're not; we're just rendering a maintenance page. This means that $request_body¹ is empty. We could perhaps get around this by faking a proxy server, or maybe even pointing Nginx at itself and enabling the logger on a particular location. This seems hacky though.
Am I going about this the wrong way? I've done some research and haven't found a canonical way to solve this use-case. Should I be using a third-party tool and not Nginx?
1: from ngx_http_core_module: "The variable’s value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives when the request body was read to a memory buffer."
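A rough sketch of the self-proxy idea floated above (the endpoint path, port, and log location are made up): since $request_body is only populated in locations processed by proxy_pass, the capture location proxies to a dummy server block on a loopback port and logs the body itself.

log_format postdata $request_body;

server {
    listen 80;

    location /signup {
        # proxy_pass forces nginx to read the request body, which makes
        # $request_body available to the access log; the body must fit in
        # client_body_buffer_size or the variable stays empty (see footnote)
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://127.0.0.1:8081;
    }
}

server {
    # Dummy backend that just answers with the maintenance status
    listen 127.0.0.1:8081;
    return 503;
}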
I have a web application running on a server (let's say on localhost:8000) behind a reverse proxy on that same server (on myserver.example:80). Because of the way the reverse proxy works, the application sees an incoming request targeted at localhost:8000, and the framework I'm using therefore generates absolute URLs that look like localhost:8000/some/resource instead of myserver.example/some/resource.
What would be "the correct way" of generating an absolute URL (namely, determining what hostname to use) from behind a proxy server like that? The specific proxy server, framework and language don't matter, I mean this more in an HTTP sense.
From my initial research:
RFC 7230 explicitly says that proxies MUST change the Host header when passing the request along, to make it look like the request came from them. That suggests using Host to determine the hostname for the URL, yet almost everywhere I have looked, the general advice is to configure your reverse proxy not to change the Host header (counter to the spec) when passing the request along.
RFC 7230 also says that "request URI reconstruction" should use the following fields, in order, to find the "authority component", though that seems to apply only from the point of view of the agent that emitted the request, such as the proxy:
Fixed URI authority component from the server or outbound gateway config
The authority component from the request's first line, if it's a complete URI instead of a path
The Host header if it's present and not empty
The listening address or hostname, along with the incoming port number if it's not the default one for the protocol
HTTP/1.0 didn't have a Host header at all, and that header was added for routing purposes, not for URL authority resolution.
There are headers made specifically to let proxies send the old value of Host after routing, such as Via, Forwarded, and the unofficial X-Forwarded-Host. Some servers and frameworks check these, but not all, and it's unclear which one should take priority given that there are three of them.
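For reference, the three look roughly like this on the wire (values are illustrative; the Forwarded syntax is from RFC 7239):

Via: 1.1 proxy.example.net
Forwarded: for=203.0.113.7;host=myserver.example;proto=https
X-Forwarded-Host: myserver.example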
EDIT: I also don't know whether HTTPS works differently in this regard, given that the headers are part of the encrypted payload and routing therefore has to be performed another way.
In general I find it's best to set the real host and port explicitly in the application rather than try to guess them from the incoming request.
For example, Jira lets you set the Base URL through which Jira will be accessed (which may differ from the address it actually runs on). This means you can have Jira running on port 8080 with Apache or Nginx in front of it (on the same or even a different server) on ports 80 and 443.
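When the application cannot be given a fixed base URL and has to reconstruct one, the usual compromise is to have the proxy forward the original values explicitly and configure the framework to trust them. A minimal nginx sketch matching the localhost:8000 setup from the question (which headers the framework honours varies):

location / {
    proxy_pass http://localhost:8000;
    # Preserve the host the client actually used instead of "localhost:8000"
    proxy_set_header Host $host;
    # Conventional forwarding headers that many frameworks inspect
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
}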
We are using nginx for load balancing and SSL termination for an API. Requests are forwarded to Tomcat instances. Since Tomcat does not use SSL, all hyperlinks that Tomcat generates use http rather than https.
We use the ngx_http_sub_module module to modify all hyperlinks in the response body, replacing http with https. This is already working.
However, hyperlinks in the response headers, for example in the Location or Link headers, are not replaced.
Is there any other module that can be used for this purpose?
See the proxy_redirect directive. For more complicated proxying, getting this setting right can be difficult, but for simpler cases the examples in the documentation should prove illuminating.
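For the Tomcat setup in the question, a minimal sketch might look like this (the hostname and upstream port are assumptions):

server {
    listen 443 ssl;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Rewrite the http:// Location and Refresh headers emitted by Tomcat
        proxy_redirect http://api.example.com/ https://api.example.com/;
    }
}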
I still haven't found a way to handle Link: headers reliably.
It seems most web servers support subrequests.
I found one question here about subrequests:
Subrequest for PHP-CGI
But what is the point of a subrequest at all? When is that kind of thing really useful?
Is it defined in the HTTP protocol?
Apache subrequests can be used in, e.g., your PHP application via virtual() to access resources from the same server. The resource is processed by Apache just as normal requests are, but you avoid the overhead of sending a full HTTP request over the network interface.
Less overhead is probably the only reason one would want to use it instead of a real HTTP request.
Edit: the resources are processed by Apache, which means the configured Apache modules are used. You can request a mod_perl- or mod_ruby-processed resource from PHP.
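As a rough illustration (the path is made up), a PHP script running under mod_php can pull in another Apache-handled resource like this:

<?php
// Apache subrequest: /reports/summary.pl is handled by whichever module
// is configured for it (e.g. mod_perl) and its output is written
// directly into this script's response
virtual('/reports/summary.pl');
?>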
I'm thinking of a web app that uses CouchDB extensively, to the point where there would be great gains from serving with the native erlang HTTP API as much as possible.
Can you configure Apache as a reverse proxy so that outside GETs are proxied directly to CouchDB, whereas PUT/POST requests are sent to the application's internal logic (for sanitization, authentication...)? Or is this unwise? The built-in CouchDB authentication options just seem a little weak for a web app.
Thanks
You can use mod_rewrite to selectively proxy requests based on the HTTP method.
For example:
# Send all GET and HEAD requests to CouchDB
RewriteCond %{REQUEST_METHOD} ^(GET|HEAD)$
RewriteRule /db/(.*) http://localhost:5984/mydb/_design/myapp/$1 [P]
# Correct all outgoing Location headers
ProxyPassReverse /db/ http://localhost:5984/mydb/_design/myapp/
Any POST, PUT, or DELETE requests will be handled by Apache as usual, so you can wire up your application tier however you usually would.
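If the application tier is instead a separate process, the write methods can be forwarded the same way; a sketch assuming the app listens on port 3000:

# Send everything that is not a read to the application tier
RewriteCond %{REQUEST_METHOD} !^(GET|HEAD)$
RewriteRule /db/(.*) http://localhost:3000/db/$1 [P]
ProxyPassReverse /db/ http://localhost:3000/db/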
Your question is aging without answers, so I'll add this "almost answer".
Nginx can definitely route requests differently based on their method.
That is, if you are prepared to place nginx in front as the reverse proxy, with Apache and CouchDB both as backends.
Did you see this? OAuth and cookie authentication were checked in on the 4th:
http://github.com/halorgium/couchdb/commit/335af7d2a9ce986f0fafa4ddac7fc1a9d43a8678
Also, if you're at all interested in using Erlang as the server language, you could proxy couchdb through webmachine:
http://blog.beerriot.com/2009/05/18/couchdb-proxy-webmachine-resource/
I would consider using the reverse proxy feature of Apache's mod_proxy. Create a virtual host configuration that forwards certain HTTP requests from the web server to CouchDB. You can set up rules for which URI paths should be forwarded, and so on.
See this guide for inspiration: http://macgyverdev.blogspot.se/2014/02/apache-web-server-as-reverse-proxy-and.html
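A minimal virtual host along those lines might look like this (the hostname, paths, and application port are assumptions; mod_proxy and mod_proxy_http must be enabled):

<VirtualHost *:80>
    ServerName example.com

    # Forward the database paths straight to CouchDB
    ProxyPass        /couchdb/ http://127.0.0.1:5984/
    ProxyPassReverse /couchdb/ http://127.0.0.1:5984/

    # Everything else goes to the web application as usual
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>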