I have a scenario where the server needs to make an authorization request before the actual request, so one request is served by two different services.
An Nginx location has to be handled by Auth-Service first; if the response status is 200 OK, the request should be forwarded to Feature-Service. Otherwise, if the response status is 401, that status should be returned to the front end.
upstream auth_service {
server localhost:8180;
}
upstream feature_service {
server localhost:8080;
}
location /authAndDo {
# suggest here
}
A code snippet in nginScript (njs) would also be OK.
Specifically for this purpose, http://nginx.org/r/auth_request exists, provided by the module documented at http://nginx.org/docs/http/ngx_http_auth_request_module.html (not built by default).
It lets you put authentication, through a subrequest, into any location you want, effectively separating the authentication from the actual resource.
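A minimal sketch of what that location could look like, reusing the upstreams from the question (the internal /_auth_check path is a made-up name):

location /authAndDo {
    # The subrequest runs first; the proxy_pass below is reached only
    # when the subrequest comes back with HTTP 200.
    auth_request /_auth_check;
    proxy_pass http://feature_service;
}

location = /_auth_check {
    # Only reachable via auth_request, never from the outside.
    internal;
    proxy_pass http://auth_service;
    # The auth check usually needs headers only, not the request body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

If the subrequest returns 401 (or 403), auth_request sends that status back to the client as-is, which is exactly the behaviour asked for; any other non-2xx response is treated as an error.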
In general, this is not something the web server can freely do. A 401 is a response to the front end and carries a WWW-Authenticate response header; HTTP 401 has an RFC specification, and users and browsers are expected to understand the message. Develop the web application according to your needs, or customize the 401 error page. The Nginx documentation describes how a 401 is handled.
The auth_request module in the Nginx community edition only lets the request through if the subrequest returns HTTP 200; for a 401 it will, by default, do nothing more than return the 401, and other response headers are not processed, in order to protect the application and its users. The community edition does not even support all HTTP/2 features, so things can get worse.
The Apache2 web server has full HTTP/2 support and allows a custom 401 location in its auth module, but that works properly in only a few browsers; others simply fail to load the page. People have asked before, on various Stack Exchange sites, how to make the Apache2 approach work in all browsers.
About the most you can do on Nginx is redirect the error page:
error_page 401 /401.html;
location ~ (401\.html)$ {
alias /usr/share/nginx/html/$1;
}
Another way may be to use a reverse proxy in front of another server, as people discuss on GitHub. I cannot guarantee that the page will load in every browser.
Related
I've used several reverse proxies over time, but NGinx blew me away with its behaviour. I recently had to use NGinx after years of using HAProxy (as a K8s Ingress), and I'm stuck with no solution in sight.
Right from the beginning, the behaviour was different when the session was lost. An HTTP 401 would tell the client (a single-page JavaScript application) to inform the user that the session was lost and he/she has to log in again. Instead of sending the 401 to the browser, it sends a 307 to the login page, with the wrong verb (POST), since the request that failed with 401 was a POST request.
The best way to troubleshoot would be in isolation, so I installed the version from the repository (Linux Mint 20) and registered a simple reverse-proxy entry in /etc/nginx/conf.d:
server {
listen 80;
location / {
proxy_pass http://localhost:8080;
}
}
Unfortunately (or fortunately) the issue manifested itself right away: as soon as I removed the cookies and the server sent a 401 "Please log in", the browser showed a 307 on the request that failed with 401.
Expected behavior:
Browser sends "POST /ping"
Browser receives "401 on /ping"
Current behavior:
Browser sends "POST /ping"
Browser receives "307 /auth" and then executes /auth with POST (not sure how it knows /auth is the login page)
Any idea how to disable this behavior?
When I post data using the POST method to the WooCommerce API, I am getting a CORS issue:
Access to fetch at 'http://localhost/wordpress/wc-api/v3/customers?oauth_consumer_key=ck_64d88e1fa3516e9f5a06b6053f02976a534d3f8f&oauth_nonce=zsu3ysEnFHhvrZt4Nc7H66Dgu28H20K7&oauth_signature_method=HMAC-SHA256&oauth_timestamp=1562587817&oauth_version=1.0&oauth_signature=KtFxvyQNklUlfCi6rNWyJ0DEJ6AS2ZbwbO44u%2FEqxG4%3D' from origin 'http://localhost:8100' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.
You have to set an Access-Control-Allow-Origin header on each response from the server if the server is on a different domain than the app making the requests (the server sets it as a response header). Adding that header tells the browser that the external origin "localhost:8100" is allowed to make those requests.
You cannot circumvent this requirement in vanilla browsers, because it is a built-in security feature designed to reduce cross-origin attacks.
PS: different ports on the same domain are considered different origins. Thus a page on example.com will run into the same CORS error if it makes a request to example.com:8100. The same goes for localhost, or any other domain.
Example code from an Apache2 web server .conf file that I personally use to set these headers:
SetEnvIf Origin "^http(s)?://(.+\.)?(staging\.xxx\.com|xxx\.com|xxx\.local|xxx\.local:4200|a2\.local)$" origin_is=$0
Header always set Access-Control-Allow-Origin %{origin_is}e env=origin_is
Just replace the xxx.com domains with localhost:8100 or whatever else you need in that pattern (if you are using the Apache web server).
As a result, the Chrome network tab should show an Access-Control-Allow-Origin header attached to the response.
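Since most of this thread revolves around Nginx, here is a rough, hedged equivalent for nginx.conf; the origin list and upstream port are placeholders, and a production preflight handler typically also needs Access-Control-Allow-Methods and Access-Control-Allow-Headers:

# http context: echo the Origin back only when it is whitelisted.
map $http_origin $cors_origin {
    default "";
    "~^http://localhost:8100$" $http_origin;
}

server {
    listen 80;

    location / {
        # Empty values are skipped, so unknown origins get no CORS header.
        add_header Access-Control-Allow-Origin $cors_origin always;

        # Give preflights the "HTTP ok status" the error message complains about.
        if ($request_method = OPTIONS) {
            return 204;
        }

        proxy_pass http://localhost:8080;
    }
}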
Firebase fails to redirect any HTTP POST request to an HTTPS POST. For example:
POST / HTTP/1.1
Host: apis.mydomain.com
is redirected by Firebase NGINX to
GET / HTTP/1.1
Host: apis.mydomain.com
If you are explicit about https, then NGINX works properly: POST -> POST.
So when the request hits Firebase Hosting and is redirected to a Firebase Function behind an https endpoint, the method can collapse from POST into GET.
Looking closer at the headers inside the Firebase Cloud Function, the protocol always shows as http instead of https.
I'm assuming this is an internal issue that I cannot modify; however, it is a problem for what I am doing, especially since I cannot modify the NGINX that is handling my http(s) requests.
If you redirect with a 301 or 302 status code, the POST is downgraded to GET.
You need to use a 307 status to maintain POST across the redirect. See this document for details.
On Nginx, you will need to use a return statement. For example:
return 307 https://$host$request_uri;
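Putting it together, a minimal port-80 server block for the host from the example above would be:

server {
    listen 80;
    server_name apis.mydomain.com;

    # 307 instructs clients to repeat the request with the same method and body.
    return 307 https://$host$request_uri;
}

If the redirect is meant to be permanent, 308 is the method-preserving counterpart of 301.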
I'm trying to determine what a client should do with headers on receiving a 303 (See Other) from the server. Specifically, what should be done with the Authorization header that was sent on the initial request?
Here's the problem: the client makes a request to myserver.com (HTTP request method is not relevant here) and the server at myserver.com responds with a 303 and the Location header contains otherserver.com/some_resource/. Tools like Postman and curl will follow the redirect by passing all the same headers in the subsequent request to otherserver.com. I haven't found a way to make these tools drop the headers.
In the case I've described, sending the Authorization header to otherserver.com seems like a security risk: otherserver.com now knows my token, and possibly which host it can be used on, so the token is compromised. This can also cause errors, depending on how the destination host is configured. In the case where the redirect is to another resource on the same host (i.e., myserver.com), the Authorization header will (probably) need to be sent, and because it's the same host nothing is compromised.
Effectively, in different situations it seems that the correct behaviour is different. The relevant section in the RFC does not address this issue. In developing my own API, I've written documentation telling API clients to drop the Authorization header on redirect to otherserver.com. However, based on mucking around with curl and Postman, it's not clear to me either (a) what the default behaviour is for a typical HTTP client library or (b) whether HTTP client libraries permit easy modification of the HTTP headers before following a 303 redirect. As a result, it's possible my suggestion isn't practical. I also know of no way for the server to instruct the client as to what it should do with headers on following the 303 redirect.
What should an HTTP client do with the headers when it follows a 303 redirect? Who is responsible for deciding whether to use the same headers on the redirect, the HTTP client or the server?
You can argue that when sending the 303 with otherserver.com in the Location header, myserver.com trusted otherserver.com to handle your token. It could have sent the token to it in the background as well. From the client's perspective, the client trusts myserver.com to handle the token, store and verify it securely, and so on. If myserver.com decides to send it on to otherserver.com, should the client override that? In this case it can, of course, but in general I don't think it should.
As an attacker does not control the response headers from myserver.com, which is a legitimate resource, I think it is in general secure to send the token to the other server it specifies by default, unless you have some good reason not to (say, an explicit policy on the client).
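For what it's worth, recent curl releases already lean in that cautious direction: a custom Authorization header set with -H is dropped by default once a redirect leaves the original host, and --location-trusted opts back in. A quick illustration (URL and token are placeholders):

# Follows redirects; the Authorization header stops being forwarded
# as soon as the redirect crosses to a different host.
curl -L -H "Authorization: Bearer TOKEN" https://myserver.com/some_resource/

# Explicitly allow credentials to accompany cross-host redirects.
curl --location-trusted -u user:pass https://myserver.com/some_resource/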
So I am faced with a question, and I believe it isn't possible, but here goes.
I have a NetScaler load balancing 2 web servers via a load-balancing VIP bound to a content switch that serves up other web servers.
I have a rewrite policy that inserts HSTS into 200 responses, and this works without issue. When the back-end servers go down, we logically send a 503 to the client, but the client wants HSTS included in this response as well.
Is this at all possible? RFC 6797 describes HSTS for a serving web site, and in this case the 503 is generated by the NetScaler, so I wanted to confirm whether this is a possibility.
Any help is appreciated.
Yes, this is possible, i.e. use a content switch. Bind your web application as a policy with the highest priority (lower value == higher priority).
For instance, policy 10 always evaluates to true and points to your web server, so policy 20 is never processed. If the web server referenced by policy 10 goes down, policy 10 is suddenly no longer evaluated, and policy 20 is processed instead.
In policy 20 you put a responder policy of type "respond with": add your own friendly error message as raw HTTP data and include HSTS in the HTTP header. Your load balancer will only respond with a bare 503 if it has no policy left to process. If you like, you can respond with a 503 message in policy 20 as well, but I would recommend creating a proper "down page" instead of a 503.
If you are generating your 503 page using a responder action, you will have to add the HSTS header to the response manually. Responder actions are annoying that way: they shortcut most of the output path that normal backend responses go through. The NetScaler simply returns the string verbatim, without examining or changing the contents. So a respondwith action returning this expression would add your header:
"HTTP/1.1 503 Service Unavailable\r\n"+
"Content-Type: text/html;charset=utf-8\r\n"+
"Strict-Transport-Security: max-age=31536000\r\n"+
"\r\n"+
"<!doctype html><html><body>
Backend server for " + HTTP.REQ.HOSTNAME + HTTP.REQ.URL.HTTP_URL_SAFE + " is not responding.
</body></html>"