Nginx Rewrite rule not working for POST Request

I have an nginx server setup with this configuration:
location /company {
    rewrite ^/company$ /company/ redirect;
    rewrite ^/company/login$ /company/login/ redirect;
    rewrite ^/company/(.*) /admin/$1 break;
}
What I want is to redirect URLs such as http://localhost/company/login to admin/company/login/. This works fine for a GET request; I am able to see the page.
However, when I make an AJAX POST request to the URL http://localhost/company/login/login.ajax.php, I get a 404 error.
My folder structure is laid out as follows:
project
|
|_ admin
   |_ company
      |_ login
      |  |_ login.ajax.php
      |_ index.php
      |_ functions.js
My nginx error log shows the following:
2018/07/31 14:26:22 [error] 16966#0: *8 FastCGI sent in stderr: "Primary script unknown" while reading response header
from upstream, client: 192.168.33.1, server: 192.168.33.10, request: "POST /company/login/login.ajax.php HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock:", host: "192.168.33.10",
referrer: "http://192.168.33.10/company/login/"
Can anyone suggest how I should configure my nginx rewrite rules to get this working?
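For what it's worth, a minimal sketch of one direction to try, assuming (from the error log and the folder tree above) that PHP-FPM listens on /var/run/php/php7.1-fpm.sock, that the server's root points at the project folder, and that the on-disk target really is admin/company/...; none of these paths are confirmed beyond what the question shows. The idea is to rewrite with "last" instead of "break", so the rewritten URI is re-matched against a \.php$ location and SCRIPT_FILENAME is built from the rewritten path:

location /company {
    rewrite ^/company$ /company/ redirect;
    rewrite ^/company/login$ /company/login/ redirect;
    # "last" restarts location matching with the rewritten URI,
    # so /company/login/login.ajax.php can reach the PHP block below.
    rewrite ^/company/(.*)$ /admin/company/$1 last;
}

location ~ \.php$ {
    include fastcgi_params;
    # Must resolve to the rewritten on-disk path; a mismatch here is
    # what makes PHP-FPM report "Primary script unknown".
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
}

With "break" the rewritten URI is processed inside the /company location itself, so it never passes through whatever \.php$ location normally prepares the FastCGI parameters; whether that is the actual cause here depends on the rest of the config, which the question doesn't show.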

Related

Request on k8s NGINX ingress controller fails with "upstream sent too big header while reading response header from upstream"

When I try to send a request through my NGINX ingress with headers larger than 4k, it returns a 502 error:
[error] 39#39: *356 upstream sent too big header while reading response header from upstream,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
subrequest: "/_external-auth-Lw",
upstream: "<uri>",
host: "<host>"
[error] 39#39: *356 auth request unexpected status: 502 while sending to client,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
host: "<host>"
I've followed instructions that supposedly resolve this issue by configuring the proxy-buffer-size in the ingress controller (nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"), but it doesn't seem to work. The only thing I can think of is that it has something to do with the proxy-buffer-size of the subrequest, which doesn't seem to get set.
The proxy_buffering directive wasn't being set on the /_external-auth-Lw location in the generated NGINX config. The issue has been resolved as of v0.14.0.
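Roughly, the nginx-level directives that the annotation is meant to influence for that auth location look like the sketch below; the sizes are illustrative and the upstream name is hypothetical, not taken from the ingress controller's generated config.

location = /_external-auth-Lw {
    proxy_buffering on;
    # proxy_buffer_size must be large enough to hold the upstream's
    # response headers, otherwise nginx logs "upstream sent too big header".
    proxy_buffer_size 16k;
    proxy_buffers 4 16k;
    proxy_busy_buffers_size 16k;
    proxy_pass http://auth-backend;   # hypothetical upstream name
}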

nginx forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error

I want to set up nginx as a forward proxy, much as Squid works.
This is my server block:
server {
    listen 3128;
    server_name localhost;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log is
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests (after, say, the 2nd or 3rd attempt), I start getting these errors in my nginx logs:
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 addresses seem to be causing the issue with the upstream servers.
Switching IPv6 resolution off made those errors disappear.
resolver 8.8.8.8 ipv6=off;
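Put back into the server block from the question (same port and DNS server), the fix looks roughly like this:

server {
    listen 3128;
    server_name localhost;

    location / {
        # ipv6=off makes the resolver return only A records; the AAAA
        # answers were producing "Address not available" connect() errors.
        resolver 8.8.8.8 ipv6=off;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}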

Nginx custom 404 page fails when in a higher directory

I am trying to create a custom 404 page with Nginx. My nginx.conf file looks like this:
http {
    server {
        location / {
            root /nginx/html;
        }

        error_page 404 /custom_404.html;

        location = /custom_404.html {
            root /nginx/html;
            internal;
        }
    }
}
When I check with a URL like "/page_that_doesnt_exist", it works fine.
But if I add a trailing "/", e.g. "/page_that_doesnt_exist/" or "/page_that_doesnt_exist/and_more_stuff", it fails, returning a blank screen (not the default nginx 404 page).
When I check the server messages, it tells me the following:
"GET /page_that_doesnt_exist/ HTTP/1.1" 404 280 "-"
[error] 6#6: *3 open() "/nginx/html/page_that_doesnt_exist/error404.js" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET /page_that_doesnt_exist/error404.js HTTP/1.1", host: "localhost:8080", referrer: "http://localhost:8080/page_that_doesnt_exist/"
"GET /page_that_doesnt_exist/error404.js HTTP/1.1" 404 280 "http://localhost:8080/page_that_doesnt_exist/"
I take this to mean that, instead of searching for my "custom_404.html" file in the "nginx/html" folder, nginx is appending "page_that_doesnt_exist" to the directory, so it now searches in "nginx/html/page_that_doesnt_exist/" and doesn't find a "custom_404.html" file there.
What am I doing wrong here?
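One detail visible in the log, for what it's worth: the 404 response itself is served (the first line shows a 404 with a body), and what actually fails is a follow-up request for error404.js, which the error page appears to reference with a relative path, so the browser resolves it against /page_that_doesnt_exist/. Referencing the script with an absolute path in the HTML would be the simplest change; a sketch of handling it on the nginx side instead, assuming error404.js sits next to custom_404.html in /nginx/html:

# Sketch only: answer any .../error404.js request from the site root,
# so relative references inside the error page keep working.
location ~ /error404\.js$ {
    root /nginx/html;
    try_files /error404.js =404;
}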

nginx fails to use proxy_pass when handling error

I am trying to configure nginx to serve up error pages from an s3 bucket.
To that end my configuration looks like this:
location / {
    error_page 404 = @fallback;
}

location @fallback {
    rewrite ^ /my-s3-bucket/404.html;
    proxy_pass https://s3.ap-northeast-2.amazonaws.com;
}
My expectation is that anything that hits the website and is not found is then sent to the @fallback location. I then want to rewrite the URL to the actual location of my 404 page and pass it on to the S3 bucket. I don't want to just 302 redirect to the 404 page.
The problem is that the proxy_pass directive is not executed. Instead, nginx just looks for the rewritten URI locally.
See my error log below:
2019/01/07 03:05:42 [error] 85#85: *3 open() "/etc/nginx/html/sdfd" failed (2: No such file or directory), client: 172.17.0.1, server: www.dev.mywebsite.com.au, request: "GET /sdfd HTTP/2.0", host: "www.dev.mywebsite.com.au"
2019/01/07 03:05:42 [error] 85#85: *3 open() "/etc/nginx/html/my-s3-bucket/404.html" failed (2: No such file or directory), client: 172.17.0.1, server: www.dev.mywebsite.com.au, request: "GET /sdfd HTTP/2.0", host: "www.dev.mywebsite.com.au"
I made a request to www.dev.mywebsite.com.au/sdfd, which wasn't found. '/sdfd' was rewritten to '/my-s3-bucket/404.html', but instead of proxying that to https://s3.ap-northeast-2.amazonaws.com, nginx looks for it in the local /etc/nginx/html directory.
My nginx version is 1.15.2
Use rewrite ... break if you want the rewritten URI to be processed within the same location block; see the nginx documentation for the rewrite directive for details.
For example:
location @fallback {
    rewrite ^ /error/404.html break;
    proxy_pass https://example.com;
}
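Applied to the configuration in the question (same bucket path and S3 endpoint), that would look roughly like this:

location @fallback {
    # "break" keeps processing inside this location, so proxy_pass is
    # used for the rewritten URI instead of a local file lookup.
    rewrite ^ /my-s3-bucket/404.html break;
    proxy_pass https://s3.ap-northeast-2.amazonaws.com;
}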

How to resolve 'no live upstreams' error in nginx?

I have set up a load balancer with nginx and use a health check in my configuration file. When I add the 'match' parameter to the health check and restart the server, I get 'no live upstreams' in the error log, even though all the backend servers are available to serve requests.
Why does this error occur?
My match block in the nginx.conf file:
match check_server {
    status 200;
    header Content-Type = text/html;
}
load_balancer.conf
health_check interval=2 passes=3 fails=2 match=check_server;
error in error.log:
2019/01/17 13:16:26 [error] 9853#9853: *13 no live upstreams while connecting to upstream, client: xxx.xxx.x.x, server: xxx.com, request: "GET / HTTP/1.1", upstream: "https://backends/", host: "xxx.com"
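One thing worth checking, as an assumption rather than a confirmed diagnosis: header Content-Type = text/html requires an exact match, so a backend that replies with something like text/html; charset=UTF-8 fails the check, gets marked unhealthy, and can eventually leave the upstream group with no live peers, which is exactly the "no live upstreams" error. The match block also accepts a regex test, which tolerates a charset suffix; a sketch:

match check_server {
    status 200;
    # "~" performs a regex match, so "text/html; charset=UTF-8" also passes.
    header Content-Type ~ text/html;
}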
